Tags: browser

Saturday, February 27th, 2021

The web is something different - daverupert.com

The web is so much bigger than the little boxes we try to put it in. The web is good at many things and not great (yet) at others. The web is a snowball rolling down hill, absorbing other technologies along the way. The web is an interactive window across space and time, a near instant connection to anyone on the planet. The web is something different. I wish we’d see the web more for itself, not defined by its nearest neighbor or navel-gazing over some hypothetical pathway we could have gone down decades ago.

Tuesday, February 23rd, 2021

Introducing State Partitioning - Mozilla Hacks - the Web developer blog

This is a terrific approach to tackling cross-site surveillance. I’d love it to be implemented in all browsers. I can imagine Safari implementing this. Chrome …we’ll see.

Friday, February 12th, 2021

supercookie • workwise

Favicons are snitches.

Thursday, February 11th, 2021

WorldBrain’s Memex

A browser extension for bookmarking and annotation.

I like the name.

Wednesday, February 3rd, 2021

Authentication

Two-factor authentication is generally considered A Good Thing™️ when you’re logging in to some online service.

The word “factor” here basically means “kind”, so you’re doing two kinds of authentication. Typical factors are:

  • Something you know (like a password),
  • Something you have (like a phone or a USB key),
  • Something you are (biometric Black Mirror shit).

Asking for a password and an email address isn’t two-factor authentication. They’re two pieces of identification, but they’re the same kind (something you know). Same goes for supplying your fingerprint and your face: two pieces of information, but of the same kind (something you are).

None of these kinds of authentication are foolproof. All of them can change. All of them can be spoofed. But when you combine factors, it gets a lot harder for an attacker to breach both kinds of authentication.

The most common kind of authentication on the web is password-based (something you know). When a second factor is added, it’s often connected to your phone (something you have).

Every security bod I’ve talked to recommends using an authenticator app for this if that option is available. Otherwise there’s SMS—short message service, or text message to most folks—but SMS has a weakness. Because it’s tied to a phone number, technically you’re only proving that you have access to a SIM (subscriber identity module), not a specific phone. In the US in particular, it’s all too easy for an attacker to use social engineering to get a number transferred to a different SIM card.

Still, authenticating with SMS is an option as a second factor of authentication. When you first sign up to a service, as well as providing the first-factor details (a password and a username or email address), you also verify your phone number. Then when you subsequently attempt to log in, you input your password and on the next screen you’re told to input a string that’s been sent by text message to your phone number (I say “string” but it’s usually a string of numbers).

There’s an inevitable friction for the user here. But then, there’s a fundamental tension between security and user experience.

In the world of security, vigilance is the watchword. Users need to be aware of their surroundings. Is this web page being served from the right domain? Is this email coming from the right address? Friction is an ally.

But in the world of user experience, the opposite is true. “Don’t make me think” is the rallying cry. Friction is an enemy.

With SMS authentication, the user has to manually copy the numbers from the text message (received in a messaging app) into a form on a website (in a different app—a web browser). But if the messaging app and the browser are on the same device, it’s possible to improve the user experience without sacrificing security.

If you’re building a form that accepts a passcode sent via SMS, you can use the autocomplete attribute with a value of “one-time-code”. For a six-digit passcode, your input element might look something like this:

<input type="text" maxlength="6" inputmode="numeric" autocomplete="one-time-code">

With one small addition to one HTML element, you’ve saved users some tedious drudgery.
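In context, the whole field might look something like this sketch (the action URL, the id, and the label wording are all placeholders, not prescriptions):

<form method="post" action="/verify">
<!-- /verify is a placeholder action, not a real endpoint -->
<label for="otp">Enter the code we texted you</label>
<input type="text" id="otp" name="code" maxlength="6" inputmode="numeric" autocomplete="one-time-code">
<button type="submit">Verify</button>
</form>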

There’s one more thing you can do to improve security, but it’s not something you add to the HTML. It’s something you add to the text message itself.

Let’s say your website is example.com and the text message you send reads:

Your one-time passcode is 123456.

Add this to the end of the text message:

@example.com #123456

So the full message reads:

Your one-time passcode is 123456.

@example.com #123456

The first line is for humans. The second line is for machines. Using the @ symbol, you’re telling the device to only pre-fill the passcode for URLs on the domain example.com. Using the # symbol, you’re telling the device the value of the passcode. Combine this with autocomplete="one-time-code" in your form and the user shouldn’t have to lift a finger.

I’m fascinated by these kinds of emergent conventions in text messages. Remember that the @ symbol and # symbol in Twitter messages weren’t ideas from Twitter—they were conventions that users started and the service then adopted.

It’s a bit different with the one-time code convention as there is a specification brewing from representatives of both Google and Apple.

Tess is leading from the Apple side and she’s got another iron in the fire to make security and user experience play nicely together using the convention of the /.well-known directory on web servers.

You can add a URL for /.well-known/change-password which redirects to the form a user would use to update their password. Browsers and password managers can then use this information if they need to prompt a user to update their password after a breach. I’ve added this to The Session.

Oh, and on that page where users can update their password, the autocomplete attribute is your friend again:

<input type="password" autocomplete="new-password">

If you want them to enter their current password first, use this:

<input type="password" autocomplete="current-password">

All of the things I’ve mentioned—the autocomplete attribute, origin-bound one-time codes in text messages, and a well-known URL for changing passwords—have good browser support. But even if they were only supported in one browser, they’d still be worth adding. These additions do absolutely no harm to browsers that don’t yet support them. That’s progressive enhancement.

Tuesday, January 26th, 2021

The unreasonable effectiveness of simple HTML – Terence Eden’s Blog

I love the story that Terence relates here. It reminds me of all the fantastic work that Anna did documenting game console browsers.

Are you developing public services? Or a system that people might access when they’re in desperate need of help? Plain HTML works. A small bit of simple CSS will make it look decent. JavaScript is probably unnecessary – but can be used to progressively enhance stuff.

Wednesday, January 20th, 2021

Get safe

The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.

I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.

Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.

That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.

<a href="/items/id">

Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.

<form method="post" action="/items/id/edit">

But not all forms should use POST. A search form should use GET.

<form method="get" action="/search">
<input type="search" name="term">

When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.

/search?term=value

I sometimes see the GET method used incorrectly:

  • “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want (see the sketch after this list).
  • “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
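
For that log out case, a minimal sketch of the form might be all of this (the /logout endpoint is an assumption; use whatever URL actually does the logging out):

<form method="post" action="/logout">
<!-- /logout is a placeholder endpoint -->
<button type="submit">Log out</button>
</form>

Style the button to look like a link if that’s what the design calls for.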

When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.

That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.

The very first of the Web Platform Design Principles is Put user needs first:

If a trade-off needs to be made, always put user needs above all.

The current abuse of GET requests is damage that the web needs to route around.

Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.

Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.

Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.

But we needn’t wait for the browsers to make the web a safer place for users.

Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.

What if we refused?

Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.

And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.

Saturday, January 16th, 2021

HTML Video Sources Should Be Responsive | Filament Group, Inc.

Removing media support from HTML video was a mistake.

Damn right! It was basically Hixie throwing a strop, trying to sabotage responsive images. Considering how hard it usually is to remove a shipped feature from browsers, it’s bizarre that a good working feature was pulled out of production.

Wednesday, January 6th, 2021

Should The Web Expose Hardware Capabilities? — Smashing Magazine

This is a very thoughtful and measured response to Alex’s post Platform Adjacency Theory.

Unlike Alex, the author doesn’t fire off cheap shots.

Also, I’m really intrigued by the idea of certificate authorities for hardware APIs.

Wednesday, December 30th, 2020

Here Lies Flash » Mike Industries

Flash, from the very beginning, was a transitional technology. It was a language that compiled into a binary executable. This made it consistent and performant, but was in conflict with how most of the web works. It was designed for a desktop world which wasn’t compatible with the emerging mobile web. Perhaps most importantly, it was developed by a single company. This allowed it to evolve more quickly for awhile, but goes against the very spirit of the entire internet. Long-term, we never want single companies — no matter who they may be — controlling the very building blocks of the web.

Friday, December 18th, 2020

Conditional JavaScript - JavaScript - Dev Tips

This is a good round-up of APIs you can use to decide if and how much JavaScript to load. I might look into using storage.estimate() in service workers to figure out how much gets pre-cached.

Wednesday, December 16th, 2020

Audio

I spent the last couple of weekends rolling out a new feature on The Session. It involves playing audio in a web page. No big deal these days, right? But the history involves some old file formats…

The first venerable format is ABC notation. File extension: .abc, MIME type: text/vnd.abc. It’s an ingenious text format for musical notation using ASCII. The metadata of the piece of music (title, metre, key, and so on) is defined in colon-separated key/value pairs. Then the contents are encoded with letters: A, B, C, etc. Uppercase and lowercase denote different octaves. Numbers can be used for note lengths.
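
To give a flavour, here’s a minimal sketch of a tune in ABC (the tune itself is made up purely for illustration):

% a made-up tune, just for illustration
X:1
T:An Example Tune
M:4/4
L:1/8
K:D
ABcd efga|bagf edcB|ABcd efga|bafe d2d2|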

The format was created by Chris Walshaw in 1997 when dial-up was the norm. With ABC, people were able to swap tunes on email lists or bulletin boards without transferring weighty image or sound files. If you had ABC software on your computer, you could convert that lightweight text file into sheet music …or audio.

That brings me to the second old format: midi files. File extension: .mid, MIME type: audio/midi. Like ABC, it’s a lightweight format for encoding the instructions for music instead of the music itself.

Think of it like SVG: instead of storing the final pixels of an image, SVG stores the instructions for drawing the image. The instructions in a midi file are like “play this note for this long on this instrument.” Again, as with ABC, you need some software to turn the instructions into sound.

There was a time when lots of software could play midi files. Quicktime on the Mac, for example. You could even embed midi files in web pages. I mean literally embed them …with the embed element. No Geocities page was complete without an autoplaying midi file.

On The Session, people submit tunes in ABC format. Then, using the amazing ABCJS JavaScript library, the ABC is turned into SVG on the fly! For years I’ve also offered midi files, generated on the server from the ABC notation.

But times have changed. These days it’s hard to find software that plays midi files. Quicktime doesn’t do it anymore. And you’d need to go to the app store on iOS to find a midi file player. It’s time to phase out the midi files on The Session.

I still want to provide automatically-generated audio though. Fortunately ABCJS gives me a way to do this. But instead of using the old technology of midi files, it uses a more modern browser feature: the Web Audio API.

The end result sounds like a midi file, but the underlying technique is more like a synthesiser. There’s a separate mp3 file for each note. The JavaScript figures out how long each “sample” needs to be played for, strings them all together, and outputs them with Web Audio. So you’ve got cutting-edge browser technology recreating a much older file format. Paul Rosen—the creator of ABCJS—has a presentation explaining how it all works under the hood.

Not only is there a separate short mp3 file for each note in seven octaves, but if you want the sound of a different instrument, you need samples for all seven octaves in that instrument. They’re called soundfonts.

Paul provides soundfonts for ABCJS. It’s a repo that was forked from this repo from Benjamin Gleitzman. And here’s where it gets small worldy…

The reason why Benjamin has a repo of soundfonts is because he needed to create midi-like audio in the browser. He wanted to do this for a project on September 28th and 29th, 2013 …at Science Hack Day San Francisco!

I was there too—working on my own audio-related hack—and I remember the excellent (and winning) hack that Benjamin worked on. It was called Symphony of Satellites and it’s still online along with the promo video. Here’s Benjamin’s post-hackday write-up from seven years ago.

It’s rare that the worlds of the web and Irish music cross over. When I got to meet Paul—creator of ABCJS—at a web conference a couple of years ago it kind of blew my mind. Last weekend when I set out to dabble with a feature on The Session, I certainly didn’t expect to stumble on a connection to Science Hack Day! (Aside: the first Science Hack Day was ten years ago—yowzers!)

Anyway, I was able to get that audio playback working on The Session. Except for some weirdness on iOS that I had to fix. But that’s a hack for another day.

Monday, December 14th, 2020

Lists

We often have brown bag lunchtime presentations at Clearleft. In the Before Times, this would involve a trip to Pret or Itsu to get a lunch order in, which we would then proceed to eat in front of whoever was giving the presentation. Often it’s someone from Clearleft demoing something or playing back a project, but whenever possible we’d rope in other people to swing by and share what they’re up to.

We’ve continued this tradition since making the switch to working remotely. Now the brown bag presentations happen over Zoom. This has two advantages. Firstly, if you don’t want the presenter watching you eat your lunch, you can switch your camera off. Secondly, because the presenter doesn’t have to be in Brighton, there’s no geographical limit on who could present.

Our most recent brown bag was truly excellent. I asked Léonie if she’d be up for it, and she very kindly agreed. As well as giving us a whirlwind tour of how assistive technology works on the web, she then invited us to observe her interacting with websites using a screen reader.

I’ve seen Léonie do this before and it’s always struck me as a very open and vulnerable thing to do. Think about it: the audience has more information than the presenter. We can see the website at the same time as we’re listening to Léonie and her screen reader.

We got to nominate which websites to visit. One of them—a client’s current site that we haven’t yet redesigned—was a textbook example of how important form controls are. There was a form where almost everything was hunky-dory: form fields, labels, it was all fine. But one of the inputs was a combo box. Instead of using a native input paired with a datalist, this was made with JavaScript. Because it was lacking the requisite ARIA additions to make it accessible, it was pretty much unusable to Léonie.

And that’s why you use the right HTML element wherever possible, kids!
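
For the record, a minimal sketch of the native pattern is pretty terse: an input with a list attribute pointing at a datalist (the field name and options here are made up):

<label for="tune-type">Tune type</label>
<input type="text" id="tune-type" name="type" list="tune-types">
<datalist id="tune-types">
<!-- the options are just examples -->
<option value="jig">
<option value="reel">
<option value="hornpipe">
</datalist>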

The other site Léonie visited was Clearleft’s own. That was all fine. Léonie demonstrated how she’d form a mental model of a page by getting the screen reader to read out the headings. Interestingly, the nesting of headings on the Clearleft site is technically wrong—there’s a jump from an h1 to an h3—probably a result of the component-driven architecture where you don’t quite know where in the page a heading will appear. But this didn’t seem to be an issue. The fact that headings are being used at all was the more important fact. As Léonie said, there’s a lot of incorrect HTML out there so it’s no wonder that screen readers aren’t necessarily sticklers for nesting.

I’ve said it before and I’ll say it again: if you’re using headings, labelling form fields, and providing alternative text for images, you’re already doing a better job than most websites.

Headings weren’t the only way that Léonie got a feel for the page architecture. Landmark roles—like header and nav—really helped too. Inside the nav element, she also heard how many items there were. That’s because the navigation was marked up as a list: “List: six items.”

And that reminded me of the Webkit issue. On Webkit browsers like Safari, the list on the Clearleft site would not be announced as a list. That’s because the list’s bullets have been removed using CSS.

Now this isn’t the only time that screen readers pay attention to styling. If you use display: none to hide an element from sight, it will also be unavailable to screen reader users. Makes sense. But removing the semantic meaning of lists based on CSS? That seems a bit much.

There are good reasons for it though. Here’s a thread from James Craig on where this decision came from (James, by the way, is an absolute unsung hero of accessibility). It turns out that developers went overboard with lists a while back and that’s why we can’t have nice things. In over-compensating for divitis, developers ended up creating listitis, marking up anything vaguely list-like as an unordered list with styling adjusted. That was very annoying for screen reader users trying to figure out what was actually a list.

And James also asks:

If a sighted user doesn’t need to know it’s a list, why would a screen reader user need to know or want to know? Stated another way, if the visible list markers (bullets, image markers, etc.) are deemed by the designers to be visually burdensome or redundant for sighted users, why burden screen reader users with those semantics?

That’s a fair point, but the thing is …bullets maketh not the list. There are many ways of styling something that is genuinely a list that doesn’t involve bullets or image markers. White space, borders, keylines—these can all indicate visually that something is a list of items.

If you look at, say, the tunes page on The Session, you can see that there are numerous lists—newest tunes, latest comments, etc. In this case, as a sighted visitor, you would be at an advantage over a screen reader user in that you can, at a glance, see that there’s a list of five items here, a list of ten items there.

So I’m not disagreeing with the thinking behind the Webkit decision, but I do think the heuristics probably aren’t going to be quite good enough to make the call on whether something is truly a list or not.

Still, while I used to be kind of upset about the Webkit behaviour, I’ve become more equanimous about it over time. There are two reasons for this.

Firstly, there’s something that Eric said:

We have come so far to agree that websites don’t need to look the same in every browser mostly due to bugs in their rendering engines or preferences of the user.

I think the same is true for screen readers and other assistive technology: Websites don’t need to sound the same in every screen reader.

That’s a really good point. If we agree that “pixel perfection” isn’t attainable—or desirable—in a fluid, user-centred medium like the web, why demand the aural equivalent?

The second reason why I’m not storming the barricades about this is something that James said:

Of course, heuristics are imperfect, so authors have the ability to explicitly override the heuristically determined role by adding role="list".

That means more work for me as a developer, and that’s …absolutely fine. If I can take something that might be a problem for a user, and turn it into something that’s a problem for me, I’ll choose to make it my problem every time.

I don’t have to petition Webkit to change their stance or update their heuristics. If I feel strongly that a list styled without bullets should still be announced as a list, I can specify that in the markup.

It does feel very redundant to write ul role="list". The whole point of having HTML elements with built-in semantics is that you don’t need to add any ARIA roles. But we did it for a while when new structural elements were introduced in HTML5—main role="main", nav role="navigation", etc. So I’m okay with a little bit of redundancy. I think the important thing is that you really stop and think about whether something should be announced as a list or not, regardless of styling. There isn’t a one-size-fits-all answer (hence why it’s nigh-on impossible to get the heuristics right). Each list needs to be marked up on a case-by-case basis.

And I wouldn’t advise spending too much time thinking about this either. There are other, more important areas to consider. Like I said, headings, forms, and images really matter. I’d prioritise those elements above thinking about lists. And it’s worth pointing out that Webkit doesn’t remove all semantic meaning from styled lists—it updates the role value from list to group. That seems sensible to me.

In the case of that page on The Session, I don’t think I’m guilty of listitis. Yes, there are seven lists on that page (two for navigation, five for content) but I’m reasonably confident that they all look like lists even without bullets or markers. So I’ve added role="list" to some ul elements.
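
In practice, that just means restating the role that the ul already implies:

<ul role="list">
<li>…</li>
<li>…</li>
</ul>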

As with so many things related to accessibility—and the web in general—this is a situation where the only answer I can confidently come up with is …it depends.

History of the Web - YouTube

I really enjoyed this trip down memory lane with Chris:

From the Web’s inception, an ancient to contemporary history of the Web.

Wednesday, December 2nd, 2020

Creating websites with prefers-reduced-data | Polypane Browser for Developers

There’s no browser support yet but that doesn’t mean we can’t start adding prefers-reduced-data to our media queries today. I like the idea of switching between web fonts and system fonts.
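
I haven’t tested this (nothing supports it yet), but I imagine the CSS would look something like this sketch: defaulting to a system font and only opting in to the web font when the user hasn’t asked for reduced data (the font name is a placeholder):

<style>
/* "Some Web Font" stands in for whatever web font you’re using */
body {
  font-family: sans-serif;
}
@media (prefers-reduced-data: no-preference) {
  body {
    font-family: "Some Web Font", sans-serif;
  }
}
</style>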

Monday, November 30th, 2020

Why The Web Is Such A Mess - YouTube

Tom gives a succinct history of the ongoing arms race between trackers and end users.

Thursday, November 19th, 2020

Standardizing `select` And Beyond: The Past, Present And Future Of Native HTML Form Controls — Smashing Magazine

While a handful of form controls can be easily styled by CSS, like the button element, most form controls fall into a bucket of either requiring hacky CSS or are still unable to be styled at all by CSS.

Despite form controls no longer taking a style or technical dependency on the operating system and using modern rendering technology from the browser, developers are still unable to style some of the most used form control elements such as select. The root of this problem lies in the way the specification was originally written for form controls back in 1995.

Stephanie goes back in time to tell the history of form controls on the web, and how that history has led to our current frustrations:

The current state of working with controls on the modern web is that countless developer hours are being lost to rewriting controls from scratch, as custom elements due to a lack of flexibility in customizability and extensibility of native form controls. This is a massive gap in the web platform and has been for years. Finally, something is being done about it.

Amen!

The Browser Company

It looks like something interesting is going on here. A new browser? It’s hard to tell from the vague marketing copy, but I suspect we’re not looking at a new rendering engine here so much as a new browser interface.

Tuesday, November 17th, 2020

Insecure …again

Back in March, I wrote about a dilemma I was facing. I could make the certificates on The Session more secure. But if I did that, people using older Android and iOS devices could no longer access the site:

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone.

In the end, I decided in favour of access. But now this issue has risen from the dead. And this time, it doesn’t matter what I think.

Let’s Encrypt are changing the way their certificates work and once again, it’s people with older devices who are going to suffer:

Most notably, this includes versions of Android prior to 7.1.1. That means those older versions of Android will no longer trust certificates issued by Let’s Encrypt.

This makes me sad. It’s another instance of people being forced to buy new devices. Last time ‘round, my dilemma was choosing between security and access. This time, access isn’t an option. It’s a choice between security and the environment (assuming that people are even in a position to get new devices—not an assumption I’m willing to make).

But this time it’s out of my hands. Let’s Encrypt certificates will stop working on older devices and a whole lotta websites are suddenly going to be inaccessible.

I could look at using a different certificate authority, one I’d have to pay for. It feels a bit galling to have to go back to the scammy world of paying for security—something that Let’s Encrypt has taught us should quite rightly be free. But accessing a website should also be free. It shouldn’t come with the price tag of getting a new device.

Monday, November 16th, 2020

The Core Web Vitals hype train

Goodhart’s Law applied to Google’s core web vitals:

If developers start to focus solely on Core Web Vitals because it is important for SEO, then some folks will undoubtedly try to game the system.

Personally, my beef with core web vitals is that they introduce even more unnecessary initialisms (see, for example, Harry’s recent post where he uses CWV metrics like LCP, FID, and CLS—alongside TTFB and SI—to look at PLPs, PDPs, and SRPs. I mean, WTF?).