Journal tags: post

Words I wrote in 2022

Here’s a highlight reel of some of my blog posts from 2022:

I also published the transcript of my conference talk, In And Out Of Style, a journey through the history of CSS.

Get safe

The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.

I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.

Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.

That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.

<a href="/items/id">

Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.

<form method="post" action="/items/id/edit">

But not all forms should use POST. A search form should use GET.

<form method="get" action="/search">
<input type="search" name="term">

When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.

/search?term=value

I sometimes see the GET method used incorrectly:

  • “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want (there’s an example after this list).
  • “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
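
For example, a log-out control could be marked up as a tiny form. This is just a sketch, and the action URL and class name are placeholders:

<form method="post" action="/logout">
<button class="looks-like-a-link">Log out</button>
</form>

A few lines of CSS can make that button look exactly like a link, but the state-changing action still travels as a POST request.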

When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.

That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.

The very first of the Web Platform Design Principles is Put user needs first:

If a trade-off needs to be made, always put user needs above all.

The current abuse of GET requests is damage that the web needs to route around.

Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.

Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.

Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.

But we needn’t wait for the browsers to make the web a safer place for users.

Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.

What if we refused?

Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.

And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.

A curl in every port

A few years back, Zach Bloom wrote The History of the URL: Path, Fragment, Query, and Auth. He recently expanded on it and republished it on the Cloudflare blog as The History of the URL. It’s well worth the time to read the whole thing. It’s packed full of fascinating tidbits.

In the section on ports, Zach says:

The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.

Ooh, I can give you an exact date! It was January 24th, 1992. I know this because of the hack week in CERN last year to recreate the first ever web browser.

Kimberly was spelunking down the original source code, when she came across this line in the HTUtils.h file:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).

Then he told us something interesting about the next line of code:

#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */

Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.

Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.

Those were the last four digits of his parents’ phone number.

Posting to my site

I was idly thinking about the different ways I can post to adactio.com. I decided to count the ways.

Admin interface

This is the classic CMS approach. In my case the CMS is a crufty hand-rolled affair using PHP and MySQL that I wrote years ago. I log in to an admin interface and fill in a form, putting the text of my posts into a textarea. In truth, I usually write in a desktop text editor first, and then paste that into the textarea. That’s what I’m doing now—copying and pasting Markdown from the Typed app.

Directly from my site

If I’m logged in, I get a stripped down posting interface in the notes section of my site.

Notes posting interface

Bookmarklet

This is how I post links. When I’m at a URL I want to bookmark, I hit the “Bookmark it” bookmarklet in my browser’s bookmarks bar. That pops open a version of the admin interface tailored specifically for links. I really, really like bookmarklets. The one big downside is that they don’t work on mobile.

Text message

This is something I knocked together at Indie Web Camp Brighton 2015 using the Twilio API. It’s handy for posting notes if I’m travelling somewhere and data is at a premium. But I don’t use it that often.

Instagram

Thanks to Aaron’s OwnYourGram service—and the fact that my site has a micropub endpoint—I can post images from Instagram to my site. This used to happen instantaneously but Instagram changed their API rules for the worse. Between that and their shitty “algorithmic” timeline, I find myself using the service less and less. At this point I’m only on there for the doggos.

Swarm

Like OwnYourGram, Aaron’s OwnYourSwarm allows me to post check-ins and photos from the Swarm app to my site. Again, micropub makes it all possible.

OwnYourGram and OwnYourSwarm are very similar and could probably be abstracted into a generic service for posting from third-party apps to micropub endpoints. I’d quite like to post my check-ins on Untappd to my site.

Other people’s admin interfaces

Thanks to rel="me" and IndieAuth, I can log into other people’s posting interfaces using my own website as the log-in, and post to my micropub endpoint, like this. Quill is a good example of this. I don’t use it that much, but I really should—the editor interface is quite Medium-like in its design.
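
The markup side of that is pleasingly small: my home page links out to a profile elsewhere with rel="me", and that profile links back here. Something like this, with a placeholder URL standing in for a real profile:

<a rel="me" href="https://example.com/profile/adactio">my profile elsewhere</a>

An IndieAuth service can follow those links in both directions to confirm that the website and the profile belong to the same person, which is what lets me use adactio.com itself as the log-in.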

Anyway, those are the different ways I can update my website that I can think of right now.

Syndication

In terms of output, I’ve got a few different ways of syndicating what I post here:

Just so you know, if you comment on one of my posts on Facebook, I probably won’t see it. But if you reply to a copy of one of my posts on Twitter or Instagram, it will show up over here on adactio.com thanks to the magic of Brid.gy and webmention.

A little progress

I’ve got a fairly simple posting interface for my notes. A small textarea, an optional file upload, some checkboxes for syndicating to Twitter and Flickr, and a submit button.

Notes posting interface

It works fine although sometimes the experience of uploading a file isn’t great, especially if I’m on a slow connection out and about. I’ve been meaning to add some kind of Ajax-y progress type thingy for the file upload, but never quite got around to it. To be honest, I thought it would be a pain.

But then, in his excellent State Of The Gap hit parade of web technologies, Remy included a simple file upload demo. Turns out that all the goodies that have been added to XMLHttpRequest have made this kind of thing pretty easy (and I’m guessing it’ll be easier still once we have fetch).
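
Stripped right down, the technique looks something like this. This is just a sketch rather than the actual script, and the selector is a placeholder:

var form = document.querySelector('form[method="post"]');
var progressBar = document.createElement('progress');
form.appendChild(progressBar);

form.addEventListener('submit', function (event) {
    event.preventDefault();
    var xhr = new XMLHttpRequest();
    xhr.open('POST', form.getAttribute('action'));
    // The upload object fires progress events as the data goes up the wire
    xhr.upload.addEventListener('progress', function (e) {
        if (e.lengthComputable) {
            progressBar.max = e.total;
            progressBar.value = e.loaded;
        }
    });
    xhr.addEventListener('load', function () {
        // What happens next depends on the server's response;
        // reloading the page is the simplest option
        window.location.reload();
    });
    // FormData picks up every field in the form, file upload included
    xhr.send(new FormData(form));
});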

I’ve made a little script that adds a progress bar to any forms that are POSTing data.

Feel free to use it, adapt it, and improve it. It isn’t using any ES6iness so there are some obvious candidates for improvement there.

It’s working a treat on my little posting interface. Now I can stare at a slowly-growing progress bar when I’m out and about on a slow connection.