Get safe

The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.

I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.

Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.

That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.

<a href="/items/id">

Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.

<form method="post" action="/items/id/edit">

But not all forms should use POST. A search form should use GET.

<form method="get" action="/search">
<input type="search" name="term">

When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.
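To make that concrete, here’s the search form again with a hypothetical submitted value (the action URL and field name come from the snippet above; the typed-in search term is made up for illustration):

```html
<!-- A search form using GET: submitting it appends the field
     values to the action URL as a query string. -->
<form method="get" action="/search">
  <input type="search" name="term">
</form>

<!-- If the user types “web platform” and submits, the browser
     makes a GET request to:

       /search?term=web+platform

     Nothing changes on the server, so the resulting URL can be
     safely bookmarked, shared, and revisited. -->
```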


I sometimes see the GET method used incorrectly:

  • “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want.
  • “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
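Here’s a minimal sketch of the “log out” fix, assuming a `/logout` endpoint on your server (the URL and class name are hypothetical—adjust to suit your own set-up):

```html
<!-- Logging out changes state on the server, so it belongs in a
     POST form, not a link. -->
<form method="post" action="/logout">
  <button type="submit" class="link-style">Log out</button>
</form>

<style>
  /* Strip the default button chrome so the button reads as a link. */
  .link-style {
    background: none;
    border: none;
    padding: 0;
    color: inherit;
    text-decoration: underline;
    cursor: pointer;
  }
</style>
```

The user experience is identical to clicking a link, but the state-changing action now travels over POST, where it belongs.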

When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.

That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.

The very first of the Web Platform Design Principles is Put user needs first:

If a trade-off needs to be made, always put user needs above all.

The current abuse of GET requests is damage that the web needs to route around.

Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.

Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.

Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.

But we needn’t wait for the browsers to make the web a safer place for users.

Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.

What if we refused?

Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.

And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.


This piece by @adactio got me thinking: What would break if browsers restricted Set-Cookie and other state-setting code to unsafe HTTP requests, and restricted unsafe HTTP requests to code triggered by user interaction?

# Posted by JimDabell on Wednesday, January 20th, 2021 at 1:45pm

