Educational Sensational Inspirational Foundational
A historical record of foundational web development blog posts.
Every one of these 42 articles is gold!
It warms my heart to see Resilient Web Design included in this list.
Social networks come and social networks go.
Right now, there’s a whole bunch of social networks coming (Blewski, Freds, Mastication) and one big one going, thanks to Elongate.
Me? I watch all of this unfold like Doctor Manhattan on Mars. I have no great connection to any of these places. They’re all just syndication endpoints to me.
I used to have a checkbox in my posting interface that said “Twitter”. If I wanted to add a copy of one of my notes to Twitter, I’d enable that toggle.
I have, of course, now removed that checkbox. Twitter is dead to me (and it should be dead to you too).
I used to have another checkbox next to that one that said “Flickr”. If I was adding a photo to one of my notes, I could toggle that to send a copy to my Flickr account.
Alas, that no longer works. Flickr only allows you to post 1000 photos before requiring a pro account. Fair enough. I’ve actually posted 20 times that amount since 2005, but I let my pro membership lapse a while back.
So now I’ve removed the “Flickr” checkbox too.
Instead I’ve now got a checkbox labelled “Mastodon” that sends a copy of a note to my Mastodon account.
When I publish a blog post like the one you’re reading now here on my journal, there’s yet another checkbox that says “Medium”. Toggling that checkbox sends a copy of my post to my page on Ev’s blog.
At least it used to. At some point that stopped working too. I was going to start debugging my code, but when I went to the documentation for the Medium API, I saw this:
This repository has been archived by the owner on Mar 2, 2023. It is now read-only.
I guess I missed the memo. I guess Medium also missed the memo, because developers.medium.com is still live. It proudly proclaims:
Medium’s Publishing API makes it easy for you to plug into the Medium network, create your content on Medium from anywhere you write, and expand your audience and your influence.
Not a word of that is accurate.
That page also has a link to the Medium engineering blog. Surely the announcement of the API deprecation would be published there?
Crickets.
Moving on…
I have an account on Bluesky. I don’t know why.
I was idly wondering about sending copies of my notes there when I came across a straightforward solution: micro.blog.
That’s yet another place where I have an account. They make syndication very straightforward. You can go to your account and point to a feed from your own website.
That’s it. Syndication enabled.
It gets better. Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.
It’s like dominoes falling: I post something on my website which updates my RSS feed which gets picked up by micro.blog which passes it on to Bluesky.
I noticed that one of the other services that micro.blog can post to is Medium. Hmmm …would that still work given the abandonment of the API?
I gave permission to micro.blog to cross-post to Medium when my feed of blog posts is updated. It seems to have worked!
We’ll see how long it lasts. We’ll see how long any of them last. Today’s social media darlings are tomorrow’s Friendster and MySpace.
When the current crop of services wither and die, my own website will still remain in full bloom.
I didn’t know the Washington Post had a design system or that the system has this good section on accessibility.
Here’s a highlight reel of some of my blog posts from 2022:
I also published the transcript of my conference talk, In And Out Of Style, a journey through the history of CSS.
If you’re thinking of signing up to Hive or Post:
If posts in a social media app do not have URLs that can be linked to and viewed in an unauthenticated browser, or if there is no way to make a new post from a browser, then that program is not a part of the World Wide Web in any meaningful way.
Consign that app to oblivion.
You had me at “beautifully resilient apps with progressive enhancement”.
This is a great clear walkthrough of enhancing a form submission. A lot of this seems like first principles to me, but if you’ve only ever built single page apps, then thinking about a server-submission process first might well be revelatory.
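As a minimal sketch of that server-first pattern (my own illustration, not code from the linked walkthrough; it assumes a POST form and a main element holding the page content): the form submits to the server as the baseline, and JavaScript, when it’s available, upgrades the submission to a fetch() request.

const form = document.querySelector('form');

form.addEventListener('submit', (event) => {
  event.preventDefault();
  fetch(form.action, {
    method: form.method,
    body: new FormData(form),
  })
  .then((response) => response.text())
  .then((html) => {
    // Update the page in place with the server’s response.
    document.querySelector('main').innerHTML = html;
  })
  .catch(() => {
    // Network hiccup? Fall back to the full-page submission.
    form.submit();
  });
});

If the script never loads or never runs, nothing is lost: the form still works the way forms have always worked.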
I like this high-level view of the state of CSS today. There are two main takeaways:
This is exactly the direction we should be going in! More and more power from the native web technologies (while still remaining learnable), with less and less reliance on tooling. For CSS, the tools have been like polyfills that we can now start to remove.
Alas, while the same should be true of JavaScript (there’s so much you can do in native JavaScript now), people seem to have tied their entire identities to the tooling they use.
They could learn a thing or two from the trajectory of CSS: treat your frameworks as cattle, not pets.
Applying Postel’s Law to relationships:
I aspire to be conservative in what and how I share (i.e., avoid drama) while understanding that other people will say all sorts of unmindful things.
This responds to your Freedom of Information Act (FOIA) request, which was received by this office on 5 February 2016 for “A digital/electronic copy of the NSA old security posters from the 1950s and 1960s.”
The graphic design is …um, mixed.
The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.
I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.
Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.
That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.
<a href="/items/id">
Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.
<form method="post" action="/items/id/edit">
But not all forms should use POST. A search form should use GET.
<form method="get" action="/search">
<input type="search" name="term">
When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.
/search?term=value
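The same distinction holds on the server. Here’s a rough sketch in plain Node.js (the responses are made up, just for illustration): the handler branches on the request method, reading for GET and writing for POST.

const http = require('http');

http.createServer((request, response) => {
  if (request.method === 'GET') {
    // The user is requesting something: safe and repeatable.
    response.end('Here is the resource you asked for.');
  } else if (request.method === 'POST') {
    // The user is changing something: create, edit, delete, update.
    response.end('Your change has been saved.');
  } else {
    response.statusCode = 405; // anything else is unexpected here
    response.end();
  }
}).listen(8080);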
I sometimes see the GET method used incorrectly.
When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.
That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:
The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.
The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.
Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.
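To sketch that difference in code (an illustration of mine, not any particular service’s API):

// A “favourite” is the user explicitly instructing the server
// to perform an update, so it travels as a POST request.
fetch('/items/42/favourite', { method: 'post' });

// Merely viewing the item is a plain GET: a neutral transaction
// that shouldn’t feed anyone’s dossier.
fetch('/items/42');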
The very first of the Web Platform Design Principles is Put user needs first:
If a trade-off needs to be made, always put user needs above all.
The current abuse of GET requests is damage that the web needs to route around.
Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.
Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:
In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!
I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.
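In the meantime, individual sites can opt into that restriction themselves. A minimal sketch, shown here being set from a Node.js response handler: a Content-Security-Policy header that only allows scripts from the site’s own origin.

// Refuse to run any script that doesn’t come from this origin
// (this also blocks inline scripts unless they’re explicitly allowed).
response.setHeader('Content-Security-Policy', "script-src 'self'");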
Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.
But we needn’t wait for the browsers to make the web a safer place for users.
Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.
What if we refused?
Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.
And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.
Ah, look at this beautiful timeline that Cassie designed and built—so many beautiful little touches! It covers the fifteen years(!) of Clearleft so far.
But you can also contribute to it …by looking ahead to the next fifteen years:
Let’s imagine it’s 2035…
How do you hope the practice of design will have changed for the better?
Fill out an online postcard with your hopes for the future.
I spent far too long hitting refresh and then clicking on the names of some of the Irish bands down near the bottom of the line-up.
This is a very nifty use of CSS gradients!
Beautiful high resolution posters of our planetary neighbourhood.
A few years back, Zach Bloom wrote The History of the URL: Path, Fragment, Query, and Auth. He recently expanded on it and republished it on the Cloudflare blog as The History of the URL. It’s well worth the time to read the whole thing. It’s packed full of fascinating tidbits.
In the section on ports, Zach says:
The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.
Ooh, I can give you an exact date! It was January 24th, 1992. I know this because of the hack week at CERN last year to recreate the first ever web browser.
Kimberly was spelunking down the original source code when she came across this line in the HTUtils.h file:
#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */
We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).
Then he told us something interesting about the next line of code:
#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */
Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.
Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.
Those were the last four digits of his parents’ phone number.
Hidde takes one iconic design and shows how it could be recreated with CSS grid using either 4 columns, 9 columns, or 17 columns.
Look, it’s Friday—were you really going to get any work done today anyway?
Um …if I’m reading this right, then my IFTTT recipe will also stop working and my Facebook activity will drop to absolute zero.
Oh, well. No skin off my nose. Facebook is a roach motel in more ways than one.
Nobody can afford to volunteer to be extra virtuous in a system where the only rule is quarterly profit and shareholder value. Where the market rules, all of us are fighting for the crumbs to get the best investment for the market. And so, this loose money can go anywhere in the planet without penalty. The market can say: “It doesn’t matter what else is going on, it doesn’t matter if the planet crashes in fifty years and everybody dies, what’s more important is that we have quarterly profit and shareholder value and immediate return on our investment, right now.” So, the market is like a blind giant driving us off a cliff into destruction.
Kim Stanley Robinson journeys to the heart of the Anthropocene.
Economics is the quantitative and systematic analysis of capitalism itself. Economics doesn’t do speculative or projective economics; perhaps it should, I mean, I would love it if it did, but it doesn’t. It’s a dangerous moment, as well as a sign of cultural insanity and incapacity. It’s like you’ve got macular degeneration and your vision of reality itself were just a big black spot precisely in the direction you are walking.
This is a smart way to queue up POST submissions for later if the user is offline. It’s not as powerful as background sync (because it requires the user to revisit your site) but it’s a good fallback for browsers that support service workers but don’t yet support background sync.
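Here’s a rough sketch of the idea (all the names are mine, not from the linked post, and a real implementation would persist the queue in IndexedDB rather than in memory):

// Inside a service worker: intercept POST requests that fail because
// the user is offline, queue them, and replay them on a later visit.
const queue = []; // a real implementation would persist this in IndexedDB

self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'POST') return;
  const saved = event.request.clone();
  event.respondWith(
    fetch(event.request).catch(() => {
      // Network failed: hold on to the request for later.
      queue.push(saved);
      return new Response('Saved for later. Will retry when you are back online.', { status: 202 });
    })
  );
});

// When the user revisits the site, the page can ask the worker to replay.
self.addEventListener('message', async (event) => {
  if (event.data !== 'replay-queue') return;
  while (queue.length > 0) {
    try {
      await fetch(queue[0].clone());
      queue.shift(); // sent successfully; discard it
    } catch {
      break; // still offline; keep the queue and try again next time
    }
  }
});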