Another dive into the archives of the www-talk mailing list. This time there are some gems about the origins of the
input element, triggered by the old
From the ARPANET to the internet, this is a great history of the Domain Name System:
Root DNS servers operate in safes, inside locked cages. A clock sits on the safe to ensure the camera feed hasn’t been looped. Particularly given how slow DNSSEC implementation has been, an attack on one of those servers could allow an attacker to redirect all of the Internet traffic for a portion of Internet users. This, of course, makes for the most fantastic heist movie to have never been made.
Issue 596729 - chromium - Do not show the app banner unless the Manifest has a display set to standalone or fullscreen - Monorail
I am shocked and disgusted by this arbitrary decision by the Chrome team. If your Progressive Web App doesn’t set its manifest to obscure its URL, you get punished by missing out on the add to home screen prompt.
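The rule described in that issue can be sketched in a few lines. This is an illustrative reading of the criterion, not Chrome's actual implementation; the manifest type is trimmed to the one field that matters here.

```typescript
// Sketch of the gating rule: the add-to-home-screen banner is only offered
// when the manifest's display mode hides the browser UI (and thus the URL).
interface WebAppManifest {
  display?: "browser" | "minimal-ui" | "standalone" | "fullscreen";
}

function qualifiesForBanner(manifest: WebAppManifest): boolean {
  return manifest.display === "standalone" || manifest.display === "fullscreen";
}
```

So a manifest that keeps the URL visible (`display: "browser"` or `"minimal-ui"`, or no `display` at all) gets no prompt.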
I’ve been poking around at Google’s information on “instant apps” since they announced it at Google I/O. My initial impressions mirror Peter’s.
Either they allow access to more device APIs (which could be a massive security hole) or else they’re more or less websites.
Ah, how I wish that this were published at a long-lived URL:
The one part of the web that I believe is truly genius, and that keeps standing the test of time, is the URI. The Web gave us a way to point to anything, forever. Everything else about the web has changed and grown to encyclopedic lengths, but URIs have been killing it for decades.
And yet the numbers show we’re hell-bent on screwing all that up with link-shorteners, moving URIs without redirection, and so forth. As always happens in technology we’ve taken a simple idea and found expedient ways to add fragility and complexity to it.
Making web apps? Care about SEO? Here’s Google’s advice:
Use feature detection & progressive enhancement techniques to make your content available to all users.
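The advice above boils down to a simple pattern: test for a capability before relying on it, and fall back gracefully when it's absent. A minimal sketch, with a `host` parameter standing in for a global like `navigator` so the pattern is easy to test:

```typescript
// Minimal feature-detection sketch: run the enhanced path only when the
// capability exists, otherwise fall back. `host` stands in for a browser
// global such as `navigator` or `window`.
function withFeature<T>(
  host: object,
  feature: string,
  enhanced: () => T,
  fallback: () => T
): T {
  return feature in host ? enhanced() : fallback();
}

// In a browser this might read:
//   withFeature(navigator, "serviceWorker", registerWorker, doNothing);
```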
How the Web Works: A Primer for Newcomers to Web Development (or anyone, really) by Preethi Kasireddy
This is a great reminder of the fundamental nuts’n’bolts of the internet and the World Wide Web: clients, servers, URLs, DNS, HTTP, TCP/IP, packet switching, and all the other building blocks we sometimes take for granted.
This is part one of a four-part series:
- A Primer for Newcomers to Web Development (or anyone, really)
- Client-Server Model & the Structure of a Web Application
- HTTP & REST
- Stay tuned…
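Those building blocks all meet in a single URL. As a quick illustration (the address itself is made up), the standard WHATWG URL parser, available in browsers and Node alike, takes one apart:

```typescript
// Dissecting a URL into the building blocks the primer covers.
const url = new URL("https://example.com:8080/articles/42?lang=en#comments");

const parts = {
  scheme: url.protocol,  // "https:" — which protocol to speak
  host: url.hostname,    // "example.com" — resolved to an IP address via DNS
  port: url.port,        // "8080" — which service on that machine
  path: url.pathname,    // "/articles/42" — which resource on that server
  query: url.search,     // "?lang=en" — extra parameters for the server
  fragment: url.hash,    // "#comments" — handled client-side by the browser
};
```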
I really like Alex’s framing of best-of-breed progressively enhanced websites as “progressive apps” (although Bruce has some other ideas about the naming).
It’s a shame that the add-to-homescreen part isn’t standardised yet though.
This is a really good point from Tim Berners-Lee: there’s no good reason why switching to TLS should require a change of URLs from http:// to https://
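The point becomes obvious when you write the mapping down: switching to TLS changes only the transport, so nothing in the URL needs to change except the scheme. A small sketch:

```typescript
// The entire difference between the insecure and secure address is the
// scheme; host, path, query and fragment are untouched.
function upgradeScheme(address: string): string {
  const url = new URL(address);
  if (url.protocol === "http:") {
    url.protocol = "https:";
  }
  return url.toString();
}
```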
Remember Aaron’s dConstruct talk? Well, the Atlantic has more details of his work at the Cooper Hewitt museum in this wide-ranging piece that investigates the role of museums, the value of APIs, and the importance of permanent URLs.
As I was leaving, Cope recounted how, early on, a curator had asked him why the collections website and API existed. Why are you doing this?
His retrospective answer wasn’t about scholarship or data-mining or huge interactive exhibits. It was about the web.
I find this incredibly inspiring.
A concept browser from Yandex that takes an interesting approach to URLs: on the one hand, hiding them …but then putting them front and centre.
But the main focus of this concept browser is to blur the line between browser chrome and the website it’s displaying.
Aaron raises a point that I’ve discussed before in regards to the indie web (and indeed, the web in general): we don’t buy domain names; we rent them.
It strikes me that all the good things about the web are decentralised (one-way linking, no central authority required to add a node), but all the sticking points are centralised: ICANN, DNS.
Aaron also points out that we are beholden to our hosting companies, although—having moved hosts a number of times myself—that’s an issue that DNS (and URLs in general) helps alleviate. And there’s now some interesting work going on in literally owning your own website: a web server in the home.
Looks like Phil’s talk at The Web Is in Cardiff was terrific.
This is hilarious …for about two dozen people.
For everyone else, it’s as opaque as the rest of the standardisation process.
This is what Scott Jenson has been working on—a first stab at just-in-time interactions by having physical devices broadcasting URLs.
Walk up and use anything
Jason writes about the closing of Ficly. This is a lesson in how to do this right:
We knew as soon as we decided to wind down Ficly that we wanted to provide users with continued access to their work, even if they couldn’t create more. We’re still working on some export tools, but more importantly, we’re guaranteeing that all original work on the site will live on at its current URL far into the future.
This is four years old, but it’s solid advice that stands the test of time.
An early look at the just-in-time interactions that Scott has been working on:
Nearby works like this. An enabled object broadcasts a short description of itself and a URL to devices nearby listening. Those URLs are grabbed and listed by the app, and tapping on one brings you to the object’s webpage, where you can interact with it—say, tell it to perform a task.
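The flow described there can be sketched in a few lines. This is not Scott's actual code, just an illustration of the shape of it: objects broadcast a short description plus a URL, and the listening app collects and lists them, deduplicating by URL so each object appears once.

```typescript
// Sketch of the "Nearby" flow: gather broadcast (description, URL) pairs
// and keep the first sighting of each URL.
interface Broadcast {
  description: string;
  url: string;
}

function collectNearby(broadcasts: Broadcast[]): Broadcast[] {
  const seen = new Map<string, Broadcast>();
  for (const b of broadcasts) {
    if (!seen.has(b.url)) {
      seen.set(b.url, b);
    }
  }
  return [...seen.values()];
}
```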
Some URLs are ugly. Some URLs aren’t. Let’s not sacrifice them.
A lovely post by Mark on the value of URLs.
Nat’s take on Chrome’s proposal to bury URLs:
The URLs are the cornerstone of the interconnected, decentralised web. Removing the URLs from the browser is an attempt to expand and consolidate centralised power.
Right now, this move to remove URLs from the interface of Chrome is just an experiment …but the fact that Google are even experimenting with it is very disturbing.
“Who? Me? No, I was never going to actually blow the web’s brains out—I just wanted to feel the heft of the weapon as I stroked it against the face of the web.”
Chris is putting together a series about the neglected building blocks of the web. First up: the much-abused hyperlink, the very foundation of the world wide web.
It is the most simple and most effective world-wide, open and free publishing mechanism. That is why we need to protect them from extinction.
Coming from anyone else, this glorious vision might seem far-fetched, but Anne is working to make it a reality.
I agree completely with the sentiment of this article (although the title is perhaps a bit overblown): you shouldn’t need a separate API—that’s what your existing URL structure should be.
I’m not entirely sure that content negotiation is the best way to go when it comes to serving up different representations: there’s a real value in being able to paste a URL into a browser window to get back a JSON or XML representation of a resource.
But this is spot-on about the ludicrous over-engineered complexity of most APIs. It’s ridiculous that I can enter a URL into a browser window to get an HTML representation of my latest tweets, but I have to sign up for an API key and jump through OAuth hoops, and agree to display the results in a specific way if I want to get a JSON representation of the same content. Ludicrous!
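The URL-as-API idea can be sketched concretely: pick the representation from a suffix on the resource’s own URL, so any representation is reachable by pasting an address into a browser. The paths and media types here are illustrative, not any specific site’s API.

```typescript
// Choose a representation from a URL suffix rather than content negotiation:
// "/tweets.json" and "/tweets.xml" are pasteable addresses for the same
// resource as "/tweets".
function representationFor(path: string): { resource: string; mediaType: string } {
  const match = path.match(/^(.*?)\.(json|xml)$/);
  if (match) {
    const [, resource, suffix] = match;
    return {
      resource,
      mediaType: suffix === "json" ? "application/json" : "application/xml",
    };
  }
  return { resource: path, mediaType: "text/html" };
}
```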
A heartfelt response from Vitaly to .net magazine’s digital destruction.
This is a great proposal: well-researched and explained, it tackles the tricky subject of balancing security and access to native APIs.
Far too many ideas around installable websites focus on imitating native behaviour in a cargo-cult kind of way, whereas this acknowledges addressability (with URLs) as a killer feature of the web …a beautiful baby that we definitely don’t want to throw out with the bathwater.
I gave the opening keynote at the Beyond Tellerrand conference a few weeks back. I talked about the web from my own perspective, so expect excitement and anger in equal measure.
This was a new talk but it went down well, and I’m quite happy with it.
The web’s walled gardens are threatened by the decentralised power of RSS.
Google is threatened by RSS. Google is closing down Google Reader.
Twitter is threatened by RSS. Twitter has switched off all of its RSS feeds.
It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.
My presentation from the Industry conference in Newcastle a little while back, when I stepped in for John Allsopp to deliver the closing talk.
I like the idea of a /purpose page: I should add one to The Session and Huffduffer.
This is a breath of fresh air: a blogging platform that promises to keep its URLs online in perpetuity.
Yes, yes, yes!
I like these design principles for server-side and client-side frameworks. I would say that they’re common sense but looking at many popular frameworks, this sense isn’t as common as it should be.
Luke’s notes from my talk at An Event Apart in Chicago.
Remember when I linked to the story of Twitter’s recent redesign of their mobile site and I said it would be great to see it progressively enhanced up to the desktop version? Well, here’s a case study that does just that.
A cautionary tale from Dave Winer of not considering digital preservation from the outset. We must learn from the past. We must.
A really great article from Stephen on how we are mistakenly making assumptions about what users want. He means it, man!
Looks like the scourge of hashbangs is finally being cleansed from Twitter.
The Long Now blog is featuring the bet between myself and Matt on URL longevity. Just being mentioned on that site gives me a warm glow.
Matt has transcribed the notes from his excellent Webstock talk. I highly recommend giving this a read.
I really enjoyed Matt’s talk from Webstock. I know some people thought it might be a bit of a downer but I actually found it very inspiring.
A truly excellent article outlining the difference between share-cropping and self-hosting. It may seem that the convenience of using a third-party service outweighs the hassle of owning your own URLs but this puts everything into perspective.
- Can I bookmark this information? (stable URIs)
- Can I go from here to there with a click? (hyperlinks)
- Can I save the content locally? (open accessible formats)
I really like this proposal to allow for more nuanced linking using CSS selectors in fragment identifiers (though I worry about the overloading of the # symbol in URLs).
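As a rough illustration of the idea, here’s how a selector might be pulled out of such a fragment. The `#css(<selector>)` syntax is assumed purely for this sketch and is not quoted from the proposal itself:

```typescript
// Extract a CSS selector from a fragment identifier of the assumed form
// "#css(<selector>)"; returns null for ordinary fragments.
function selectorFromFragment(fragment: string): string | null {
  const match = fragment.match(/^#?css\((.+)\)$/);
  return match ? match[1] : null;
}
```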
I really like the thinking that’s gone into the design of Github, as shown in this presentation. It’s not really about responsive design as we commonly know it, but boy, is it a great deep dive into the importance of URLs and performance.
Here’s one to add to Instapaper or Readability to savour at your leisure: Aaron Straup Cope’s talk at Museums and the Web 2010:
This paper examines the act of association, the art of framing and the participatory nature of robots in creating artifacts and story-telling in projects like Flickr Galleries, the API-based Suggestify project (which provides the ability to suggest locations for other people’s photos) and the increasing number of bespoke (and often paper-based) curatorial productions.
Namespacing fragment identifiers by Harry Roberts
Well, here’s something I didn’t know: fragment identifiers can use the colon to add another level of addressability.
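In other words, an `id` like `chapter2:fig3` carries a namespace and a local name inside one fragment identifier. A small sketch of splitting such a fragment (the `id` values are made up; the split is the point):

```typescript
// Split a fragment identifier on its first colon into a namespace and a
// local name; fragments without a colon have no namespace.
function parseNamespacedFragment(fragment: string): { namespace: string | null; name: string } {
  const id = fragment.replace(/^#/, "");
  const colon = id.indexOf(":");
  if (colon === -1) {
    return { namespace: null, name: id };
  }
  return { namespace: id.slice(0, colon), name: id.slice(colon + 1) };
}
```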
A superb post by Dan on the bigger picture of what’s wrong with hashbang URLs. Well written and well reasoned.
James follows up on his previous excellent post on hashbangs by diving into the situations where client-side routing is desirable. Watch this space for a follow-up post on performance.
Read it and weep. Here are the articles on Wikipedia that reference URLs that are getting axed as part of the BBC’s upcoming cull.
Tim Bray calmly explains why hash-bang URLs are a very bad idea.
This is what we call “tight coupling” and I thought that anyone with a Computer Science degree ought to have been taught to avoid it.
So why use a hash-bang if it’s an artificial URL, and a URL that needs to be reformatted before it points to a proper URL that actually returns content?
Out of all the reasons, the strongest one is “Because it’s cool”. I said strongest not strong.
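The “artificial URL” problem is easy to demonstrate: the fragment never reaches the server, so anything without JavaScript has to reformat the address before any content exists. A sketch of that reformatting, using a Twitter-style path for illustration:

```typescript
// Convert a hash-bang address like "http://example.com/#!/jack" into the
// real URL the content actually lives at. Non-hash-bang URLs pass through.
function dehashbang(address: string): string {
  const url = new URL(address);
  if (url.hash.startsWith("#!")) {
    const realPath = url.hash.slice(2); // drop the "#!" prefix
    url.hash = "";
    url.pathname = realPath;
  }
  return url.toString();
}
```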
It turns out my Boolean URL tag hacking in Huffduffer is answering a real need: Will Myddelton had already put the same functionality together using Yahoo Pipes.
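The gist of Boolean tag filtering driven by a URL is small enough to sketch. Treating “+” as AND is assumed here for illustration rather than taken from Huffduffer’s actual syntax:

```typescript
// Filter items by a tag expression from a URL path segment, e.g. "music+jazz"
// keeps only items carrying every listed tag ("+" assumed to mean AND).
interface Item {
  title: string;
  tags: string[];
}

function filterByTagExpression(items: Item[], expression: string): Item[] {
  const required = expression.split("+");
  return items.filter((item) => required.every((tag) => item.tags.includes(tag)));
}
```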
Documenting the use and abuse of fragment identifiers.
An excellent collection of best practices for designing URLs. I found myself nodding vigorously along with each suggestion.
Eleven years old and more relevant than ever.
Blaine is doing his bit to battle the great linkrot apocalypse with an archive of short URLs and their corresponding endpoints.
Chris Shiflett gets behind the rev="canonical" movement. This thing is really gaining momentum.
rev="canonical" has a posse.
An aggregator of aggregators... and I'm posting a link to it on one of the aggregators.