Tags: arc


Thursday, September 14th, 2023

Canadian Typography Archives

Go spelunking down the archives to find some lovely graphic design artefacts.

Wednesday, September 13th, 2023

Multi-page web apps

I received this email recently:

Subject: multi-page web apps

Hi Jeremy,

lately I’ve been following you through videos and texts and I’m curious as to why you advocate the use of multi-page web apps and not single-page ones.

Perhaps you can refer me to some sources where your position and reasoning is evident?

Here’s the response I sent…

Hi,

You can find a lot of my reasoning laid out in this (short and free) online book I wrote called Resilient Web Design:

https://resilientwebdesign.com/

The short answer to your question is this: user experience.

The slightly longer answer…

For most use cases, a website (or multi-page app if you prefer) is going to provide the most robust experience for the greatest number of users. That’s because a user’s web browser takes care of most of the heavy lifting.

Navigating from one page to another? That’s taken care of with links.

Gathering information from a user to process on a server? That’s taken care of with forms.

This frees me up to concentrate on the content and the design without having to reinvent the wheels of links and form fields.
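To give a concrete example, here’s a minimal sketch of the kind of form the browser handles for you (the /search URL is made up purely for illustration):

<form action="/search" method="get">
 <label for="query">Search</label>
 <input type="search" id="query" name="query">
 <button>Search</button>
</form>

Type something in, hit the button, and the browser takes care of encoding the value, making the request, and loading the results page.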

These (let’s call them) multi-page apps are stateless, and for most use cases that’s absolutely fine.

There are some cases where you’d want state to persist across pages. Let’s say you’re playing a song, or a podcast episode. Ideally you’d want that player to continue seamlessly playing even as the user navigates around the site. In that situation, a single-page app would be a suitable architecture.

But that architecture comes at a cost. Now you’ve got to stop the browser doing what it would normally do with links and forms. It’s up to you to recreate that functionality. And you can’t do it with HTML, a robust, fault-tolerant, declarative language. You need to reimplement all that functionality in JavaScript, a less tolerant, more brittle language.

Then you’ve got to ship all that code to the user before they can use your site. It might be JavaScript code you’ve written yourself or it might be a third-party library designed for building single-page apps. Either way, the user pays a download tax (and a parsing tax, and an execution tax). Whereas with links and forms, all of that functionality is pre-bundled into the user’s web browser.

So that’s my reasoning. At least nine times out of ten, a multi-page approach is leaner, more robust, and simpler.

Like I said, there are times when a single-page approach makes sense—it all comes down to whether state needs to be constantly preserved. But these use cases are the exceptions, not the rule.

That’s why I find the framing of your question a little concerning. It should be inverted. The default approach should be to assume a multi-page approach (which is the way the web works by default). Deciding to take a JavaScript-driven single-page approach should be the exception.

It’s kind of like when people ask, “Why don’t you have children?” Surely the decision to have a child should require deliberation and commitment, rather than the other way around.

When it comes to front-end development, I’m worried that we’ve reached a state where the more complex, over-engineered approach is viewed as the default.

I may be committing a fundamental attribution error here, but I think that we’ve reached this point not because of any consideration for users, but rather because of how it makes us developers feel. Perhaps building an old-fashioned website that uses HTML for navigation feels too easy, like it’s beneath us. But building an “app” that requires JavaScript just to render text on a screen feels like real programming.

I hope I’m wrong. I hope that other developers will start to consider user experience first and foremost when making architectural decisions.

Anyway. That’s my answer. User experience.

Cheers,

Jeremy

Friday, September 1st, 2023

Shining a Light on the Digital Dark Age - Long Now

A false sense of security persists surrounding digitized documents: because an infinite number of identical copies can be made of any original, most of us believe that our electronic files have an indefinite shelf life and unlimited retrieval opportunities. In fact, preserving the world’s online content is an increasing concern, particularly as file formats (and the hardware and software used to run them) become scarce, inaccessible, or antiquated, technologies evolve, and data decays. Without constant maintenance and management, most digital information will be lost in just a few decades. Our modern records are far from permanent.

Tuesday, August 29th, 2023

How Google made the world go viral - The Verge

On the sad state of Google search today:

How did a site that captured the imagination of the internet and fundamentally changed the way we communicate turn into a burned-out Walmart at the edge of town?

Monday, August 7th, 2023

Lunar Codex

Time capsules on the moon, using NanoFiche as the storage medium.

35mm scans — writer/editor/reporter

Clicking through these cold war slides gives an uncomfortable mixture of nostalgic appreciation for the retro aesthetic combined with serious heebie-jeebies for the content.

The slides appear to be 1970s/1980s informational or training images from the United States Air Force, NORAD, Navy, and beyond.

Wednesday, July 12th, 2023

Pulling my site from Google over AI training – Tracy Durnell

I’m not down with Google swallowing everything posted on the internet to train their generative AI models.

A principled approach to evolving choice and control for web content

This would mean a lot more if it happened before the wholesale harvesting of everyone’s work.

But I’m sure Google will put a mighty fine lock on that stable door that the horse bolted from.

Tuesday, July 11th, 2023

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.
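To make the idea concrete, here’s a purely hypothetical sketch of what such a rights-granting robots.txt might look like (the License and Disallow-Training directives are invented for illustration; no such syntax exists in the actual robots.txt standard):

# Hypothetical directives, not part of any real specification
User-agent: *
Allow: /
License: search-indexing-only
Disallow-Training: /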

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Monday, May 15th, 2023

Google’s AI Hype Circle

Google has a serious AI problem. That problem isn’t “how to integrate AI into Google products?” That problem is “how to exclude AI-generated nonsense from Google products?”

Tuesday, May 9th, 2023

Google AMP: how Google tried to fix the web by taking it over - The Verge

AMP succeeded spectacularly. Then it failed. And to anyone looking for a reason not to trust the biggest company on the internet, AMP’s story contains all the evidence you’ll ever need.

This is a really good oral history of how AMP soured Google’s reputation.

Full disclosure: I’m briefly cited:

“When it suited them, it was open-source,” says Jeremy Keith, a web developer and a former member of AMP’s advisory council. “But whenever there were any questions about direction and control… it was Google’s.”

As an aside, this article contains a perfect description of the company cultures of Facebook, Apple, and Google:

“You meet with a Facebook person and you see in their eyes they’re psychotic,” says one media executive who’s dealt with all the major platforms. “The Apple person kind of listens but then does what it wants to do. The Google person honestly thinks what they’re doing is the best thing.”

Spot. On.

Tuesday, May 2nd, 2023

Dark Patterns: We Asked 500+ People How Brands Deceive Them

User research into deceptive design patterns:

Despite our technological advancement and understanding of human nature, it’s still common to see deception in digital products. To understand digital deception (and better define it), we surveyed 536 people to measure and discuss people’s understanding of this matter.

Wednesday, April 26th, 2023

“the secret list of websites” - Chris Coyier

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

I concur with Chris’s assessment:

I just think it’s fuckin’ rude.

Tuesday, April 25th, 2023

Typography Manual by Mike Mai

A short list of opinions on typography. I don’t necessarily agree with all of it, but it’s all fairly sensible advice.

Wednesday, April 19th, 2023

A History Of The World According To Getty Images

A short documentary that you can download or watch online:

The film explores how image banks including Getty gain control over, and then restrict access to, archive images – even when these images are legally in the public domain. It also forms a small act of resistance against this practice: the film includes six legally licensed clips, and is downloadable as an HD ProRes file. In this way, it aims to liberate these few short clips from corporate control, and make them freely available for viewing and artistic use.

Licensed under a Creative Commons 0: “No rights reserved” license.

Site Search in Arc Browser — For Your Own Site - Jim Nielsen’s Blog

I can now have adactio.com on speed dial for search — a valid use case indeed.

😊

Funnily enough, I tend to use tags more often than my own site search. I type adactio.com/tags/… followed by whatever I think past me would’ve used. So my browser’s address bar is kind of a site-search interface already.

Monday, April 17th, 2023

LukeW | Ask LukeW: New Ways into Web Content

I like how Luke is using a large language model to make a chat interface for his own content.

This is the exact opposite of how grifters are selling the benefits of machine learning (“Generate copious amounts of new content instantly!”) and instead builds on over twenty years of thoughtful human-made writing.

Thursday, April 13th, 2023

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration for Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words, they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

Wednesday, March 29th, 2023

The search element | scottohara.me

I’ve already added the search element to thesession.org, but while browser support is still rolling out, I’m being extra verbose:

<search role="search">
 ...
</search>

Brought to you by the department of redundancy department.

I’ll remove the ARIA role once browsers are all on board. As Scott says:

Please be aware that this element landing in the HTML spec today does not mean it is available in browsers today. Issues have been filed to implement the search element in the major browsers, including the necessary accessibility mappings. Keep this in mind before you get all super excited and willy nilly add this new element to your pages.
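For what it’s worth, once browsers and assistive technologies all map the new element correctly, the redundant role can presumably be dropped and the markup pared back to just:

<search>
 ...
</search>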