Google’s AI Hype Circle
Google has a serious AI problem. That problem isn’t “how to integrate AI into Google products?” That problem is “how to exclude AI-generated nonsense from Google products?”
Writing, both code and prose, for me, is both an end product and an end in itself. I don’t want to automate away the things that give me joy.
And that is something that I’m more and more aware of as I get older – sources of joy. It’s good to diversify them, to keep track of them, because it’s way too easy to run out. Or to end up with just one, and then lose it.
The thing about luddites is that they make good punchlines, but they were all people.
Stick a singularity in your “effective altruism” pipe and smoke it.
Manufactured inevitability, a.k.a. bullshit:
There’s a standard trope that tech evangelists deploy when they talk about the latest fad. It goes something like this:
- Technology XYZ is arriving. It will be incredible for everyone. It is basically inevitable.
- The only thing that can stop it is regulators and/or incumbent industries. If they are so foolish as to stand in its way, then we won’t be rewarded with the glorious future that I am promising.
We can think of this rhetorical move as a Reverse Scooby-Doo. It’s as though Silicon Valley has assumed the role of a Scooby-Doo villain — but decided in this case that he’s actually the hero. (“We would’ve gotten away with it, too, if it wasn’t for those meddling regulators!”)
The critical point is that their faith in the promise of the technology is balanced against a revulsion towards existing institutions. (The future is bright! Unless they make it dim.) If the future doesn’t turn out as predicted, those meddlers are to blame. It builds a safety valve into their model of the future, rendering all predictions unfalsifiable.
AI becomes a stand-in term for whatever technologies and techniques are new, shiny, and just beyond the grasp of our understanding. We use it to gesture at a future we cannot fully comprehend or currently realise. As soon as we do, it will no longer be “AI.”
I should emphasize that rejecting longtermism does not mean that one must reject long-term thinking. You ought to care equally about people no matter when they exist, whether today, next year, or in a couple billion years henceforth. If we shouldn’t discriminate against people based on their spatial distance from us, we shouldn’t discriminate against them based on their temporal distance, either. Many of the problems we face today, such as climate change, will have devastating consequences for future generations hundreds or thousands of years in the future. That should matter. We should be willing to make sacrifices for their wellbeing, just as we make sacrifices for those alive today by donating to charities that fight global poverty. But this does not mean that one must genuflect before the altar of “future value” or “our potential,” understood in techno-Utopian terms of colonizing space, becoming posthuman, subjugating the natural world, maximizing economic productivity, and creating massive computer simulations stuffed with 10⁴⁵ digital beings.
Technologies are always coming out of networks that require other related ideas to have the next one. The fact that we have simultaneous independent invention as a norm works against the idea of the heroic inventor, that we’re dependent on them for inventions. These things will come when all the other pieces are ready.
In 1990, the science fiction writer Douglas Adams produced a “fantasy documentary” for the BBC called Hyperland. It’s a magnificent paleo-futuristic artifact, rich in sideways predictions about the technologies of tomorrow.
I remember coming across a repeating loop of this documentary playing in a dusty corner of a Smithsonian museum in Washington DC. Douglas Adams wasn’t credited but I recognised his voice.
Hyperland aired on the BBC a full year before the World Wide Web. It is a prophecy waylaid in time: the technology it predicts is not the Web. It’s what William Gibson might call a “stub,” evidence of a dead node in the timeline, a three-point turn where history took a pause and backed out before heading elsewhere.
Here, Claire L. Evans uses Adams’s documentary as an opening to dive into the history of hypertext starting with Bush’s Memex, Nelson’s Xanadu and Engelbart’s oNLine System. But then she describes some lesser-known hypertext systems…
In 1985, the students at Brown who encountered Intermedia had never seen anything like it before in their lives. The system laid a world of information at their fingertips, saved them hours at the library, and helped them work through tangles of thought.
Nice and straightforward. Locally:
git branch -m master main
git push -u origin main
Then on the server:
git branch -m master main
git branch -u origin/main
On github.com, go into the repo’s settings and update the default branch.
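One extra step that isn’t covered above: if you’ve got other local clones of the repository lying around, each of those needs the same treatment. Something along these lines should bring them up to date (my own sketch, assuming the remote is named origin):

git branch -m master main          # rename the local branch
git fetch origin                   # pick up the renamed branch from the remote
git branch -u origin/main main     # make main track origin/main
git remote set-head origin -a      # point origin/HEAD at the new default branch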
Thanks for this, Scott!
P.S. Don’t read the comments.
I think Simon is onto something here. While the word “performance” means something amongst devs, it’s too vague to be useful when communicating with other disciplines. I like the idea of using the more descriptive “page speed” or “site speed” in those situations.
Web Performance and Web Performance Optimization are still valid and descriptive terms for our industry, but we might benefit from a change to our language when working with others. The language we use could be critical to the success of making the web a faster and more accessible place.
The parallels between Alex Garland’s Devs and Tom Stoppard’s Arcadia.
80 geocoding service plans to choose from.
I’m going to squirrel this one away for later—I’ve had to switch geocoding providers in the past, so I have a feeling that this could come in handy.
I enjoyed this documentary on legendary sound designer and editor Walter Murch. Kinda makes me want to rewatch The Conversation and The Godfather.
Frank yearns for just-in-time computing:
With each year that goes by, it feels like less and less is happening on the device itself. And the longer our work maintains its current form (writing documents, updating spreadsheets, using web apps, responding to emails, monitoring chat, drawing rectangles), the more unnecessary high-end computing seems. Who needs multiple computers when I only need half of one?
Okay, I knew about the Python shortcut—I mentioned it in Going Offline—but I had no idea it was so easy to do the same thing for PHP. This is a bit of a revelation for me!
Once in the desired directory, run:
php -S localhost:2222
Now you can go to “localhost:2222” in your browser, and if you have an index.html or .php file in your root directory, you’re in business.
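For comparison, the Python shortcut I mentioned works much the same way (a sketch assuming Python 3 is installed; the port number is arbitrary):

python3 -m http.server 2222        # serve the current directory over HTTP

Either way, one command in the right directory and you’ve got a local server.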
What an excellent question! And what an excellent bit of sleuthing to get to the bottom of it. This is like linguistic spelunking on the World Wide Web.
Oh, and of course I love the little sidenote at the end.
Here’s a nice one-sentence definition for the marketing folk:
A Progressive Web App is a regular website following a progressive enhancement strategy to deliver native-like user experiences by using modern Web standards.
But if you’re talking to developers, I implore you to concretely define a Progressive Web App as the combination of HTTPS, a service worker, and a Web App Manifest.
This is a rather lovely idea—technical terms explained with analogies.
I just finished writing something about HTTPS and now I wish I had used this.
An online training course that will banish all fear of the command line, expertly delivered by the one and only Remy Sharp.
For designers, new developers, UX, UI, product owners and anyone who’s been asked to “just open the terminal”.
Take advantage of the special launch price—there are some serious price reductions there.