Better Living Through Algorithms by Naomi Kritzer : Clarkesworld Magazine
This short story feels like a prequel to Maneki Neko.
- Start with mostly static HTML.
- Progressively enhance the dynamic parts.
- Pick small, focused tools.
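As a minimal sketch of the second point, here's one way progressive enhancement can look in practice. This is a hypothetical disclosure widget of my own devising, not taken from any particular library: the HTML works fine without JavaScript (the panel is simply visible), and the script only upgrades the experience when the APIs it needs are actually available.

```javascript
// Progressive enhancement sketch (hypothetical example): upgrade a
// button/panel pair into a toggleable disclosure widget, but only if
// the environment supports what we need. If not, return false and
// leave the static HTML untouched.
function enhanceDisclosure(button, panel) {
  if (!button || !panel || typeof button.addEventListener !== 'function') {
    return false; // no enhancement; the content stays visible as plain HTML
  }
  // Enhanced starting state: panel collapsed, state exposed to assistive tech.
  button.setAttribute('aria-expanded', 'false');
  panel.hidden = true;
  button.addEventListener('click', () => {
    const open = button.getAttribute('aria-expanded') === 'true';
    button.setAttribute('aria-expanded', String(!open));
    panel.hidden = open;
  });
  return true;
}
```

The key design choice is that the script is additive: the baseline experience never depends on it, so a failed download or an old browser degrades gracefully rather than breaking.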
LLMs have never experienced anything. They are just programs that have ingested unimaginable amounts of text. LLMs might do a great job at describing the sensation of being drunk, but this is only because they have read a lot of descriptions of being drunk. They have not, and cannot, experience it themselves. They have no purpose other than to produce the best response to the prompt you give them.
This doesn’t mean they aren’t impressive (they are) or that they can’t be useful (they are). And I truly believe we are at a watershed moment in technology. But let’s not confuse these genuine achievements with “true AI.”
Coincidentally, I was just talking about hammers and nails in another context.
Progressive enhancement used to be a standard approach. Then React came along, and it didn’t support that approach, so folks stopped talking about it and focused entirely on JavaScript-centric client-side solutions. A few years later, folks are talking about progressive enhancement again, under the new name of “islands”.
What is going on here?
It turns out, it’s the same old thing. Vendors peddling their wares. When Facebook introduced React, that act transformed the front-end space into a hype-driven, cult-of-personality disaster zone where folks could profit from creating the right image and narrative. I observed that it particularly preyed on the massive influx of young web developers. Facebook had finally found the silver bullet of web development, or so they claimed! Just adopt our tech, no questions asked, and you too can be a rock star making six figures! We’ve been living through this mess for ten years now.
The cosmic ballet goes on.
Google has a serious AI problem. That problem isn’t “how to integrate AI into Google products?” That problem is “how to exclude AI-generated nonsense from Google products?”
Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
Once again, absolutely spot-on analysis from Ted Chiang.
I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.
Whether consciously or not, AI manufacturers have decided to prioritise plausibility over accuracy. It means AI systems are impressive, but in a world plagued by conspiracy and disinformation this decision only deepens the problem.
An exploration of the problems and possible futures of flooding the web with generative AI content.
Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s only grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website.
I concur with Chris’s assessment:
I just think it’s fuckin’ rude.
This anthology of Steve Jobs interviews, announcements and emails is available to read for free as a nicely typeset web book.
Writing, both code and prose, for me, is both an end product and an end in itself. I don’t want to automate away the things that give me joy.
And that is something that I’m more and more aware of as I get older – sources of joy. It’s good to diversify them, to keep track of them, because it’s way too easy to run out. Or to end up with just one, and then lose it.
The thing about luddites is that they make good punchlines, but they were all people.
I feel like there’s a connection here between what Kevin Kelly is describing and what I wrote about guessing (though I think he might be conflating consciousness with intelligence).
This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.
Erin is back! Add this beautiful blog’s RSS feed to your reader now.
Solarpunk and synthetic biology as a two-pronged approach to the future:
Neither synbio nor Solarpunk has all the right answers, but when they are joined in a symbiotic relationship, they become greater than the sum of their parts. If people could express what they needed, and if scientists could champion those desires — then Solarpunk becomes a will and synbio becomes a way.
I have been reminded time and time again of the utility of writing. How it is a way to turn messy thoughts into coherent ideas, and how – as we all know – practice makes perfect. So I’m going to give it a go.
Welcome to the indie web, Sam!
The story that “artificial intelligence” tells is a smoke screen. But smoke offers only temporary cover. It fades if it isn’t replenished.
A profile of the Xerox Alto and the people behind it.
A great piece by James, adapted from the new edition of his book New Dark Age.
The lesson of the current wave of “artificial” “intelligence”, I feel, is that intelligence is a poor thing when it is imagined by corporations. If your view of the world is one in which profit maximisation is the king of virtues, and all things shall be held to the standard of shareholder value, then of course your artistic, imaginative, aesthetic and emotional expressions will be woefully impoverished. We deserve better from the tools we use, the media we consume and the communities we live within, and we will only get what we deserve when we are capable of participating in them fully. And don’t be intimidated by them either – they’re really not that complicated. As the science-fiction legend Ursula K Le Guin wrote: “Technology is what we can learn to do.”
I think it’s mostly inertia.