A prayer wheel for capitalism
Why “AI” won’t help you get past the blank page in any meaningful way:
The value in writing lies in what we discover while writing.
There’s a general consensus that large language models are going to get better and better. But what if this is as good as it gets …before the snake eats its own tail?
The tails of the original content distribution disappear. Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions. We call this effect model collapse.
Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale.
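Here’s a toy simulation of the effect described above, just to make the mechanism visible (a one-dimensional Gaussian standing in for “content”, with arbitrary parameters; an illustration, not the paper’s experiment). Each generation fits a new model to samples from the previous one, and the variance drains away:

```python
# Toy "model collapse": each generation is trained (fitted) on the
# previous generation's output instead of the original distribution.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0          # the "original content" distribution
samples_per_generation = 20   # small, so sampling error is visible

for generation in range(201):
    if generation % 50 == 0:
        print(f"gen {generation:3d}: sigma = {sigma:.4f}")
    data = rng.normal(mu, sigma, samples_per_generation)
    mu, sigma = data.mean(), data.std()  # refit on synthetic data

# sigma shrinks toward zero: the tails disappear and the
# distribution heads toward a delta function.
```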
Simulmatics was founded in 1959 and declared bankruptcy in 1970. The founders picked the name as a mash-up of ‘simulation’ and ‘automatic’, hoping to coin a new term that would live for decades, which apparently didn’t happen! They worked on building what they called the People Machine to simulate and predict human behavior. It was marketed as a revolutionary technology that would completely change business, politics, warfare and more. Doesn’t this sound familiar?!
No matter what a specific technology does — convert the world’s energy into gambling tokens, encourage people to live inside a helmet, replace living cognition with a statistical analysis of past language use, etc., etc. — it is treated mainly as an instance of the “creative destruction” necessary for perpetuating capitalism.
Meet the new hype, same as the old hype:
Recent technological pitches — crypto, the “metaverse,” and generative AI — seem harder to defend as inevitable universal improvements of anything at all. It is all too easy to see them as gratuitous innovations whose imagined use cases seem far-fetched at best and otherwise detrimental to all but the select few likely to profit from imposing them on society. They make it starkly clear that the main purpose of technology developed under capitalism is to secure profit and sustain an unjust economic system and social hierarchies, not to advance human flourishing.
Consequently, the ideological defense of technology becomes a bit more desperate.
Forget every article you’ve read that tries to explain large language models. Just read this post by Peter and feel it.
Ingesting every piece of art ever into a machine which lovelessly boils them down to some approximated median result isn’t artistic expression. It may be a neat parlour trick, a fun novelty, but an AI is only able to produce semi-convincing knock-offs of our creations precisely because real, actual people once had the thought, skill and will to create them.
Today’s highly-hyped generative AI systems (most famously OpenAI’s) generate bullshit by design. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn’t keep it from being bullshit. Worse, these systems are not meant to generate consistent bullshit — you can get different bullshit answers from the same prompts. You can put garbage in and get… bullshit out, but the same quality of bullshit that you get from non-garbage inputs! And enthusiasts are currently mistaking the fact that the bullshit is consistently wrapped in the same envelope as meaning that the bullshit inside is consistent, laundering the unreasonableness into appearing reasonable.
Let’s be rational here. If I were to imagine a job that was a perfect candidate for replacement by AI, it would be one that consists of measurable tasks that can be learned—allocation of capital, creation and execution of market strategy, selection of candidates for top roles—and one that costs the company a shitload of money. In other words: executives.
The logic is sound. However…
The CEOs will be spared from automation not because they should be, but because they are making the decisions about who is spared from automation.
AI is great at anything quantity-related and bad at anything quality-related.
Sensible thinking from Dan here, that mirrors what we’re thinking at Clearleft.
In other words, it leans heavily on averages; the closer the training data matches an average, the higher degree of confidence that the result is more “correct,” or at least desirable.
The problem is that this is the polar opposite of what we consider creativity to be. Creativity isn’t about averages. It’s about the outliers, sometimes the one thing that’s different than all the rest.
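Here’s a deliberately silly sketch of why that is (hypothetical numbers, no real model involved): with greedy decoding, the most statistically typical continuation wins every single time, and the outlier never surfaces.

```python
# A toy next-word distribution for "the sky is ..." — made-up
# frequencies, standing in for what a model learns from its corpus.
import random

next_word = {
    "blue": 0.80,                      # the average, safe answer
    "grey": 0.15,
    "a wound that never heals": 0.05,  # the creative outlier
}

def decode(dist, temperature=1.0):
    if temperature == 0:
        # Greedy decoding: always pick the mode of the distribution.
        return max(dist, key=dist.get)
    # Sampling with temperature: rescale and draw at random.
    weights = [p ** (1 / temperature) for p in dist.values()]
    return random.choices(list(dist), weights=weights)[0]

print(decode(next_word, temperature=0))  # always "blue"
```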
Generative AI: What You Need To Know is a free resource that will help you develop an AI-bullshit detector.
You can read all the cards on one page, print them out, or print to PDF.
But in calling these programs “artificial intelligence” we grant them a claim to authorship that is simply untrue. Each of those tokens used by programs like ChatGPT—the “language” in their “large language model”—represents a tiny, tiny piece of material that someone else created. And those authors are not credited for it, paid for it or asked permission for its use. In a sense, these machine-learning bots are actually the most advanced form of a chop shop: They steal material from creators (that is, they use it without permission), cut that material into parts so small that no one can trace them and then repurpose them to form new products.
Seven principles for journalism in the age of AI
- Be rigorous with your definitions.
- Predict less, explain more.
- Don’t hype things up.
- Focus on the people building AI systems — and the people affected by their release.
- Offer strategic takes on products.
- Emphasize the tradeoffs involved.
- Remember that nothing is inevitable.
LLMs have never experienced anything. They are just programs that have ingested unimaginable amounts of text. LLMs might do a great job at describing the sensation of being drunk, but this is only because they have read a lot of descriptions of being drunk. They have not, and cannot, experience it themselves. They have no purpose other than to produce the best response to the prompt you give them.
This doesn’t mean they aren’t impressive (they are) or that they can’t be useful (they are). And I truly believe we are at a watershed moment in technology. But let’s not confuse these genuine achievements with “true AI.”
I’m sure you’ve heard the law of the instrument: when all you have is a hammer, everything looks like a nail.
There’s another side to it. If you’re selling hammers, you’ll depict a world full of nails.
Recent hammers include cryptobollocks and virtual reality. It wasn’t enough for blockchains and the metaverse to be potentially useful for some situations; they staked their reputations on being utterly transformative, disrupting absolutely every facet of life.
This kind of hype is a terrible strategy in the long term. But if you can convince enough people in the short term, you can make a killing on the stock market. In truth, the technology itself is superfluous. It’s the hype that matters. And if the hype is over-inflated enough, you can even get your critics to do your work for you, broadcasting their fears about these supposedly world-changing technologies.
You’d think we’d learn. If an industry cries wolf enough times, surely we’d become less trusting of extraordinary claims. But the tech industry continues to cry wolf—or rather, “hammer!”—at regular intervals.
The latest hammer is machine learning, usually—incorrectly—referred to as Artificial Intelligence. What makes this hype cycle particularly infuriating is that there are genuine use cases. There are some nails for this hammer. They’re just not as plentiful as the breathless hype—both positive and negative—would have you believe.
When I was hosting the DiBi conference last week, there was a little section on generative “AI” tools. Matt Garbutt covered the visual side, demoing tools like Midjourney. Scott Salisbury covered the text side, showing how you can generate code. Afterwards we had a panel discussion.
During the panel I asked some fairly straightforward questions that nobody could answer. Who owns the input (the data used by these generative tools)? Who owns the output?
On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.
Scott explicitly warned people off using generative tools for production code. His advice was to stick to side projects for now.
Matt took a closer look at where these tools could fit into your day-to-day design work. Mostly it was pretty sensible, except when he suggested that there could be any merit to using these tools as a replacement for user testing. That’s a terrible idea. A classic hammer/nail mismatch.
I think I moderated the panel reasonably well, but I have one regret. I wish I had first read Baldur Bjarnason’s new book, The Intelligence Illusion. I started reading it on the train journey back from Edinburgh but it would have been perfect for the panel.
The Intelligence Illusion is very level-headed. It is neither pro- nor anti-AI. Instead it takes a pragmatic look at both the benefits and the risks of using these tools in your business.
It has excellent advice for spotting genuine nails. For example:
Generative AI has impressive capabilities for converting and modifying seemingly unstructured data, such as prose, images, and audio. Using these tools for this purpose has less copyright risk, fewer legal risks, and is less error prone than using it to generate original output.
Think about transcripts of videos or podcasts—an excellent use of this technology. As Baldur puts it:
The safest and, probably, the most productive way to use generative AI is to not use it as generative AI. Instead, use it to explain, convert, or modify.
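As a concrete illustration of that advice, here’s a minimal sketch using the OpenAI Python SDK (the model name, prompt, and file name are all assumptions, not recommendations): hand the model existing material and ask it to reshape it, rather than asking it to invent.

```python
# Convert, don't generate: turn an existing transcript into show
# notes instead of asking for freshly invented prose.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = open("episode-42-transcript.txt").read()  # assumed file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {"role": "system",
         "content": "Convert this transcript into bullet-point show "
                    "notes. Use only information in the transcript; "
                    "do not add anything."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```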
He also says:
Prefer internal tools over externally-facing chatbots.
That chimes with what I’ve been seeing. The most interesting uses of this technology that I’ve seen involve a constrained dataset. Like the way Luke trained a language model on his own content to create a useful chat interface.
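For the curious, that pattern looks something like this bare-bones sketch (the file names, model choices, and single-nearest-post retrieval are all illustrative assumptions):

```python
# Constrained dataset: answer questions from your own posts rather
# than from the model's open-ended training data.
from openai import OpenAI
import numpy as np

client = OpenAI()

posts = [open(f).read() for f in ["post-1.txt", "post-2.txt"]]

def embed(texts):
    result = client.embeddings.create(
        model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

post_vectors = embed(posts)

def answer(question):
    q = embed([question])[0]
    # Cosine similarity: find the post closest to the question.
    scores = post_vectors @ q / (
        np.linalg.norm(post_vectors, axis=1) * np.linalg.norm(q))
    context = posts[int(scores.argmax())]
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Answer only from this post:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content
```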
Anyway, The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.
In some ways, the fervor around AI is reminiscent of blockchain hype, which has steadily cooled since its 2021 peak. In almost all cases, blockchain technology serves no purpose but to make software slower, more difficult to fix, and a bigger target for scammers. AI isn’t nearly as frivolous—it has several novel use cases—but many are rightly wary of the resemblance. And there are concerns to be had; AI bears the deceptive appearance of a free lunch and, predictably, has non-obvious downsides that some founders and VCs will insist on learning the hard way.
This is a good level-headed overview of how generative language model tools work.
If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. That’s what AI does. That’s the whole story.
There’s very practical advice on deciding where and when these tools make sense:
The sweet spot for AI is a context where its choices are limited, transparent, and safe. We should be giving it an API, not an output box.
Of course, users can learn over time which prompts work well and which don’t, but the burden of learning what works still lies with every single user, when it could instead be baked into the interface.
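A quick sketch of what “an API, not an output box” might look like in practice (the labels, model name, and fallback are made up for illustration): constrain the model to a fixed menu of choices and validate before acting.

```python
# Limited, transparent, safe: the model picks from a fixed menu,
# and anything off-menu falls back to a human.
from openai import OpenAI

client = OpenAI()
ALLOWED = {"refund", "replace", "escalate"}  # hypothetical menu

def classify(ticket_text):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word: refund, "
                        "replace, or escalate."},
            {"role": "user", "content": ticket_text},
        ],
    )
    choice = reply.choices[0].message.content.strip().lower()
    # Never trust free text: validate before acting on it.
    return choice if choice in ALLOWED else "escalate"
```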
Maggie Appleton:
An exploration of the problems and possible futures of flooding the web with generative AI content.
I feel like there’s a connection here between what Kevin Kelly is describing and what I wrote about guessing (though I think he might be conflating consciousness with intelligence).
This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.
Baldur has a new book coming out:
The Intelligence Illusion is an exhaustively researched guide to the business risks of Generative AI.
I like how Luke is using a large language model to make a chat interface for his own content.
This is the exact opposite of how grifters are selling the benefits of machine learning (“Generate copious amounts of new content instantly!”) and instead builds on over twenty years of thoughtful human-made writing.