Statement on Generative AI | Ben Myers
I endorse this statement.
It looks like it will be a great tool for prototyping: a tool to give developers who don’t have experience with CSS and layout a starting point. As someone who spent some time building smoke-and-mirrors prototypes for UX research, I welcome tools like this.
What concerns me is the assertion that this is production-grade code when it simply is not.
The slides and transcript from a great talk by Maggie Appleton, including this perfect description of the vibes we get from large language models:
It feels like they’re either geniuses playing dumb or dumb machines playing genius, but we don’t know which.
Another great talk from Simon that explains large language models in a hype-free way.
Now that the horse has bolted—and ransacked the web—you can shut the barn door:
To disallow GPTBot from accessing your site, you can add GPTBot to your site’s robots.txt:
User-agent: GPTBot
Disallow: /
Emily M. Bender:
I dislike the term because “artificial intelligence” suggests that there’s more going on than there is, that these things are autonomous thinking entities rather than tools and simply kinds of automation. If we focus on them as autonomous thinking entities or we spin out that fantasy, it is easier to lose track of the people in the picture, both the people who should be accountable for what the systems are doing and the people whose labor and data are being exploited to create them in the first place.
Alternative terms:
And this is worth shouting from the rooftops:
The threat is not the generative “AI” itself. It’s the way that management might choose to use it.
This is a really clear, practical, level-headed explanatory talk from Simon. You can read the transcript or watch the video.
I’m not down with Google swallowing everything posted on the internet to train their generative AI models.
This would mean a lot more if it happened before the wholesale harvesting of everyone’s work.
But I’m sure Google will put a mighty fine lock on that stable door that the horse bolted from.
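For what it’s worth, the lock is the same mechanism as OpenAI’s: a robots.txt rule. A minimal sketch, assuming the Google-Extended user-agent token that Google announced for controlling use in AI training (it’s separate from search indexing):

User-agent: Google-Extended
Disallow: /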
I want to live in a future where Artificial Intelligences can relieve humans of the drudgery of labour. But I don’t want to live in a future which is built by ripping-off people against their will.
- Be skeptical of PR hype
- Question the training data
- Evaluate the model
- Consider downstream harms
Taken together, these flaws make LLMs look less like an information technology and more like a modern mechanisation of the psychic hotline.
Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent to phoning a psychic for advice.
Imagine Google or a major tech company trying to fix their search engine by adding a psychic hotline to their front page? That’s what they’re doing with Bard.
Could the tsunami of AI shite turn out to be a flash flood? Might the models rapidly degrade into uselessness or soon be sued or blocked out of existence? Will users rebel as their experience of the internet is degraded?
In my most optimistic moments, I find myself hoping that the whole AI edifice will come tumbling down as tools disintegrate, people realise how unreliable they are, and how valuable human-generated and curated information really is. But it’s not a safe bet.
As part of this pointless push, an “AI explain” button appeared on MDN articles. This terrible idea actually got pushed to production (bypassing the usual deploy steps) where it lasted less than a day.
You can read the havoc it wreaked in the short term. We’ll find out how much long-term damage it has done to trust in Mozilla and MDN.
This may be the worst use of a large language model I’ve seen since synthetic users (if you click that link, no, it’s not a joke: “user research without the users” is what they’re actually proposing).
Today’s AI promoters are trying to have it both ways: They insist that AI is crossing a profound boundary into untrodden territory with unfathomable risks. But they also define AI so broadly as to include almost any large-scale, statistically-driven computer program.
Under this definition, everything from the Google search engine to the iPhone’s face-recognition unlocking tool to the Facebook newsfeed algorithm is already “AI-driven” — and has been for years.
Why “AI” won’t help you get past the blank page in any meaningful way:
The value in writing lies in what we discover while writing.
There’s a general consensus that large language models are going to get better and better. But what if this is as good as it gets… before the snake eats its own tail?
The tails of the original content distribution disappear. Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions. We call this effect model collapse.
Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale.
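You can see a cartoon version of that collapsing distribution in a few lines of code. This is a toy sketch, not the paper’s experiment: repeatedly refit a Gaussian to samples drawn from the previous generation’s fit, and the estimated spread typically dwindles towards zero, a delta function in the limit.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
n = 10                # a deliberately small training set each round

for generation in range(1, 61):
    # Each model is trained only on output sampled from its predecessor.
    data = rng.normal(mu, sigma, n)
    # Refit by maximum likelihood; np.std's default ddof=0 is the MLE.
    mu, sigma = data.mean(), data.std()
    if generation % 15 == 0:
        print(f"generation {generation}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

The bias is small each round (the maximum-likelihood variance underestimates by a factor of (n−1)/n), but it compounds across generations, which is the whole point of the model collapse argument.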
Simulmatics was established as a company in 1959 and declared bankruptcy in 1970. The founders picked the name as a blend of ‘simulation’ and ‘automatic’, hoping to coin a new term that would live for decades; apparently, it didn’t. They worked on building what they called the People Machine to simulate and predict human behavior. It was marketed as a revolutionary technology that would completely change business, politics, warfare, and more. Doesn’t this sound familiar?!
The fascinating—and tragic—story of Walter Pitts and Warren McCulloch, whose lives and work intersected with those of Norbert Wiener and John von Neumann:
Thanks to their work, there was a moment in history when neuroscience, psychiatry, computer science, mathematical logic, and artificial intelligence were all one thing, following an idea first glimpsed by Leibniz—that man, machine, number, and mind all use information as a universal currency. What appeared on the surface to be very different ingredients of the world—hunks of metal, lumps of gray matter, scratches of ink on a page—were profoundly interchangeable.
No matter what a specific technology does — convert the world’s energy into gambling tokens, encourage people to live inside a helmet, replace living cognition with a statistical analysis of past language use, etc., etc. — all of them are treated mainly as instances of the “creative destruction” necessary for perpetuating capitalism.
Meet the new hype, same as the old hype:
Recent technological pitches — crypto, the “metaverse,” and generative AI — seem harder to defend as inevitable universal improvements of anything at all. It is all too easy to see them as gratuitous innovations whose imagined use cases seem far-fetched at best and otherwise detrimental to all but the select few likely to profit from imposing them on society. They make it starkly clear that the main purpose of technology developed under capitalism is to secure profit and sustain an unjust economic system and social hierarchies, not to advance human flourishing.
Consequently, the ideological defense of technology becomes a bit more desperate.