The AI hype bubble is the new crypto hype bubble
A handy round-up of recent writings on artificial intelligence.
It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.
Yes! Like I said:
Expose the wires. Show the workings-out.
The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.
Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.
We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.
As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our head.
But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.
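To make that inversion concrete, here’s a toy sketch in Python—nothing like the real neuroscience, and every number in it is invented—showing a loop where the guess comes first and the senses only supply corrections:

```python
# A toy predict-then-correct loop: the belief comes first; sensory readings
# only nudge it via a prediction error. The 0.2 learning rate is arbitrary.
def perceive(readings, initial_guess=0.0, learning_rate=0.2):
    belief = initial_guess
    for reading in readings:
        prediction = belief              # first, predict what we expect to sense
        error = reading - prediction     # then compare with what actually arrives
        belief += learning_rate * error  # and adjust the model accordingly
    return belief

print(perceive([10, 11, 9, 10, 12]))  # the guess drifts towards the evidence
```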
The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.
There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.
Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.
Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.
The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!
And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.
Using AI, we will guess who should get a mortgage.
Using AI, we will guess who should get hired.
Using AI, we will guess who should get a strict prison sentence.
Reframed like that, it’s easy to see why technologists want to bury the lede.
Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.
(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)
Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.
But what if this is an interface problem?
Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.
What if the more fanciful guesses were marked as such?
As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?
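For the curious, here’s a minimal sketch in Python of what that dial does under the hood (the token scores are made up, and real systems layer more tricks on top): the same next-token scores get sharpened or flattened by the temperature before a guess is sampled.

```python
import numpy as np

def next_token_distribution(logits, temperature):
    """Turn raw next-token scores into probabilities, scaled by temperature.
    Low temperature concentrates the mass on the safest guess; high
    temperature spreads it across the more fanciful ones."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())   # subtract the max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]                  # invented scores for three candidate tokens
for t in (0.2, 1.0, 2.0):
    print(t, np.round(next_token_distribution(logits, t), 2))
# At 0.2 nearly all the probability lands on the first, safest token;
# at 2.0 the wilder candidates get a real look-in.
```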
I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
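As a sketch of the content-design version—where the thresholds and wording are entirely arbitrary, purely to illustrate the idea—imagine a hypothetical `hedge()` helper sitting between the model’s confidence score and the text the reader sees:

```python
def hedge(statement, confidence):
    """Prefix a generated statement with a caveat that reflects the model's
    own confidence in it. The thresholds are illustration, not doctrine."""
    if confidence >= 0.9:
        return statement
    lowered = statement[0].lower() + statement[1:]
    if confidence >= 0.6:
        return f"Perhaps {lowered}"
    if confidence >= 0.3:
        return f"Maybe {lowered}"
    return f"It’s possible but unlikely that {lowered}"

# Two invented biography-style statements with invented confidence scores:
print(hedge("She studied physics at university.", 0.95))
print(hedge("She once played bass in a Norwegian metal band.", 0.2))
```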
I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.
A little bit of programmed humility could go a long way.
Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.
Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.
I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.
I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.
After all, to guess is human.
Manufactured inevitability a.k.a. bullshit:
There’s a standard trope that tech evangelists deploy when they talk about the latest fad. It goes something like this:
- Technology XYZ is arriving. It will be incredible for everyone. It is basically inevitable.
- The only thing that can stop it is regulators and/or incumbent industries. If they are so foolish as to stand in its way, then we won’t be rewarded with the glorious future that I am promising.
We can think of this rhetorical move as a Reverse Scooby-Doo. It’s as though Silicon Valley has assumed the role of a Scooby-Doo villain — but decided in this case that he’s actually the hero. (“We would’ve gotten away with it, too, if it wasn’t for those meddling regulators!”)
The critical point is that their faith in the promise of the technology is balanced against a revulsion towards existing institutions. (The future is bright! Unless they make it dim.) If the future doesn’t turn out as predicted, those meddlers are to blame. It builds a safety valve into their model of the future, rendering all predictions unfalsifiable.
I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.
“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”
There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.
It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:
I’m thinking of the incredible breakthrough which has been made possible by developments in communications, particularly the transistor and, above all, the communications satellite.
These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.
It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.
The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.
After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.
Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?
Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.
I want to posit that, in a time of great uncertainty—in an era of climate change and declining freedom, of attrition and layoffs and burnout, of a still-unfolding rearrangement of our relationship to work—we would do well to build more space for practicing the future. Not merely anticipating it or fearing it or feeding our anxiety over the possibilities—but for building the skill and strength and habits to nurture the future we need. We can’t control what comes next, of course. But we can nudge, we can push, we can guide and shape, we can have an impact. We can move closer to the future we want to live in, no matter how far away it seems to be.
Prompted by the rhetorical question at the end of my post Today, the distant future, Jim casts his gaze into a crystal ball. I like what he sees.
It’s possible the behemoth that is JavaScript in 2022 continues to metastasize as we move towards 2036, especially with technologies like WASM. I wouldn’t rule out the lordship of JavaScript as a possibility of the future.
However, I also think it’s possible—and dare I predict—to say we are peaking in our divergence and are now facing a convergence back towards building with the grain of the web and its native primitives.
Why do I say that? In our quest for progress, we explored so far beyond the standards-based platform that we came to appreciate the modesty of the approach “use the platform”.
He also makes one prediction that lies within his control:
This blog post will still be accessible via its originally published URL in 2036.
It’s a bit of a cliché to talk about living in the future. It’s also a bit pointless. After all, any moment after the big bang is a future when viewed from any point in time before it.
Still, it’s kind of fun when a sci-fi date rolls around. Like in 2015 when we reached the time depicted in Back To The Future 2, or in 2019 when we reached the time of Blade Runner.
In 2022 we are living in the future of web standards. Again, technically, we’re always living in the future of any past discussion of web standards, but this year is significant …in a very insignificant way.
It all goes back to 2008 and an interview with Hixie, editor of the HTML5 spec at the WHATWG at the time. In it, he mentioned the date 2022 as the milestone for having two completely interoperable implementations.
The far more important—and ambitious—date was 2012, when HTML5 was supposed to become a Candidate Recommendation, which is standards-speak for done’n’dusted.
But the mere mention of the year 2022 back in the year 2008 was too much for some people. Jeff Croft, for example, completely lost his shit (Jeff had a habit of posting angry rants and then denying that he was angry or ranty, but merely having a bit of fun).
The whole thing was a big misunderstanding and soon irrelevant: talk of 2022 was dropped from HTML5 discussions. But for a while there, it was fascinating to see web designers and developers contemplate a year that seemed ludicrously futuristic. Jeff wrote:
God knows where I’ll be in 13 years. Quite frankly, I’ll be pretty fucking disappointed in myself (and our entire industry) if I’m writing HTML in 13 years.
That always struck me as odd. If I thought like that, I’d wonder what the point would be in making anything on the web to begin with (bear in mind that both my own personal website and The Session are now entering their third decade of life).
I had a different reaction to Jeff, as I wrote in 2010:
Many web developers were disgusted that such a seemingly far-off date was even being mentioned. My reaction was the opposite. I began to pay attention to HTML5.
But Jeff was far from alone. Scott Gilbertson wrote an angry article on Webmonkey:
If you’re thinking that planning how the web will look and work 13 years from now is a little bit ridiculous, you’re not alone.
Even if your 2022 ronc-o-matic web-enabled toaster (It slices! It dices! It browses! It arouses!) does ship with Firefox v22.3, will HTML still be the dominant language of web? Given that no one can really answer that question, does it make sense to propose a standard so far in the future?
(I’m re-reading that article in the current version of Firefox: 95.0.2.)
Brian Veloso wrote on his site:
Two-thousand-twenty-two. That’s 14 years from now. Can any of us think that far? Wouldn’t our robot overlords, whether you welcome them or not, have taken over by then? Will the internet even matter then?
From the comments on Jeff’s post, there’s Corey Dutson:
2022: God knows what the Internet will look like at that point. Will we even have websites?
Dan Rubin, who has indeed successfully moved from web work to photography, wrote:
I certainly don’t intend to be doing “web work” by that time. I’m very curious to see where the web actually is in 14 years, though I can’t imagine that HTML5 will even get that far; it’ll all be obsolete before 2022.
Joshua Works made a prediction that’s worryingly close to reality:
I’ll be surprised if website-as-HTML is still the preferred method for moving around the tons of data we create, especially in the manner that could have been predicted in 2003 or even today. Hell, iPods will be over 20 years old by then and if everything’s not run as an iPhone App, then something went wrong.
Someone with the moniker Grand Caveman wrote:
In 2022 I’ll be 34, and hopefully the internet will be obsolete by then.
Perhaps the most level-headed observation came from Jonny Axelsson:
The world in 2022 will be pretty much like the world in 2009.
The world in 2009 is pretty much like 1996 which was pretty much like the world in 1983 which was pretty much like the world in 1970. Some changes are fairly sudden, others are slow, some are dramatic, others subtle, but as a whole “pretty much the same” covers it.
The Web in 2022 will not be dramatically different from the Web in 2009. It will be less hot and it will be less cool. The Web is a project, and as it succeeds it will fade out of our attention and into the background. We don’t care about things when they work.
Now that’s a sensible perspective!
So who else is looking forward to seeing what the World Wide Web is like in 2036?
I must remember to write a blog post then and link back to this one. I have no intention of trying to predict the future, but I’m willing to bet that hyperlinks will still be around in 14 years.
A great tool is not a universal tool; it’s a tool well suited to a specific problem.
The more universal a solution someone claims to have to whatever software engineering problem exists, and the more confident they are that it is a fully generalized solution, the more you should question them.
The Long Now Foundation is dedicated to long-term thinking. I’ve been a member for quite a few years now …which, in the grand scheme of things, is not very long at all.
One of their projects is Long Bets. It sets out to tackle the problem that “there’s no tax on bullshit.” Here’s how it works: you make a prediction about something that will (or won’t happen) by a particular date. So far, so typical thought leadery. But then someone else can challenge your prediction. And here’s the crucial bit: you’ve both got to place your monies where your mouths are.
Ten years ago, I made a prediction on the Long Bets website. It’s kind of meta:
The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.
I made the prediction on February 22nd, 2011 when my mind was preoccupied with digital preservation.
One year later I was on stage in Wellington, New Zealand, giving a talk called Of Time And The Network. I mentioned my prediction in the talk and said:
If anybody would like to take me up on that bet, you can put your money down.
Matt was also speaking at Webstock. When he gave his talk, he officially accepted my challenge.
So now it’s a bet. We both put $500 into the pot. If I win, the Bletchley Park Trust gets that money. If Matt wins, the money goes to The Internet Archive.
As I said in my original prediction:
I would love to be proven wrong.
That was ten years ago today. There’s just one more year to go until the pleasingly alliterative date of 2022-02-22 …or as the Long Now Foundation would write it, 02022-02-22 (gotta avoid that Y10K bug).
It is looking more and more likely that I will lose this bet. This pleases me.
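Anyone can check the state of the bet for themselves. Here’s a rough sketch—it treats any successful response at the original address as “still available”, which is a simplification of whatever the bet’s adjudicators might decide:

```python
from urllib import error, request

URL = "http://www.longbets.org/601"  # the address cited in the original prediction

def still_available(url):
    """Return True if the URL answers with a 2xx response.
    Note: urlopen follows redirects, so a page that has merely moved
    would still count as available here; the judges may be stricter."""
    try:
        with request.urlopen(request.Request(url, method="HEAD"), timeout=10) as resp:
            return 200 <= resp.status < 300
    except error.URLError:
        return False

print(still_available(URL))
```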
Trying to predict the future is a discouraging and hazardous occupation because the prophet invariably falls between two stools. If his predictions sounded at all reasonable, you can be quite sure that in 20 or at most 50 years, the progress of science and technology has made him seem ridiculously conservative. On the other hand, if by some miracle a prophet could describe the future exactly as it was going to take place, his predictions would sound so absurd, so far-fetched, that everybody would laugh him to scorn.
But I couldn’t resist responding to a recent request for augury. Eric asked An Event Apart speakers for their predictions for the coming year. The responses have been gathered together and published, although it’s in the form of a PDF for some reason.
Here’s what I wrote:
This is probably more of a hope than a prediction, but 2021 could be the year that the ponzi scheme of online tracking and surveillance begins to crumble. People are beginning to realize that it’s far too intrusive, that it just doesn’t work most of the time, and that good ol’-fashioned contextual advertising would be better. Right now, it feels similar to the moment before the sub-prime mortgage bubble collapsed (a comparison made in Tim Hwang’s recent book, Subprime Attention Crisis). Back then people thought “Well, these big banks must know what they’re doing,” just as people have thought, “Well, Facebook and Google must know what they’re doing”…but that confidence is crumbling, exposing the shaky stack of cards that props up behavioral advertising. This doesn’t mean that online advertising is coming to an end—far from it. I think we might see a golden age of relevant, content-driven advertising. Laws like Europe’s GDPR will play a part. Apple’s recent changes to highlight privacy-violating apps will play a part. Most of all, I think that people will play a part. They will be increasingly aware that there’s nothing inevitable about tracking and surveillance and that the web works better when it respects people’s right to privacy. The sea change might not happen in 2021 but it feels like the water is beginning to swell.
Still, predicting the future is a mug’s game with as much scientific rigour as astrology, reading tea leaves, or haruspicy.
It me:
Although some communities have listed journalists as “essential workers,” no one claims that status for the keynote speaker. The “work” of being a keynote speaker feels even more ridiculous than usual these days.
Naomi Kritzer published a short story five years ago called So Much Cooking about a food blogger in lockdown during a pandemic. Prescient.
I left a lot of the details about the disease vague in the story, because what I wanted to talk about was not the science but the individuals struggling to get by as this crisis raged around them. There’s a common assumption that if the shit ever truly hit the fan, people would turn on one another like sharks turning on a wounded shark. In fact, the opposite usually happens: humans in disasters form tight community bonds, help their neighbors, offer what they can to the community.
- Wrong: web workers will take over the world
- Wrong: Safari is the new IE
- Right: developer experience is trumping user experience
- Right: I’m better off without a Twitter account
- Right: the cost of small modules
- Mixed: progressive enhancement isn’t dead, but it smells funny
Maybe I should do one of these.
I am not a believer in the AI singularity — the rapture of the nerds — that is, in the possibility of building a brain-in-a-box that will self-improve its own capabilities until it outstrips our ability to keep up. What CS professor and fellow SF author Vernor Vinge described as “the last invention humans will ever need to make”. But I do think we’re going to keep building more and more complicated systems that are opaque rather than transparent, and that launder our unspoken prejudices and encode them in our social environment. As our widely-deployed neural processors get more powerful, the decisions they take will become harder and harder to question or oppose. And that’s the real threat of AI — not killer robots, but “computer says no” without recourse to appeal.
I’m really enjoying this end-of-the-year round-up from people speaking their brains. It’s not over yet, but there’s already a lot of thoughtful stuff to read through.
There are optimistic hopeful thoughts from Sam and from Ire:
Only a few years ago, I would need a whole team of developers to accomplish what can now be done with just a few amazing tools.
And I like this zinger from Geoff:
HTML, CSS, and JavaScript: it’s still the best cocktail in town.
Then there are more cautious prognostications from Dave and from Robin:
The true beauty of web design is that you can pick up HTML, CSS, and the basics of JavaScript within a dedicated week or two. But over the past year, I’ve come to the conclusion that building a truly great website doesn’t require much skill and it certainly doesn’t require years to figure out how to perform the coding equivalent of a backflip.
What you need to build a great website is restraint.
Old technology seldom just goes away. Whiteboards and LED screens join chalk blackboards, but don’t eliminate them. Landline phones get scarce, but not phones. Film cameras become rarities, but not cameras. Typewriters disappear, but not typing. And the technologies that seem to be the most outclassed may come back as the cult objects of aficionados—the vinyl record, for example. All this is to say that no one can tell us what will be obsolete in fifty years, but probably a lot less will be obsolete than we think.
A cli-fi short story by Paolo Bacigalupi.
Speculative fiction as a tool for change:
We need to think harder about the future and ask: What if our policies, institutions, and societies didn’t have to be organized as they are now? Good science fiction taps us into a rich seam of radical answers to this question.
This is the best explanation of quantum computing I’ve read. I mean, it’s not like I can judge its veracity, but I could actually understand it.