Tags: ai


Tuesday, July 4th, 2023

Talking about “web3” and “AI”

When I was hosting the DIBI conference in Edinburgh back in May, I moderated an impromptu panel on AI:

On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.

Something else that happened at that event was that I met Deborah Dawton from the Design Business Association. She must’ve liked the cut of my jib because she invited me to come and speak at their get-together in Brighton on the topic of “AI, Web3 and design.”

The representative from the DBA who contacted me knew what they were letting themselves in for. They wrote:

I’ve read a few of your posts on the subject and it would be great if you could join us to share your perspectives.

How could I say no?

I’ve published a transcript of the short talk I gave.

“Web3” and “AI”

A short talk delivered at a gathering in Brighton by the Design Business Association in July 2023 on the topic of “Web3, AI and Design”.

Hello. I was asked by the Design Business Association to talk to you today about “web3 and AI.”

I’d like to explain what those terms mean.

“Web3”

Let’s start with “web3.” Fortunately I don’t have to come up with an explanation for this term because my friend Heydon Pickering has recorded a video entitled “what is web 3.0?”

What is web trois point nought?

Web uno dot zilch was/is a system of interconnected documents traversable by hyperlink.

However, web deux full stop nowt was/is a system of interconnected documents traversable by hyperlink.

On the other hand, web drei dot zilch is a system of interconnected documents traversable by hyperlink.

Should you wish to upgrade to web three point uno, expect a system of interconnected documents traversable by hyperlink.

If we ever get to web noventa y cinco, you can bet your sweet @rse, it will be a system of interconnected documents traversable by f*!king hyperlink.

There you have it. “Web3” is a completely meaningless term. If someone uses it, they’re probably trying to sell you something.

If you ask for a definition, you’ll get a response like “something something decentralisation something something blockchain.”

As soon as someone mentions blockchain, you can tune out. It’s the classic example of a solution in search of a problem (although it’s still early days; it’s only been …more than a decade).

I can give you a definition of what a blockchain is. A blockchain is multiple copies of a spreadsheet.

I find it useful to be able to do mental substitutions like that when it comes to buzzwords. Like, remember when everyone was talking about “the cloud” but no one was asking what that actually meant? Well, by mentally substituting “the cloud” with “someone else’s server” you get a much better handle on the buzzword.

So, with “web3” out of the way, we can move onto the next buzzword. AI.

“AI”

The letters A and I are supposed to stand for Artificial Intelligence. It’s a term that’s almost as old as digital computing itself. It goes right back to the 1950s.

These days we’d use the term Artificial General Intelligence—AGI—to talk about that original vision of making computers as smart as people.

Vision is the right term here, because AGI remains a thought experiment. This is the realm of super intelligence: world-ending AI overlords; paperclip maximisers; Roko’s basilisk.

These are all fascinating thought experiments but they’re in the same arena as speculative technologies like faster-than-light travel or time travel. I’m happy to talk about any of those theoretically-possible topics, but that’s not what we’re here to talk about today.

When you hear about AI today, you’re probably hearing about specific technologies like large language models and machine learning.

Let’s take a look at large language models and their visual counterparts, diffusion models. They both work in the same way. You take a metric shit ton of data and you assign each piece of it to a token. So you’ve got a numeric token that represents a bigger item: a phrase in a piece of text, or an object in an image.

The author Ted Chiang used a really good analogy to describe this process when he said ChatGPT is like a blurry JPEG of the web.

Just as image formats like JPG use compression to smush image data, these models use compression to smush data into tokens.

By the way, the GPT part of ChatGPT stands for Generative Pre-trained Transformer. The pre-training is that metric shit ton of data I mentioned. The generative part is about combining—or transforming—tokens in a way that should make probabilistic sense.
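To make the token idea a little more concrete, here’s a toy sketch in TypeScript. The vocabulary and the numbers are invented purely for illustration; real models learn vocabularies of tens of thousands of tokens during that pre-training.

// A toy illustration of assigning pieces of text to numeric tokens.
// The vocabulary here is made up for the example; real models use
// vocabularies learned from their training data.
const vocabulary: Record<string, number> = {
  the: 1,
  cat: 2,
  sat: 3,
  on: 4,
  mat: 5,
};

function tokenise(text: string): number[] {
  return text
    .toLowerCase()
    .split(/\s+/)
    .map((word) => vocabulary[word] ?? 0); // 0 stands in for "unknown"
}

console.log(tokenise("The cat sat on the mat")); // [1, 2, 3, 4, 1, 5]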

Terminology

Here’s some more terminology that comes up when people talk about these tools.

Overfitting. This is when the output produced by a generative pre-trained transformer is too close to the original data that fed the model. Another word for overfitting is plagiarism.

Hallucinations. People use this word when the output produced by a generative pre-trained transformer strays too far from reality. Another word for this is lying. Although the truth is that all of the output is a form of hallucination—that’s the generative part. Sometimes the output happens to match objective reality. Sometimes it doesn’t.

What about the term AI itself? Is there a more accurate term we could be using?

I’m going to quote Ted Chiang again. He proposes that a more accurate term is applied statistics. I like that. It points to the probabilistic nature of these tools: take an enormous amount of inputs, then generate something that feels similar based on implied correlations.

I like to think of “AI” as a kind of advanced autocomplete. I don’t say that to denigrate it. Quite the opposite. Autocomplete is something that appears mundane on the surface but has an incredible amount of complexity underneath: real-time parsing of input, a massive database of existing language, and on-the-fly predictions of the next most suitable word. Large language models do the same thing, but on a bigger scale.
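If it helps to picture that, here’s a minimal sketch of the autocomplete idea in TypeScript: tally which word tends to follow which in a tiny corpus, then suggest the most frequent follower. The corpus is obviously invented; large language models do something comparable over tokens, with vastly more data and far richer statistics, but it’s still prediction rather than understanding.

// A toy "advanced autocomplete": count which word follows which,
// then predict the most likely next word.
const corpus = "the cat sat on the mat the cat lay on the rug";
const words = corpus.split(" ");
const counts = new Map<string, Map<string, number>>();

for (let i = 0; i < words.length - 1; i++) {
  const current = words[i];
  const next = words[i + 1];
  const followers = counts.get(current) ?? new Map<string, number>();
  followers.set(next, (followers.get(next) ?? 0) + 1);
  counts.set(current, followers);
}

function predictNext(word: string): string | undefined {
  const followers = counts.get(word);
  if (!followers) return undefined;
  // Return the follower with the highest count.
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predictNext("the")); // "cat"
console.log(predictNext("on")); // "the"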

What’s it good for?

So what is AI good for? Or rather, what is a language or diffusion model good for? Or what is applied statistics or advanced autocomplete good for?

Transformation. These tools are really good at transforming between formats. Text to speech. Speech to text. Text to images. Long form to short form. Short form to long form.

Think of transcripts. Summaries. These are smart uses of this kind of technology.

Coding, to a certain extent, can be considered a form of transformation. I’ve written books on programming, and I always advise people to first write out what they want in English. Then translate each line of English into the programming language. Large language models do a pretty good job of this right now, but you still need a knowledgeable programmer to check the output for errors—there will be errors.
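Here’s a rough sketch of that English-first approach; the task and the code are invented purely for illustration. A language model can handle the translation step surprisingly well, but deciding what the English lines should say in the first place is still on you.

// First, write out what you want in English:
// 1. Take a list of prices.
// 2. Keep only the prices over ten pounds.
// 3. Add them up.
// 4. Report the total.

// Then translate each line of English into the programming language:
const prices = [4.5, 12.0, 25.99, 8.0, 15.5]; // 1. a list of prices
const overTen = prices.filter((price) => price > 10); // 2. keep only prices over ten pounds
const total = overTen.reduce((sum, price) => sum + price, 0); // 3. add them up
console.log(`Total: £${total.toFixed(2)}`); // 4. report the total (£53.49)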

(As for long-form and short-form text transformations, the end game may be an internet filled with large language models endlessly converting our written communications.)

When it comes to the design process, these tools are good at quantity, not quality. If you need to generate some lorem ipsum placeholder text—or images—go for it.

What they won’t help with is problem definition. And it turns out that understanding and defining the problem is the really hard part of the design process.

Use these tools for inputs, not outputs. I would never publish the output of one of these tools publicly. But I might use one of these tools at the beginning of the process to get over the blank page. If I want to get a bunch of mediocre ideas out of the way quickly, these tools can help.

There’s an older definition of the initialism AI that dairy farmers would be familiar with, when “the AI man” would visit the farm. In that context, AI stands for artificial insemination. Perhaps that’s also a more helpful definition of AI tools in the design process.

But, like I said, the outputs are not for public release. For one thing, the generated outputs aren’t automatically copyrighted. That’s only fair. Technically, it’s not your work. It is quite literally derivative.

Why all the hype?

Everything I’ve described here is potentially useful in some circumstances, but not Earth-shattering. So what’s with all the hype?

Venture capital. With this model of funding, belief in a technology’s future matters more than the technology’s actual future.

We’ve already seen this in action with self-driving cars, the metaverse, and cryptobollocks. Reality never matched the over-inflated expectations, but that made no difference to the people profiting from the investments in those technologies (as long as they made sure to get out in time).

By the way, have you noticed how all your crypto spam has been replaced by AI spam? Your spam folder is a good gauge of what’s hot in venture capital circles right now.

The hype around AI is benefiting from a namespace clash. Remember, AI as in applied statistics or advanced autocomplete has nothing in common with AI as in Artificial General Intelligence. But because the same term is applied to both, the AI hype machine can piggyback on the AGI discourse.

It’s as if we decided to call self-driving cars “time machines”—we’d be debating the ethics of time travel as though it were plausible.

For a refreshing counter-example, take a look at what Apple is saying about AI. Or rather, what it isn’t saying. In the most recent Apple keynote, the term AI wasn’t mentioned once.

Technology blogger Om Malik wrote:

One of the most noticeable aspects of the keynote was the distinct lack of mention of AI or ChatGPT.

I think this was a missed marketing opportunity for the company.

I couldn’t disagree more. Apple is using machine learning a-plenty: facial recognition, categorising your photos, and more. But instead of over-inflating that work with the term AI, they stick to the more descriptive term of machine learning.

I think this will pay off when the inevitable hype crash comes. Other companies that have tied their value to the mast of AI will see their stock prices tank. But because Apple is not associating themselves with that term, they’re well positioned to ride out that crash.

What should you do?

Alright, it’s time for me to wrap this up with some practical words of advice.

Beware of the Law of the instrument. You know the one: when all you have is a hammer, everything looks like a nail. There’s a corollary to that: when the market is investing heavily in hammers, everyone’s going to try to convince you that the world is full of nails. See if you can instead cultivate a genuine sense of nailspotting.

It should ring alarm bells if you find yourself thinking “how can I find a use for this technology?” Rather, spend your time figuring out what problem you’re trying to solve and only then evaluate which technologies might help you.

Never make any decision out of fear. FOMO—Fear Of Missing Out—has been weaponised again and again, by crypto, by “web3”, by “AI”.

The message is always the same: “don’t get left behind!”

“It’s inevitable!” they cry. But you know what’s genuinely inevitable? Climate change. So maybe focus your energy there.

Links

I’ll leave you with some links.

I highly recommend you get a copy of the book, The Intelligence Illusion by Baldur Bjarnason. You can find it at illusion.baldurbjarnason.com

The subtitle is “a practical guide to the business risks of generative AI.” It doesn’t get into philosophical debates on potential future advances. Instead it concentrates squarely on the pros and cons of using these tools in your business today. It’s backed up by tons of research with copious amounts of footnotes and citations if you want to dive deeper into any of the issues.

If you don’t have time to read the whole book, Baldur has also created a kind of cheat sheet. Go to needtoknow.fyi and you can get a one-page list of cards to help you become an AI bullshit detector.

I keep track of interesting developments in this space on my own website, tagging with “machine learning” at adactio.com/tags/machinelearning

Thank you very much for your time today.

Word Count 53: The state of AI and the Goodreads fiasco

Could the tsunami of AI shite turn out to be a flash flood? Might the models rapidly degrade into uselessness or soon be sued or blocked out of existence? Will users rebel as their experience of the internet is degraded?

In my most optimistic moments, I find myself hoping that the whole AI edifice will come tumbling down as tools disintegrate, people realise how unreliable they are, and how valuable human-generated and curated information really is. But it’s not a safe bet.

Saturday, July 1st, 2023

Introducing AI Help: Your Trusted Companion for Web Development | MDN Blog

As part of this pointless push, an “AI explain” button appeared on MDN articles. This terrible idea actually got pushed to production (bypassing the usual deploy steps) where it lasted less than a day.

You can read the havoc it wreaked in the short term. We’ll find out how much long-term damage it has done to trust in Mozilla and MDN.

This may be the worst use of a large language model I’ve seen since synthetic users (if you click that link, no it’s not a joke: “user research without the users” is what they’re actually proposing).

Monday, June 26th, 2023

In new AI hype frenzy, tech is applying the label to everything now

Today’s AI promoters are trying to have it both ways: They insist that AI is crossing a profound boundary into untrodden territory with unfathomable risks. But they also define AI so broadly as to include almost any large-scale, statistically-driven computer program.

Under this definition, everything from the Google search engine to the iPhone’s face-recognition unlocking tool to the Facebook newsfeed algorithm is already “AI-driven” — and has been for years.

Tuesday, June 20th, 2023

A prayer wheel for capitalism

Why “AI” won’t help you get past the blank page in any meaningful way:

The value in writing lies in what we discover while writing.

Sunday, June 18th, 2023

Will GPT models choke on their own exhaust? | Light Blue Touchpaper

There’s a general consensus that large language models are going to get better and better. But what if this is as good as it gets …before the snake eats its own tail?

The tails of the original content distribution disappear. Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions. We call this effect model collapse.

Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale.

AI Hype-Driven Development - Parallels in History

Simulmatics as a company was established in 1959 and declared bankruptcy in 1970. The founders picked this name as a mash of ‘simulation’ and ‘automatic’, hoping to coin a new term that would live for decades, which apparently didn’t happen! They worked on building what they called the People Machine to simulate and predict human behavior. It was marketed as a revolutionary technology that would completely change business, politics, warfare and more. Doesn’t this sound familiar?!

The man who tried to redeem the world with logic - Big Think

The fascinating—and tragic—story of Walter Pitts and Warren McCulloch, whose lives and work intersected with Norbert Wiener and John von Neumann:

Thanks to their work, there was a moment in history when neuroscience, psychiatry, computer science, mathematical logic, and artificial intelligence were all one thing, following an idea first glimpsed by Leibniz—that man, machine, number, and mind all use information as a universal currency. What appeared on the surface to be very different ingredients of the world—hunks of metal, lumps of gray matter, scratches of ink on a page—were profoundly interchangeable.

Probable events poison reality - by Rob Horning

No matter what a specific technology does — convert the world’s energy into gambling tokens, encourage people to live inside a helmet, replace living cognition with a statistical analysis of past language use, etc., etc. — all of them are treated mainly as instances of the “creative destruction” necessary for perpetuating capitalism.

Meet the new hype, same as the old hype:

Recent technological pitches — crypto, the “metaverse,” and generative AI — seem harder to defend as inevitable universal improvements of anything at all. It is all too easy to see them as gratuitous innovations whose imagined use cases seem far-fetched at best and otherwise detrimental to all but the select few likely to profit from imposing them on society. They make it starkly clear that the main purpose of technology developed under capitalism is to secure profit and sustain an unjust economic system and social hierarchies, not to advance human flourishing.

Consequently, the ideological defense of technology becomes a bit more desperate.

Saturday, June 17th, 2023

Vibe Shift

Forget every article you’ve read that tries to explain large language models. Just read this post by Peter and feel it.

Tuesday, June 13th, 2023

When I lost my job, I learned to code. Now AI doom mongers are trying to scare me all over again | Tristan Cross | The Guardian

Ingesting every piece of art ever into a machine which lovelessly boils them down to some approximated median result isn’t artistic expression. It may be a neat parlour trick, a fun novelty, but an AI is only able to produce semi-convincing knock-offs of our creations precisely because real, actual people once had the thought, skill and will to create them.

Monday, June 12th, 2023

Today’s AI is unreasonable - Anil Dash

Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn’t keep it from being bullshit. Worse, these systems are not meant to generate consistent bullshit — you can get different bullshit answers from the same prompts. You can put garbage in and get… bullshit out, but the same quality bullshit that you get from non-garbage inputs! And enthusiasts are currently mistaking the fact that the bullshit is consistently wrapped in the same envelope as meaning that the bullshit inside is consistent, laundering the unreasonable-ness into appearing reasonable.

Sunday, June 11th, 2023

Sunday

Today was a good day. The weather was beautiful.

Jessica and I did a little bit of work in the garden—nothing too sweaty. Then Jessica cut my hair. It looks good. And it feels good to have my neck freed up.

We went for a Sunday roast at the nearest pub, which does a most excellent carvery. It was tasty and plentiful so after strolling home, I wanted to do nothing more than sit around.

I sat outside in the back garden under the dappled shade offered by the overhanging trees. I had a good book. I had my mandolin to hand. I’d reach for it occasionally to play a tune or two.

Coco the cat—not our cat—sat nearby, stretching her paws out lazily in the warm muggy air.

It was a good day.

Tuesday, June 6th, 2023

Reaction

It all started with a trip into the countryside one Sunday a few weeks back.

The weather has been getting better and better. The countryside was calling. Meanwhile, Jessica was getting worried about her newly-acquired driving skills getting rusty. She has her license, but doesn’t get the chance to drive very often. She signed up to a car club that lets her book a hybrid car for a few hours at a time—just enough to keep in practice, and also just enough for a little jaunt into the countryside.

We went for Sunday lunch at the Shepherd and Dog in Fulking, near to Devil’s Dyke (I swear that sentence makes sense if you live ’round these parts). It was a lovely day. The Sunday roast was good. But it was on the way back that things started to go wrong.

We had noticed that one of the front tyres was looking a little flat so we planned to stop into a garage to get that seen to. We never made it that far. The tell-tale rhythmic sounds of rubber flapping around told us that we now had a completely flat tyre. Cue panic.

Fortunately we weren’t too far from a layby. We pulled in on the side of the busy road that runs by Saddlescombe Farm.

This is when the Kafkaesque portion of the day began. Jessica had to call the car club, but reception was spotty, to put it mildly. There was much frustration, repetition, and hold music.

Eventually it was sorted out enough that we were told to wait for someone from the AA who’d come by and change the tyre in a few hours. To be fair, there are worse places to be stuck on a sunny Summer’s day. We locked the car and walked off across the rolling hills to pass the time.

The guy from the AA actually showed up earlier than expected. We hurried back and then sat and watched as he did his mechanical mending. We got the all-clear to drive the car back to Brighton, as long as we didn’t exceed 50 miles per hour.

By the time we got home, we were beat. What a day! I could feel the beginnings of a headache so I popped some ibuprofen to stave it off. Neither of us could be bothered cooking, so we opted for a lazy evening in front of the telly eating takeaway.

I went onto Deliveroo and realised I couldn’t even manage the cognitive overhead of deciding what to eat. So I just went to my last order—a nice mix of Chinese food—and clicked on the option to place exactly the same order again.

And so we spent our Sunday evening munching on Singapore fried noodles and catching up on the most excellent Aussie comedy series, Colin From Accounts. It was just what I needed after an eventful day.

I had just finished my last bite when I felt I needed to cough. That kicked off some wheezing. That was a bit weird. So was the itchy sensation in my ears. Like, the insides of my ears were itchy. Come to think of it, my back was feeling really itchy too.

The wheeziness was getting worse. I had been trying to pass it off, responding to Jessica’s increasingly worried questions with “I’m grand, I’ll be f…” Sorry, had to cough. Trying to clear my throat. It feels a bit constricted.

When Jessica asked if she should call 111, I nodded. Talking took a bit of effort.

Jessica described my symptoms over the phone. Then the operator asked to speak to me. I answered the same questions, but in a much wheezier way.

An ambulance was on its way. But if the symptoms got worse, we should call 999.

The symptoms got worse. Jessica called 999. The ambulance arrived within minutes.

The two paramedics, Alastair and Lucy, set to work diagnosing the problem. Let’s go into the ambulance, they said. They strapped a nebuliser onto my face which made breathing easier. It also made everything I said sound like a pronouncement from Bane.

They were pretty sure it was anaphylaxis. I’ve never been allergic to anything in my life, but clearly I was reacting to something. Was it something in the Chinese food? Something in the countryside?

In any case, they gave me a jab of antihistamine into my arm and took us to the emergency room.

By the time we got there, I was feeling much better. But they still needed to keep me under observation. So Jessica and I spent a few hours sitting in the hallway. Someone came by every now and then to check on me and offer us some very welcome cups of tea.

Once it was clear that I was fully recovered, I was discharged with a prescription for an EpiPen.

I picked up the prescription the next day. Having an EpiPen filled with adrenaline was reassuring but it was disconcerting not knowing what caused my anaphylactic reaction in the first place.

After that stressful weekend, life went back to normal, but with this cloud of uncertainty hovering above. Was that it? Would it happen again? Why did it happen?

The weather stayed nice all week. By the time the next weekend rolled around, I planned to spend it doing absolutely nothing. That was just as well, because when I woke up on Saturday morning, I had somehow managed to twist something in my shoulder. I guess I’m at that age now where I can injure myself in my sleep.

I took some naproxen, which helped. After a while, the pain was gone completely.

Jessica and I strolled to the park and had brunch in a nice local café. Then we strolled home and sat out in the garden, enjoying the sunshine.

I was sitting there reading my book when I noticed it. The insides of my ears. They were getting itchy. I swallowed nervously. Was it my imagination or did that swallowing sensation feel slightly constricted? And is that a wheeze I hear?

It was happening again.

The symptoms continued to get worse. Alright, it was time to use that EpiPen. I had read the instructions carefully so I knew just what to do. I did the EpiPen mambo: hold, jab, press.

It worked. We called 999 (as instructed) and were told to go to the emergency room. This time we went by taxi.

I checked in, and then sat in the waiting room. I noticed that everyone else had white wristbands, but mine was red. I guess my place in the triage was high priority.

As I sat there, I could feel some of those symptoms returning, but very slowly. By the time we saw someone, there was no mistaking it. The symptoms were coming back.

I was hooked up to the usual instruments—blood pressure, heart rate, blood oxygen—while the hospital staff conferred about what to do. I was getting a bit clammy. I started to feel a bit out of it.

Beep, beep! One of those numbers—blood oxygen?—had gone below a safe threshold. I saw the staff go into action mode. Someone hit a button—the red light in the ceiling started flashing. Staff who had been dealing with other patients came to me.

Instructions were spoken clearly and efficiently, then repeated back with equal clarity and efficiency. “Adrenaline. One in ten thousand.” “Adrenaline. One in ten thousand.” They reclined my chair, elevated my legs, pulled down my trousers, and gave me my second shot in one day.

It worked. I started to feel much better straight away. But once again, I needed to be kept under observation. I was moved to the “resus” ward, passing through the corridor that was so familiar from the previous weekend.

This time we’d spend a grand total of twelve hours in the hospital. Once again, it was mercifully uneventful. But it gave us the opportunity to put two and two together. What was the common thread between both episodes?

Ibuprofen. Naproxen. They’re both non-steroidal anti-inflammatory drugs (NSAIDs). That fits:

Foods are the most common trigger in children and young adults, while medications and insect bites and stings are more common in older adults. … Any medication may potentially trigger anaphylaxis. The most common are β-lactam antibiotics (such as penicillin) followed by aspirin and NSAIDs.

The doctors agreed—the connection looked pretty clear. I saw my GP a few days later and she’s referred me to an allergy-testing clinic to confirm it. That might take a while though. In the meantime, I also got another prescription for more EpiPens.

Hopefully I won’t need them. I’m very, very glad that I don’t appear to be allergic to a foodstuff. I’d rather do without ibuprofen and aspirin than have to vigilantly monitor my diet.

But I do need to get into the habit of making sure I’ve got at least one EpiPen with me wherever I go. I’ll probably never need to use it. I feel like I’ve had enough anaphylaxis in the past couple of weeks to last me a lifetime.

Oh, and one more thing. I know everyone says this after dealing with some kind of health emergency in this country, but I’m going to say it anyway:

The NHS is easily the best thing ever invented in the UK. Everyone I dealt with was fantastic. It was all in a day’s work for them, but I am forever in their debt (whereas had this happened in, say, the USA, I would forever be in a much more literal debt).

Thank you, NHS!

Thursday, June 1st, 2023

Automate the CEOs - by Hamilton Nolan - How Things Work

Let’s be rational here. If I were to imagine a job that was a perfect candidate for replacement by AI, it would be one that consists of measurable tasks that can be learned—allocation of capital, creation and execution of market strategy, selection of candidates for top roles—and one that costs the company a shitload of money. In other words: executives.

The logic is sound. However…

The CEOs will be spared from automation not because they should be, but because they are making the decisions about who is spared from automation.

Wednesday, May 31st, 2023

Future-first design thinking

If we’re serious about creating a sustainable future, perhaps we should change this common phrase from “Form follows Function” to “Form – Function – Future”. While form and function are essential considerations, the future, represented by sustainability, should be at the forefront of our design thinking. And actually, if sustainability is truly at the forefront of the way we create new products, then maybe we should revise the phrase even further to “Future – Function – Form.” This revised approach would place our future, represented by sustainability, at the forefront of our design thinking. It would encourage us to first ask ourselves, “What is the most sustainable way to design X?” and then consider how the function of X can be met while ensuring it remains non-harmful to people and the planet.

Tuesday, May 30th, 2023

“Artificial Intelligence & Humanity,” an article by Dan Mall

AI is great at anything quantity-related and bad at anything quality-related.

Sensible thinking from Dan here, that mirrors what we’re thinking at Clearleft.

In other words, it leans heavily on averages; the closer the training data matches an average, the higher degree of confidence that the result is more “correct,” or at least desirable.

The problem is that this is the polar opposite of what we consider creativity to be. Creativity isn’t about averages. It’s about the outliers, sometimes the one thing that’s different than all the rest.

Saturday, May 27th, 2023