
Friday, March 24th, 2023

Hello, internet | Sam O’Neill

I have been reminded time and time again of the utility of writing. How it is a way to turn messy thoughts into coherent ideas, and how – as we all know – practice makes perfect. So I’m going to give it a go.

Welcome to the indie web, Sam!

Thursday, March 23rd, 2023

Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines also took care of the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.
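
To make that concrete, here’s a minimal sketch in Python. The task—matching ISO 8601 dates—and the regular expression standing in for the chatbot’s output are both hypothetical:

    import re

    # A hypothetical regular expression, standing in for what a chatbot
    # returned when asked to match ISO 8601 dates like 2023-03-23.
    chatbot_regex = r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$"
    pattern = re.compile(chatbot_regex)

    # Before trusting it, run it against examples where I already know the answer.
    should_match = ["2023-03-23", "1999-12-31"]
    should_not_match = ["2023-13-01", "23-03-2023", "2023-03-32"]

    for example in should_match:
        assert pattern.match(example), f"expected a match for {example}"
    for example in should_not_match:
        assert not pattern.match(example), f"expected no match for {example}"

    print("The chatbot's regular expression passed every spot check.")

Spot checks like these complement the second chatbot’s explanation: one verifies behaviour, the other verifies intent.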

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”
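
Purely as a sketch, you could also rehearse the chatbot’s SQL against a throwaway in-memory database before letting it anywhere near the real one. The table, the data, and the statement here are all hypothetical:

    import sqlite3

    # A throwaway in-memory database, standing in for the structure
    # my friend described to the chatbot.
    connection = sqlite3.connect(":memory:")
    connection.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
    )
    connection.executemany(
        "INSERT INTO orders (customer, total) VALUES (?, ?)",
        [("Alice", 120.0), ("Bob", 40.0), ("Alice", 60.0)],
    )

    # A hypothetical statement a chatbot might hand back for
    # "select each customer's total spend, highest first".
    chatbot_sql = """
        SELECT customer, SUM(total) AS spend
        FROM orders
        GROUP BY customer
        ORDER BY spend DESC
    """

    # Run it against data where I already know the right answer.
    for row in connection.execute(chatbot_sql):
        print(row)  # ('Alice', 180.0) then ('Bob', 40.0)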

Playing chatbots off against each other like this is kinda how machine learning works under the hood: generative adversarial networks.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.
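
If I were to sketch that job in Python, it might look something like this round trip—with the caveat that the two ask functions are entirely hypothetical stand-ins for calls to two different large language models, not any real API:

    from typing import Callable

    # Hypothetical: a function that takes a prompt and returns a chatbot's reply.
    AskChatbot = Callable[[str], str]

    def round_trip_check(
        task: str, generate: AskChatbot, explain: AskChatbot
    ) -> tuple[str, bool]:
        """Have one model write something, then have a second model explain it back."""
        artifact = generate(f"Write a regular expression that will {task}")
        explanation = explain(f"Explain what this regular expression does: {artifact}")
        # Crude comparison: does the second model's explanation describe
        # the task I originally asked for?
        return artifact, task.lower() in explanation.lower()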

Learn Privacy

Stuart has written this fantastic concise practical guide to privacy for developers and designers. A must-read!

  1. Use just the data you need
  2. Third parties
  3. Fingerprinting
  4. Encryption
  5. Best practices

Why ChatGPT Won’t Replace Coders Just Yet

I’ve been using Copilot for over a year now, and this is more or less how I use it: To help me quickly blast through boilerplate code so I can more quickly get to the tricky bits.

There’s a more subtle problem with ChatGPT’s code generation, which is that it suffers from ChatGPT’s general “bullshit” problem.

Smoke screen | A Working Library

The story that “artificial intelligence” tells is a smoke screen. But smoke offers only temporary cover. It fades if it isn’t replenished.

Wednesday, March 22nd, 2023

Checked in at Jolly Brewer. Wednesday night session 🎻🎻🎻🎶 — with Jessica

Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them. The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is the result of large language models, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Monday, March 20th, 2023

The AI hype bubble is the new crypto hype bubble

A handy round-up of recent writings on artificial intelligence.

Pixel Pioneers Bristol 2023 Speaker Spotlight: Jeremy Keith

Oliver asked me some questions about my upcoming talk at Pixel Pioneers in Bristol in June. Here are my answers.

Sunday, March 19th, 2023

Checked in at The Bugle Inn. Sunday session 🎶🎻

Artificial Guessing

Artificial Intelligence sounds much more impressive than Artificial Guessing in a slide deck.

Robin picks up on my framing.

Instead of brainstorming, discussing, iterating, closely inspecting a product to understand it and figure out what to show on a page, well, we can just let the machines figure it out for us! This big guessing machine can do our homework and we can all pack up and go to the beach.

ongoing by Tim Bray · The LLM Problem

It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.

Yes! Like I said:

Expose the wires. Show the workings-out.

Thursday, March 16th, 2023

The stupidity of AI | The Guardian

A great piece by James, adapted from the new edition of his book New Dark Age.

The lesson of the current wave of “artificial” “intelligence”, I feel, is that intelligence is a poor thing when it is imagined by corporations. If your view of the world is one in which profit maximisation is the king of virtues, and all things shall be held to the standard of shareholder value, then of course your artistic, imaginative, aesthetic and emotional expressions will be woefully impoverished. We deserve better from the tools we use, the media we consume and the communities we live within, and we will only get what we deserve when we are capable of participating in them fully. And don’t be intimidated by them either – they’re really not that complicated. As the science-fiction legend Ursula K Le Guin wrote: “Technology is what we can learn to do.”

Wednesday, March 15th, 2023

Checked in at Jolly Brewer. Wednesday night session 🎻🎶 — with Jessica

Another three speakers for UX London 2023

I know I’m being a tease, doling out these UX London speaker announcements in batches rather than one big reveal. Indulge me in my suspense-ratcheting behaviour.

Today I’d like to unveil three speakers whose surnames start with the letter H…

  • Stephen Hay, Creative Director at Rabobank,
  • Asia Hoe, Senior Product Designer, and
  • Amy Hupe, Design Systems consultant at Frankly Design.

Portrait photos of Stephen, Asia, and Amy.

Just look at how that line-up is coming together! There’ll be just one more announcement and then the roster will be complete.

But don’t wait for that. Grab your ticket now and I’ll see you in London on June 22nd and 23rd!

Stochastic Parrots Day Tickets, Fri, Mar 17, 2023 at 8:00 AM | Eventbrite

This free event is running online from 3pm to 7pm UK time this Friday. The line-up features Emily Bender, Safiya Noble, Timnit Gebru and more.

Since the publication of On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜 two years ago, many of the harms the paper warned about, and more, have unfortunately occurred. From exploited workers filtering hateful content, to an engineer claiming that chatbots are sentient, the harms are only accelerating.

Join the co-authors of the paper and various guests to reflect on what has happened in the last two years, what the large language model landscape currently looks like, and where we are headed vs where we should be headed.