Tags: bots

Thursday, March 23rd, 2023

Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines took over the annoying little repetitive tasks that nobody enjoys in the same way.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”

Playing chatbots off against each other like this is kinda reminiscent of how some machine learning models are trained under the hood: generative adversarial networks, where one model generates output and a second model tries to find fault with it.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.

Wednesday, March 22nd, 2023

Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them. The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is generated by a large language model, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Monday, January 16th, 2023

Mars distracts

A few years ago, I wrote about how much I enjoyed the book Aurora by Kim Stanley Robinson.

Not everyone liked that book. A lot of people were put off by its structure, in which the dream of interstellar colonisation meets the harsh truth of reality and the book follows where that leads. It pours cold water over the very idea of humanity becoming interplanetary.

But our own solar system is doable, right? I mean, Kim Stanley Robinson is the guy who wrote the Mars trilogy and 2312, both of which depict solar system colonisation in just a few centuries.

I wonder if the author might regret the way that some have taken his Mars trilogy as a sort of manual, Torment Nexus style. Kim Stanley Robinson is very much concerned with this planet in this time period, but others use his work to justify looking elsewhere.

But the backlash to Mars has begun.

Maciej wrote Why Not Mars:

The goal of this essay is to persuade you that we shouldn’t send human beings to Mars, at least not anytime soon. Landing on Mars with existing technology would be a destructive, wasteful stunt whose only legacy would be to ruin the greatest natural history experiment in the Solar System. It would no more open a new era of spaceflight than a Phoenician sailor crossing the Atlantic in 500 B.C. would have opened up the New World. And it wouldn’t even be that much fun.

Manu Saadia is writing a book about humanity in space, and he has a corresponding newsletter called Against Mars: Space Colonization and its Discontents:

What if space colonization was merely science-fiction, a narrative, or rather a meta-narrative, a myth, an ideology like any other? And therefore, how and why did it catch on? What is so special and so urgent about space colonization that countless scientists, engineers, government officials, billionaire oligarchs and indeed, entire nations, have committed work, ingenuity and treasure to make it a reality.

What if, and hear me out, space colonization was all bullshit?

I mean that quite literally. No hyperbole. Once you peer under the hood, or the nose, of the rocket ship, you encounter a seemingly inexhaustible supply of ghoulish garbage.

Two years ago, Shannon Stirone went into the details of why Mars Is a Hellhole:

The central thing about Mars is that it is not Earth, not even close. In fact, the only things our planet and Mars really have in common is that both are rocky planets with some water ice and both have robots (and Mars doesn’t even have that many).

Perhaps the most damning indictment of the case for Mars colonisation is that its most ardent advocate turns out to be an idiotic small-minded eugenicist who can’t even run a social media company, much less a crewed expedition to another planet.

But let’s be clear: we’re talking here about the proposition of sending humans to Mars—ugly bags of mostly water that probably wouldn’t survive. Robots and other uncrewed missions in our solar system …more of that, please!

Saturday, April 24th, 2021

things are a little crazy rn

Adversarial chatbots engaged in an endless back-and-forth:

This piece simulates scheduling hell by generating infinite & unique combinations of meeting conflicts between two friends.

Tuesday, October 20th, 2020

My chatbot is dead · Why yours should probably be too · Adrian Z

The upside to being a terrible procrastinator is that certain items on my to-do list, like, say, “build a chatbot”, will—given enough time—literally take care of themselves.

I ultimately feel like it has slowly turned into a fad. I got fooled by the trend, and as a by-product became part of the trend itself.

Friday, December 28th, 2018

Malicious AI Report

Well, this is an interesting format experiment—the latest Black Mirror just dropped, and it’s a PDF.

Thursday, December 6th, 2018

WALL·E | Typeset In The Future

A deep dive into Pixar’s sci-fi masterpiece, featuring entertaining detours to communist propaganda and Disney theme parks.

Thursday, October 4th, 2018

Infovore » Pouring one out for the Boxmakers

This is a rather beautiful piece of writing by Tom (especially the William Gibson bit at the end). This got me right in the feels:

Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things… have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That’s it: back to single-serving websites for single-serving use cases.

A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it’s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.

Tuesday, June 26th, 2018

Untold AI: The Untold | Sci-fi interfaces

Prompted by his time at Clearleft’s AI gathering in Juvet, Chris has been delving deep into the stories we tell about artificial intelligence …and what stories are missing.

And here we are at the eponymous answer to the question that I first asked at Juvet around 7 months ago: What stories aren’t we telling ourselves about AI?

Tuesday, February 27th, 2018

Andy Budd - De l’imaginaire à la réalité : panorama de la robotique on Vimeo

A thoroughly entertaining talk by Andy looking at the past, present, and future of robots, AI, and automation.

Monday, June 12th, 2017

Design in the Era of the Algorithm | Big Medium

The transcript of Josh’s fantastic talk on machine learning, voice, data, APIs, and all the other tools of algorithmic design:

The design and presentation of data is just as important as the underlying algorithm. Algorithmic interfaces are a huge part of our future, and getting their design right is critical—and very, very hard to do.

Josh put together ten design principles for conceiving, designing, and managing data-driven products. I’ve added them to my collection.

  1. Favor accuracy over speed
  2. Allow for ambiguity
  3. Add human judgment
  4. Advocate sunshine
  5. Embrace multiple systems
  6. Make it easy to contribute (accurate) data
  7. Root out bias and bad assumptions
  8. Give people control over their data
  9. Be loyal to the user
  10. Take responsibility

Wednesday, March 15th, 2017

Systems Smart Enough To Know When They’re Not Smart Enough | Big Medium

I can forgive our answer machines if they sometimes get it wrong. It’s less easy to forgive the confidence with which the bad answer is presented, giving the impression that the answer is definitive. That’s a design problem.

Wednesday, December 7th, 2016

After the flood | Projects | Robot Life Survey

Lovely prints!

The Robot Life Survey is an alternative-history from design company After the flood, where mechanical intelligence is discovered by man, noted and painted for posterity and science.

Monday, May 9th, 2016

Bots | A Working Library

Absolutely brilliant stuff from Mandy (again). A long hard look at today’s tech industry’s narrow approach to bots and artificial intelligence compared to some far more interesting and imaginative approaches in fiction:

  • Ann Leckie’s superb Imperial Radch series,
  • Kim Stanley Robinson’s Aurora, and
  • Alex Garland’s Ex Machina.

So in addition to frightening ramifications for privacy and information discovery, they also reinforce gendered stereotypes about women as servants. The neutral politeness that infects them all furthers that convention: women should be utilitarian, performing their duties on command without fuss or flourish. This is a vile, harmful, and dreadfully boring fantasy; not the least because there is so much extraordinary art around AI that both deconstructs and subverts these stereotypes. It takes a massive failure of imagination to commit yourself to building an artificial intelligence and then name it “Amy.”

Sunday, April 24th, 2016

Conversational interfaces

Psst… Jeremy! Right now you’re getting notified every time something is posted to Slack. That’s great at first, but now that activity is increasing you’ll probably prefer dialing that down.

Slackbot, 2015

What’s happening?

Twitter, 2009

Why does everyone always look at me? I know I’m a chalkboard and that’s my job, I just wish people would ask before staring at me. Sometimes I don’t have anything to say.

Existentialist chalkboard, 2007

I’m Little MOO - the bit of software that will be managing your order with us. It will shortly be sent to Big MOO, our print machine who will print it for you in the next few days. I’ll let you know when it’s done and on its way to you.

Little MOO, 2006

It looks like you’re writing a letter.

Clippy, 1997

Your quest is to find the Warlock’s treasure, hidden deep within a dungeon populated with a multitude of terrifying monsters. You will need courage, determination and a fair amount of luck if you are to survive all the traps and battles, and reach your goal — the innermost chambers of the Warlock’s domain.

The Warlock Of Firetop Mountain, 1982

Welcome to Adventure!! Would you like instructions?

Colossal Cave, 1976

I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write.

I, Pencil, 1958

ÆLFRED MECH HET GEWYRCAN
Ælfred ordered me to be made

Ashmolean Museum, Oxford

The Ælfred Jewel, ~880

Technical note

I have marked up the protagonist of each conversation using the cite element. There is a long-running dispute over the use of this element. In HTML 4.01 it was perfectly fine to use cite to mark up a person being quoted. In the HTML Living Standard, usage has been narrowed:

The cite element represents the title of a work (e.g. a book, a paper, an essay, a poem, a score, a song, a script, a film, a TV show, a game, a sculpture, a painting, a theatre production, a play, an opera, a musical, an exhibition, a legal case report, a computer program, etc). This can be a work that is being quoted or referenced in detail (i.e. a citation), or it can just be a work that is mentioned in passing.

A person’s name is not the title of a work — even if people call that person a piece of work — and the element must therefore not be used to mark up people’s names.

I disagree.

In the examples above, it’s pretty clear that I, Pencil and The Warlock Of Firetop Mountain are valid use cases for the cite element according to the HTML5 definition; they are titles of works. But what about Clippy or Little MOO or Slackbot? They’re not people …but they’re not exactly titles of works either.
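
To make that concrete, here’s a simplified sketch of the kind of markup I mean (the blockquote scaffolding is illustrative rather than the exact code on this page, but the cite usage is the point):

  <!-- The title of a work: uncontroversially valid according to the spec. -->
  <blockquote>
    <p>I am a lead pencil…</p>
  </blockquote>
  <p><cite>I, Pencil</cite>, 1958</p>

  <!-- Not a person, but not quite the title of a work either. I use cite anyway. -->
  <blockquote>
    <p>It looks like you’re writing a letter.</p>
  </blockquote>
  <p><cite>Clippy</cite>, 1997</p>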

If I were to mark up a dialogue between Eliza and a human being, should I only mark up Eliza’s remarks with cite? In text transcripts of conversations with Alexa, Siri, or Cortana, should only their side of the conversation get attributed as a source? Or should their names also go unmarked, because cite must not be used for people’s names …even though, by any conventional definition, they aren’t people?

It’s downright botist.

Tuesday, August 25th, 2015

dConstruct 2015 podcast: Brian David Johnson

The newest dConstruct podcast episode features the indefatigable and effervescent Brian David Johnson. Together we pick apart the futures we are collectively making, probe the algorithmic structures of science fiction narratives, and pay homage to Asimovian robotic legal codes.

Brian’s enthusiasm is infectious. I have a strong hunch that his dConstruct talk will be both thought-provoking and inspiring.

dConstruct 2015 is getting close now. Our future approaches. Interviewing the speakers ahead of time has only increased my excitement and anticipation. I think this is going to be a truly unmissable event. So, uh, don’t miss it.

Grab your ticket today and use the code ‘ansible’ to take advantage of the 10% discount for podcast listeners.

Thursday, August 20th, 2015

dConstruct 2015 podcast: Carla Diana

The dConstruct podcast episodes are coming thick and fast. The latest episode is a thoroughly enjoyable natter I had with the brilliant Carla Diana.

We talk about robots, smart objects, prototyping, 3D printing, and the world of teaching design.

Remember, you can subscribe to the podcast feed in any podcast software you like, or if iTunes is your thing, you can also subscribe directly in iTunes.

And don’t forget to use the discount code ‘ansible’ when you’re buying your dConstruct ticket …because you are coming to dConstruct, right?

Wednesday, December 17th, 2014

TARS, CASE & KIPP from Interstellar

Print out the plans, fold and glue/sellotape the paper together, and you’ve got yourself the best sci-fi robots in recent cinema history.