
Thursday, March 23rd, 2023

Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand, finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channelled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines also took care of the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.
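To make that concrete, here’s the kind of round trip I mean, sketched in Python. The pattern and the test strings are invented for illustration; the point is that as well as asking a second chatbot for an explanation, I can test the answer against examples whose results I already know:

    import re

    # A regular expression a chatbot might hand back for the prompt
    # "match dates in YYYY-MM-DD format".
    pattern = re.compile(r"^\d{4}-\d{2}-\d{2}$")

    # Before trusting it, run it against inputs I can verify myself.
    should_match = ["2023-03-23", "1821-06-01"]
    should_not_match = ["23/03/2023", "2023-3-23", "not a date"]

    assert all(pattern.match(s) for s in should_match)
    assert not any(pattern.match(s) for s in should_not_match)

If the tests pass and the second chatbot’s explanation matches what I originally asked for, I’d feel reasonably confident using the pattern.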

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.
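Something like this, to make the shape of that exchange concrete. The schema here is entirely invented (a hypothetical clients-and-meetings database), and I’m using SQLite purely to have something runnable:

    import sqlite3

    # A throwaway database with the structure he'd describe to the chatbot.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute(
        "CREATE TABLE meetings (id INTEGER PRIMARY KEY, "
        "client_id INTEGER REFERENCES clients(id), held_on TEXT)"
    )
    db.execute("INSERT INTO clients VALUES (1, 'Acme Ltd')")
    db.executemany(
        "INSERT INTO meetings VALUES (?, ?, ?)",
        [(1, 1, "2023-02-14"), (2, 1, "2023-03-23")],
    )

    # The kind of statement the chatbot might produce for:
    # "select each client along with the date of their most recent meeting".
    query = """
        SELECT clients.name, MAX(meetings.held_on) AS last_meeting
        FROM clients
        JOIN meetings ON meetings.client_id = clients.id
        GROUP BY clients.id
    """
    print(db.execute(query).fetchall())  # [('Acme Ltd', '2023-03-23')]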

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”

Playing chatbots off against each other like this is kinda how some machine learning training works under the hood: generative adversarial networks.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.

Why ChatGPT Won’t Replace Coders Just Yet

I’ve been using Copilot for over a year now, and this is more or less how I use it: to help me quickly blast through boilerplate code so I can more quickly get to the tricky bits.

There’s a more subtle problem with ChatGPT’s code generation, which is that it suffers from ChatGPT’s general “bullshit” problem.

Smoke screen | A Working Library

The story that “artificial intelligence” tells is a smoke screen. But smoke offers only temporary cover. It fades if it isn’t replenished.

Wednesday, March 22nd, 2023

Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them. The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is the result of a large language model, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Monday, March 20th, 2023

The AI hype bubble is the new crypto hype bubble

A handy round-up of recent writings on artificial intelligence.

Sunday, March 19th, 2023

Artificial Guessing

Artificial Intelligence sounds much more impressive than Artificial Guessing in a slide deck.

Robin picks up on my framing.

Instead of brainstorming, discussing, iterating, closely inspecting a product to understand it and figure out what to show on a page, well, we can just let the machines figure it out for us! This big guessing machine can do our homework and we can all pack up and go to the beach.

ongoing by Tim Bray · The LLM Problem

It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.

Yes! Like I said:

Expose the wires. Show the workings-out.

Thursday, March 16th, 2023

The stupidity of AI | The Guardian

A great piece by James, adapted from the new edition of his book New Dark Age.

The lesson of the current wave of “artificial” “intelligence”, I feel, is that intelligence is a poor thing when it is imagined by corporations. If your view of the world is one in which profit maximisation is the king of virtues, and all things shall be held to the standard of shareholder value, then of course your artistic, imaginative, aesthetic and emotional expressions will be woefully impoverished. We deserve better from the tools we use, the media we consume and the communities we live within, and we will only get what we deserve when we are capable of participating in them fully. And don’t be intimidated by them either – they’re really not that complicated. As the science-fiction legend Ursula K Le Guin wrote: “Technology is what we can learn to do.”

Wednesday, March 15th, 2023

Stochastic Parrots Day Tickets, Fri, Mar 17, 2023 at 8:00 AM | Eventbrite

This free event is running online from 3pm to 7pm UK time this Friday. The line-up features Emily Bender, Safiya Noble, Timnit Gebru and more.

Since the publication of On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜 two years ago, many of the harms the paper warned about, and more, have unfortunately occurred. From exploited workers filtering hateful content, to an engineer claiming that chatbots are sentient, the harms are only accelerating.

Join the co-authors of the paper and various guests to reflect on what has happened in the last two years, what the large language model landscape currently looks like, and where we are headed vs where we should be headed.

Tuesday, March 14th, 2023

The climate cost of the AI revolution • Wim Vanderbauwhede

As a society we need to treat AI resources as finite and precious, to be utilised only when necessary, and as effectively as possible. We need frugal AI.

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our head.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.
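If it helps, here’s a toy caricature of that flipped loop in code. It’s nothing like the brain’s actual machinery, just an illustration of guess-first, correct-afterwards:

    def predict_then_correct(observations, initial_guess=0.0, learning_rate=0.3):
        belief = initial_guess
        for observed in observations:
            prediction = belief                # first, guess what's out there
            error = observed - prediction      # then compare against the senses
            belief += learning_rate * error    # then adjust the model
        return belief

    # Noisy sensory readings of a quantity that's really about 10.
    print(predict_then_correct([9.5, 10.4, 9.9, 10.2]))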

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kinds of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as “bullshit” or “mansplaining as a service”.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
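Here’s a toy sketch of both ideas, assuming nothing about how any real chatbot is built. Temperature divides the model’s scores before they’re turned into probabilities, and the probability of the sampled word could drive exactly that kind of content-design hedge:

    import numpy as np

    def sample_with_temperature(logits, temperature, rng):
        # Higher temperature flattens the distribution, so the model
        # strays further from its safest prediction.
        scaled = np.asarray(logits) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        choice = rng.choice(len(probs), p=probs)
        return choice, probs[choice]

    # A toy vocabulary and scores, invented for illustration.
    vocab = ["Paris", "Lyon", "Marseille", "Narnia"]
    logits = [4.0, 1.5, 1.0, 0.1]

    rng = np.random.default_rng()
    idx, confidence = sample_with_temperature(logits, temperature=1.5, rng=rng)

    # Low-probability guesses get marked as such.
    hedge = "" if confidence > 0.5 else "Perhaps… "
    print(hedge + vocab[idx])

Crank the temperature up and “Perhaps… Narnia” becomes more likely; turn it down and the safest answer dominates.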

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Monday, March 6th, 2023

Like

We use metaphors all the time. To quote George Lakoff, we live by them.

We use analogies some of the time. They’re particularly useful when we’re wrapping our heads around something new. By comparing something novel to something familiar, we can make a shortcut to comprehension, or at least, categorisation.

But we need a certain amount of vigilance when it comes to analogies. Just because something is like something else doesn’t mean it’s the same.

With that in mind, here are some ways that people are describing generative machine learning tools. Large language models are like…

Thursday, February 16th, 2023

Montaigne

This is an interesting little blogging tool: it turns a folder of notes on your Mac into a website.

  1. Create a dedicated folder in Apple Notes.
  2. Connect it to Montaigne.
  3. Add notes with your content.
  4. Everything will be published to the web automatically.

Monday, November 21st, 2022

COLOR anything | colouring pages of absolutely anything for kids or grown ups

This is a genuinely lovely use of machine learning models: provide a prompt for an illustration to print out and colour in.

Mike explains his motivation for building this:

My son’s super into colouring at the moment and I’ve been struggling to find new stuff for him.

Thursday, September 8th, 2022

Modern alternatives to BEM - daverupert.com

Dave rounds up some of the acronymtastic ways of scoping your CSS now that we’ve got a whole new toolkit at our disposal.

If your goal is to reduce specificity, new native CSS tools make reducing specificity a lot easier. You can author your CSS with near-zero specificity and even control the order in which your rules cascade.

Thursday, August 18th, 2022

system.css | A design system for building retro Apple-inspired interfaces

A stylesheet for when you’re nostalgic for the old Mac OS.

Wednesday, July 13th, 2022

How normal am I?

A fascinating interactive journey through biometrics using your face.

Sunday, March 27th, 2022

Artifice and Intelligence

Whatever the merit of the scientific aspirations originally encompassed by the term “artificial intelligence,” it’s a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize.

Do “cloud” next!

Thursday, February 10th, 2022

Why Safari does not need any protection from Chromium – Niels Leenheer

Safari is very opinionated about which features they will support and which they won’t. And that is fine for their browser. But I don’t want the Safari team to choose for all browsers on the iOS platform.

A terrific piece from Niels pushing back on the ridiculous assertion that Apple’s ban on rival rendering engines in iOS is somehow a noble battle against a monopoly (rather than the abuse of monopoly power it actually is). If there were any truth to the idea that Apple’s browser ban is the only thing stopping everyone from switching to Chrome, then nobody would be using Safari on macOS, where users are free to choose whichever rendering engine they want.

The Safari team is capable enough not to let their browser become irrelevant. And Apple has enough money to support the Safari team to take on other browsers. It does not need some artificial App Store rule to protect it from the competition.

WebKit-only proponents are worried about losing control and Google becoming too powerful. And they feel preventing Google from controlling the web is more important than giving more power to users. They believe they are protecting users against themselves. But that is misguided.

Users need to be in control because if you take power away from users, you are creating the future you want to prevent, where one company sets the rules for everybody else. It is just somebody else who is pulling the strings.

Tuesday, December 7th, 2021

morals in the machine | The Roof is on Phire

We are so excited by the idea of machines that can write, and create art, and compose music, with seemingly little regard for how many wells of creativity sit untapped because many of us spend the best hours of our days toiling away, and even more can barely fulfill basic needs for food, shelter, and water. I can’t help but wonder how rich our lives could be if we focused a little more on creating conditions that enable all humans to exercise their creativity as much as we would like robots to be able to.