Journal

Wednesday, March 22nd, 2023

Disclosure

You know how, when you’re on hold to any customer service line, you hear a message thanking you for calling and claiming that your call is important to them? The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is the result of a large language model, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Wednesday, March 15th, 2023

Another three speakers for UX London 2023

I know I’m being a tease, doling out these UX London speaker announcements in batches rather than one big reveal. Indulge me in my suspense-ratcheting behaviour.

Today I’d like to unveil three speakers whose surnames start with the letter H…

  • Stephen Hay, Creative Director at Rabobank,
  • Asia Hoe, Senior Product Designer, and
  • Amy Hupe, Design Systems consultant at Frankly Design.

Just look at how that line-up is coming together! There’ll be just one more announcement and then the roster will be complete.

But don’t wait for that. Grab your ticket now and I’ll see you in London on June 22nd and 23rd!

Tuesday, March 14th, 2023

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our head.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
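
Here’s a rough sketch of what that content-design approach might look like. It assumes a hypothetical chatbot response broken into segments that each carry some kind of confidence score; none of these names come from a real API, it’s purely illustrative:

```javascript
// Purely hypothetical: each segment of a chatbot response carries a
// confidence score between 0 and 1, perhaps derived from token probabilities.
const hedges = [
  { threshold: 0.9, prefix: '' },
  { threshold: 0.6, prefix: 'Perhaps ' },
  { threshold: 0.3, prefix: 'Maybe ' },
  { threshold: 0,   prefix: 'It is possible but unlikely that ' }
];

// Prefix each segment with a hedging phrase that matches its confidence.
function hedge(segment) {
  const { prefix } = hedges.find(h => segment.confidence >= h.threshold);
  return prefix + segment.text;
}

const response = [
  { text: 'UX London takes place on June 22nd and 23rd.', confidence: 0.95 },
  { text: 'it will be held on the moon.', confidence: 0.1 }
];

console.log(response.map(hedge).join(' '));
// "UX London takes place on June 22nd and 23rd. It is possible but unlikely that it will be held on the moon."
```

The same mapping could just as easily drive typographic markers instead of phrases.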

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Monday, March 6th, 2023

The past is a foreign country

I tried watching a classic Western this weekend, How The West Was Won. I did not make it far. Let’s just say that in the first few minutes, the Spencer Tracy voiceover that accompanies the sweeping vistas sets out an attitude toward the indigenous population that would not fly today.

It’s one thing to be repulsed by a film from another era, but it’s even more uncomfortable to revisit the films from your own teenage years.

Tim Carmody has written about the real hero of Top Gun:

Iceman’s concern for Maverick and the safety of his fighter unit is totally understandable. He tries, however awkwardly, to discuss Goose’s death with Maverick. There’s no discussion of blame. And when they’re assigned to fly into combat together, Iceman briefly and discreetly raises the issue of Maverick’s fitness to fly with his superior officer and withdraws his concern once a decision is made.

I know someone who didn’t watch Ferris Bueller’s Day Off until they were well into adulthood. Their sympathies lay squarely with Dean Rooney.

And I think we can all agree in hindsight that Walter Peck was completely correct in his assessment of the dangers in Ghostbusters.

Oh, and The Karate Kid was the real bully.

This week, George wrote I’ve fallen out of love with Indiana Jones. Indy’s attitude of “it belongs in a museum” is the same worldview that got the Parthenon Marbles into the British Museum (instead of, y’know, the Parthenon where they belong).

Adrian Hon invites us to imagine what it would be like if the tables were turned. He wrote a short piece of speculative fiction called The Taking of Stonehenge:

We selected these archaeological sites based on their importance to our collective understanding of human and galactic history, and their immediate risk of irreparable harm from pollution, climate change, neglect, and looting. We are sympathetic to claims that preserving these sites in their “original” context is important, but our duty of care outweighs such emotional considerations.

Like

We use metaphors all the time. To quote George Lakoff, we live by them.

We use analogies some of the time. They’re particularly useful when we’re wrapping our heads around something new. By comparing something novel to something familiar, we can make a shortcut to comprehension, or at least, categorisation.

But we need a certain amount of vigilance when it comes to analogies. Just because something is like something else doesn’t mean it’s the same.

With that in mind, here are some ways that people are describing generative machine learning tools. Large language models are like…

Tuesday, February 28th, 2023

The next four speakers for UX London 2023

I am positively giddy with excitement to tell you about some more speakers you can look forward to at UX London 2023:

Portraits of the four speakers.

I have more confirmed speakers but I’m going to be a tease and save them for a separate announcement soon. You can expect more of the same: smart, fabulous people with all kinds of design experience that they’re going to share with you at UX London.

But why wait for another speaker announcement? Get your ticket to UX London 2023 now!

Wednesday, February 22nd, 2023

Web Audio API update on iOS

I documented a weird bug with web audio on iOS a while back:

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

I found a workaround but it was really hacky. By playing a one-second long silent mp3 file using an audio element, you could “kick” the sound into behaving. Then you could use the Web Audio API and it would play consistently.
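
For what it’s worth, the hack looked something like this. It’s a sketch rather than the actual code on The Session; the file path, the button selector, and the oscillator stand-in are all placeholders:

```javascript
// Hacky workaround: play a short, silent mp3 through an audio element first.
// That seems to "kick" iOS into letting the Web Audio API produce sound.
// The file path and selector are made up for this sketch; any one-second
// silent mp3 will do.
const playButton = document.querySelector('.play-audio');
const silence = new Audio('/sounds/one-second-of-silence.mp3');

playButton.addEventListener('click', async () => {
  await silence.play();
  // Now the Web Audio API should behave itself. A simple one-second tone
  // stands in here for whatever you actually want to play.
  const context = new (window.AudioContext || window.webkitAudioContext)();
  const oscillator = context.createOscillator();
  oscillator.connect(context.destination);
  oscillator.start();
  oscillator.stop(context.currentTime + 1);
});
```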

Well, that’s all changed with the latest release of Mobile Safari. Now what happens is that the Web Audio stuff plays …for one second. And then stops.

I removed the hacky workaround and the Web Audio API started behaving itself again …but your device can’t be set to silent.

The good news is that the Web Audio behaviour seems to be consistent now. It only plays if the device isn’t muted. This restriction doesn’t apply to video and audio elements; they will still play even if your device is set to silent.

This discrepancy between the two different ways of playing audio is kind of odd, but at least now the Web Audio behaviour is predictable.

You can hear the Web Audio API in action by going to any tune on The Session and pressing the “play audio” button.

Tuesday, February 21st, 2023

UX London 2023 scholarship programme

If you’re a western white guy like me, you’re playing life on its easiest setting. If you’re also a designer, then you should get a ticket to UX London. You can probably get work to pay for it. Share this list of reasons to attend with your boss if you have to.

If, on the other hand, you don’t benefit from the same level of privilege as me, you might still be able to attend UX London 2023. We’re running a scholarship programme.

“We” in this case is Clearleft. But as we also need to at least break even on this event, there are only a limited number of scholarship spots available.

Now, if your company were in a position to pony up some moolah to sponsor more diversity scholarship places, we would dearly love to hear from you—get in touch!

If you think you might qualify for a diversity scholarship, fill in this form before May 19th. We’ll then notify you by May 26th whether your application is successful or not. And if you’re worried about the additional costs of travel and accommodation, I’m sure we can figure something out.

Wondering if you should apply? It’s hard to define exactly who qualifies for a diversity scholarship, but basically, the more your life experience matches mine, the less qualified you are. If you are a fellow able-bodied middle-aged heterosexual white dude with a comfortable income, do me a favour and don’t apply. Everyone else, go for it.

Monday, February 20th, 2023

Redesigning UX London

I’ve been redesigning UX London. I don’t mean the website. I mean the event itself.

Don’t worry, it’s nothing too radical. It’s not like we’re changing the focus of the event, which remains a nerdfest for all things design-related.

But there are plenty of other opportunities for tweaking a conference like this: the format, the timings, the location.

For 2023 we’re not changing the location. Tobacco Dock worked out well for last year’s event, although it is very expensive (then again, so is anywhere decent in London). Last year there were a lot of unknowns in play because it was our first time using the venue. It feels good that this year we don’t have to go through quite as much uncertainty.

The most obvious change to UX London this year is the length. The event will last for two days instead of three.

Running a three-day event was a lot of work, so this helps relieve the pressure. It was also asking a lot of attendees. That’s why we also offered one-day tickets. For the people who couldn’t commit to three days at a conference, there was the option to pick and choose.

But that brought its own issues. Instead of everyone having the same shared experience, the audience was a bit fractured.

Now that we’ve slimmed it down to two days, we’re selling the same two-day tickets for everyone. No more single-day tickets; no more partial attendance. Judging by the way ticket sales have been going, this is a very welcome move.

(Even before announcing any speakers, we had already sold a healthy amount of tickets. That’s probably testament to the great reputation that UX London has built up over the years. I need to make sure I don’t squander that good will. No pressure.)

On the subject of everyone having a shared experience, there’s something about the format of UX London that’s bothered me for a while…

Each day is split into two halves. In the morning, you’ve got inspirational talks. That’s one single track. Then in the afternoon, you’ve got hands-on practical workshops. They happen in parallel.

That makes for a great mix, but the one downside is that the day ends with the audience split across the different workshops.

This year I’m tweaking the format slightly. We’ll still have a single track of talks in the morning followed by multiple workshops in the afternoon, but I’m shortening the workshop length slightly to fit in one last talk at the end of the day. That way, everybody will come back together again after their workshops to participate in a shared experience.

The audience will converge at the beginning of the day, diverge in the afternoon, and this time we’ll converge again at day’s end.

The workshops are a big part of what makes UX London stand out. But they also pose a big design challenge. How do you ensure that everyone gets to attend the workshops they want?

We could make people pick their workshops in advance. But then you end up with the office Christmas dinner party problem—you know the one; everyone has to choose their meal way in advance, and then on the day, no one remembers what they ordered.

Besides, if we make people choose in advance, it’s not fair on people who buy their ticket close to the event.

In the end, using a first-come, first-served strategy on the day has worked out best. But it’s not ideal. You could miss out on attending your first choice of workshop if you’re not fast enough.

This year we’re trying something new. Each afternoon there’ll be a choice of workshops, as always. But this time, it’ll be the same workshops on both days. That way, every attendee gets a second chance to get to the workshops they want. And it’ll help reduce the FOMO—Fear Of Missing Out. It still won’t be possible to attend all the workshops without cloning yourself, but this way, you get to attend half of them.

To recap, here’s the redesigned format for UX London 2023:

  • It’s a two-day event on June 22nd and 23rd—there are no individual day tickets.
  • There are talks in the morning, workshops in the afternoon, and one final talk at the end of the day.
  • The workshops will be repeated each day so nobody misses out on the workshop they want.

The line-up is coming together nicely. I’ve got more confirmed speakers, who I don’t want to reveal just yet. But trust me, you won’t want to miss this!

Oh, and you should probably grab your ticket this week if you haven’t already: early-bird pricing ends at midnight on Friday, February 24th.

Sunday, February 19th, 2023

These were my jams

This Is My Jam was a lovely website. Created by Hannah and Matt in 2011, it ran until 2015, at which point they had to shut it down. But they made sure to shut it down with care and consideration.

In many ways, This Is My Jam was the antithesis of the prevailing Silicon Valley mindset. Instead of valuing growth and scale above all else, it was deliberately thoughtful. Rather than “maximising engagement”, it asked you to slow down and just share one thing: what piece of music are you really into right now? It was up to you to decide whether “right now” meant this year, this month, this week, or this day.

I used to post songs there sporadically. Here’s a round-up of the twelve songs I posted in 2013. There was always some reason for posting a particular piece of music.

I was reminded of This Is My Jam recently when I logged into Spotify (not something I do that often). As part of the site’s shutdown, you could export all your jams into a Spotify playlist. Here’s mine.

Listening back to these 50 songs all these years later gave me the warm fuzzies.