Smoke screen | A Working Library
The story that “artificial intelligence” tells is a smoke screen. But smoke offers only temporary cover. It fades if it isn’t replenished.
How do we write, design, and code a link that works for everyone on every device? Let’s dive into the world of creating the perfect link, without making a pig’s breakfast of it.
Check out the demo that Rich has put together to go with Amelia’s proposed syntax.
Ethan highlights a classic case of the McNamara Fallacy—measuring adoption of design system components.
This is handy—a collection of font stacks using system fonts. You can see which ones are currently installed on your machine too.
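To give a flavour of the kind of thing the collection offers (this particular stack is a common example, not necessarily one of theirs):

```css
/* A typical system font stack: each platform picks up its
   native UI font, with generic sans-serif as the fallback. */
body {
  font-family: system-ui, -apple-system, "Segoe UI", Roboto,
    "Helvetica Neue", Arial, sans-serif;
}
```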
The most performant web font is no web font.
This free event is running online from 3pm to 7pm UK time this Friday. The line-up features Emily Bender, Safiya Noble, Timnit Gebru and more.
Since the publication of On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 two years ago, many of the harms the paper warned about, and more, have unfortunately occurred. From exploited workers filtering hateful content to an engineer claiming that chatbots are sentient, the harms are only accelerating.
Join the co-authors of the paper and various guests to reflect on what has happened in the last two years, what the large language model landscape currently looks like, and where we are headed versus where we should be headed.
I love print stylesheets but I was today years old when I found out that print-color-adjust exists.
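In case it’s new to you too, here’s a minimal sketch (the selector is invented for illustration):

```css
/* Keep background colours and shadows when printing, rather
   than letting the browser strip them to save ink. */
@media print {
  .callout {
    -webkit-print-color-adjust: exact; /* Chrome and Safari */
    print-color-adjust: exact;
  }
}
```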
Good question.
I think it’s mostly inertia.
The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.
Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.
We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.
As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our head.
But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.
The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.
There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.
Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.
Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.
The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!
And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.
Using AI, we will guess who should get a mortgage.
Using AI, we will guess who should get hired.
Using AI, we will guess who should get a strict prison sentence.
Reframed like that, it’s easy to see why technologists want to bury the lede.
Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.
(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)
Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.
But what if this is an interface problem?
Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.
What if the more fanciful guesses were marked as such?
As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?
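For the curious, the dial typically works by rescaling the model’s raw token scores before sampling. The standard formulation (not specific to any one product) is:

$$p_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)}$$

where $z_i$ is the raw score for each candidate word and $T$ is the temperature: as $T$ increases, the probabilities flatten out, so less likely words get picked more often.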
I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
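Purely as a thought experiment, and with the class name and content invented for the purpose, the typographic version might be marked up along these lines:

```html
<!-- Hypothetical markup: an inline element flags the
     low-confidence span so CSS can style it differently. -->
<p>
  The museum opened in 1857.
  <span class="guess">Perhaps it was designed by a student
  of a more famous architect.</span>
</p>
```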
I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.
A little bit of programmed humility could go a long way.
Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.
Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.
I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.
I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.
After all, to guess is human.
This is a terrific walkthrough from Andy showing how smart fundamentals in your CSS can give you a beautiful readable document without much work.
Rich explains what text-wrap: balance does …and what it doesn’t.
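The gist, as a bare-bones example:

```css
/* Even out the line lengths of short runs of text such as
   headings, instead of leaving a short last line dangling. */
h1,
h2 {
  text-wrap: balance;
}
```

Do read Rich’s piece for the caveats (browsers only attempt balancing on short blocks of a few lines, for instance).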
We use metaphors all the time. To quote George Lakoff, we live by them.
We use analogies some of the time. They’re particularly useful when we’re wrapping our heads around something new. By comparing something novel to something familiar, we can make a shortcut to comprehension, or at least, categorisation.
But we need a certain amount of vigilance when it comes to analogies. Just because something is like something else doesn’t mean it’s the same.
With that in mind, here are some ways that people are describing generative machine learning tools. Large language models are like…
I agree with the reasoning here—a new display value would be ideal.
Here’s the video of the talk I gave at Monday’s meet-up here in Brighton—it’s a very condensed version of my longer conference talk on declarative design.
Container queries can’t be used in the sizes attribute for responsive images. Here, Jason breaks down why that is (spoiler: it’s the lookahead pre-parser) and segues into a truly long-term solution: a “magical” image format.
If you’ve ever thought it felt weird to put media conditions inside the HTML for responsive images, this will resonate.
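For reference, this is the kind of markup in question, with placeholder filenames. The sizes attribute duplicates layout information in the HTML because the pre-parser starts fetching images before any CSS has been read:

```html
<!-- The browser picks a file based on sizes and srcset,
     long before it knows anything about the layout. -->
<img
  src="photo-small.jpg"
  srcset="photo-small.jpg 400w,
          photo-large.jpg 1200w"
  sizes="(min-width: 40em) 50vw, 100vw"
  alt="A photo">
```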
Instead of thinking about responsive design in terms of media queries, I like to think of it in these categories (there’s a CSS sketch of each after the list).
- Responsive to the content
- Responsive to the viewport
- Responsive to the container
- Responsive to the user preferences
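And here’s that sketch, one example per category (all selectors invented for illustration):

```css
/* Responsive to the content: the element sizes to fit. */
figcaption {
  inline-size: fit-content;
}

/* Responsive to the viewport: a classic media query. */
@media (min-width: 60em) {
  main {
    column-count: 2;
  }
}

/* Responsive to the container: a container query. */
.sidebar {
  container-type: inline-size;
}
@container (min-width: 30em) {
  .widget {
    display: flex;
  }
}

/* Responsive to the user’s preferences. */
@media (prefers-reduced-motion: reduce) {
  .widget {
    animation: none;
  }
}
```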
A person seeking help in a time of crisis does not care about TypeScript, tree shaking, hot module replacement, A/B tests, burndown charts, NPS, OKRs, KPIs, or other startup jargon. Developer experience does not count for shit if the person using the thing they built can’t actually get what they need.
I feel like we need a name for this era, when CSS started getting real good.
I think this is what I’ve been calling declarative design.
Like a little mini Utopia: a handy little tool for calculating viewport-based clamped values.
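The kind of declaration it spits out looks something like this (values made up):

```css
/* Fluid type: 1rem on narrow viewports, growing with the
   viewport width, capped at 1.5rem on wide ones. */
h2 {
  font-size: clamp(1rem, 0.5rem + 1.6vw, 1.5rem);
}
```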