Thursday, June 1st, 2023

Automate the CEOs - by Hamilton Nolan - How Things Work

Let’s be rational here. If I were to imagine a job that was a perfect candidate for replacement by AI, it would be one that consists of measurable tasks that can be learned—allocation of capital, creation and execution of market strategy, selection of candidates for top roles—and one that costs the company a shitload of money. In other words: executives.

The logic is sound. However…

The CEOs will be spared from automation not because they should be, but because they are making the decisions about who is spared from automation.

Tuesday, May 30th, 2023

Five questions

In just a couple of weeks, I’ll be heading to Bristol for Pixel Pioneers. The line-up looks really, really good …with the glaring exception of the opening talk, which I’ll be delivering. But once that’s done, I’m very much looking forward to enjoying the rest of the day’s talks.

There are still tickets available if you fancy joining me.

This will be my second time speaking at this conference. I spoke at the inaugural conference back in 2017 when I gave a talk called Evaluating Technology. This time my talk is called Declarative Design.

A few weeks back, Oliver asked me some questions about my upcoming talk. I figured I’d post my answers here…

Welcome back to Pixel Pioneers! You return with another keynote - how do you manage to stay so ever-enthusiastic about designing for the web?

Well, I’d say my enthusiasm is mixed with frustration. And that’s always been the case. Just as I’ve always found new things that excite me about the World Wide Web, there are just as many things that upset me.

But that’s okay. Both forces can be motivating. When I find myself writing a blog post or preparing a talk, the impetus might be “This is so cool! Check this out!” or it might be “This is so maddening! What’s happening!?” …or perhaps a mix of both.

But to answer your question, the World Wide Web never stays still so there’s always something to get excited about. Equally, the longer the web exists, the more sense it makes to examine the fundamental bedrock—HTML, accessibility, progressive enhancement—and see how they’re just as important as ever. And that’s also something to get excited about!

Without too many spoilers, what can we expect to take away from your talk?

I’m hoping to provide people with a lens that they can use to examine their tools, processes, and approaches to designing for the web. It’s a fairly crude lens—it divides the world into a binary split that I’ve borrowed from the world of programming; imperative and declarative languages. But it’s a surprisingly thought-provoking angle.

Along the way I’ll also be pointing out some of the incredible things that we can do with CSS now. In the past few years there’s been an explosion in capabilities.

But this won’t be a code-heavy presentation. It’s mostly about the ideas. I’ll be referencing some projects by other people that I’m very excited by.

What other web design and development tools, techniques and technologies are you currently most excited about?

Outside of the world of CSS—which is definitely where a lot of the most exciting developments are happening—I’m really interested in the View Transitions API. If it delivers on its promise, it could be a very useful nail in the coffin of unnecessary single page apps. But I’m a little nervous. Right now the implementation only works for single page apps, which makes it an incentive to use that model. I really, really hope that the multipage version ships soon.
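To make the distinction concrete, here’s a minimal sketch of the same-document flavour that browsers shipped first—the one that only benefits single page apps. The updateTheDOM() function is a hypothetical stand-in for whatever code swaps in the new content; the cross-document version for ordinary multi-page navigations is the part that hadn’t shipped at the time of writing.

    // Same-document view transition: the single-page-app flavour.
    // updateTheDOM() is a hypothetical function that swaps in the
    // new content.
    function navigateWithTransition(updateTheDOM: () => void) {
      // Progressive enhancement: fall back to an instant update in
      // browsers without the View Transitions API.
      if (!document.startViewTransition) {
        updateTheDOM();
        return;
      }
      // The browser snapshots the old state, runs the callback to
      // update the DOM, then animates between the two snapshots.
      document.startViewTransition(() => updateTheDOM());
    }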

But honestly, I probably get most excited about discovering some aspect of HTML that I wasn’t aware of. Even after all these years the language can still surprise me.

And on the flipside, what bugs you most about the web at the moment?

How much time have you got?

Seriously though, the thing that’s really bugged me for the past decade is the increasing complexity of “modern” frontend development when it isn’t driven by user needs. Yes, I’m talking about JavaScript frameworks like React and the assumption that everything should be a single page app.

Honestly, the mindset became so ubiquitous that I felt like I must be missing something. But no, the situation really has spiralled out of control, much to the detriment of end users.

Luckily we’re starting to see the pendulum swing back. The proponents of trickle-down developer convenience are having to finally admit that it’s bollocks.

I don’t care if the move back to making websites is re-labelled as “isomorphic server-rendered multi-page apps.” As long as we make sensible architectural decisions, that’s all that matters.

What’s next, Jeremy?

Right now I’m curating the line-up for this year’s UX London conference which is the week after Pixel Pioneers. As you know, conference curation is a lot of work, but it’s also very rewarding. I’m really proud of the line-up.

It’s been a while since the last season of the Clearleft podcast. I hope to remedy that soon. It takes a lot of effort to make even one episode, but again, it’s very rewarding.

“Artificial Intelligence & Humanity,” an article by Dan Mall

AI is great at anything quantity-related and bad at anything quality-related.

Sensible thinking from Dan here, which mirrors what we’re thinking at Clearleft.

In other words, it leans heavily on averages; the closer the training data matches an average, the higher degree of confidence that the result is more “correct,” or at least desirable.

The problem is that this is the polar opposite of what we consider creativity to be. Creativity isn’t about averages. It’s about the outliers, sometimes the one thing that’s different than all the rest.

Wednesday, May 24th, 2023

Generative AI: What You Need To Know

Generative AI: What You Need To Know is a free resource that will help you develop an AI-bullshit detector.

You can read all the cards on one page, print them out, or print to PDF.

Friday, May 19th, 2023

ChatGPT is not ‘artificial intelligence.’ It’s theft. | America Magazine

But in calling these programs “artificial intelligence” we grant them a claim to authorship that is simply untrue. Each of those tokens used by programs like ChatGPT—the “language” in their “large language model”—represents a tiny, tiny piece of material that someone else created. And those authors are not credited for it, paid for it or asked permission for its use. In a sense, these machine-learning bots are actually the most advanced form of a chop shop: They steal material from creators (that is, they use it without permission), cut that material into parts so small that no one can trace them and then repurpose them to form new products.

Thursday, May 18th, 2023

How you want me to cover artificial intelligence

Seven principles for journalism in the age of AI

  1. Be rigorous with your definitions.
  2. Predict less, explain more.
  3. Don’t hype things up.
  4. Focus on the people building AI systems — and the people affected by its release.
  5. Offer strategic takes on products.
  6. Emphasize the tradeoffs involved.
  7. Remember that nothing is inevitable.

Wednesday, May 17th, 2023

To have “true AI,” we need much more than ChatGPT - Big Think

LLMs have never experienced anything. They are just programs that have ingested unimaginable amounts of text. LLMs might do a great job at describing the sensation of being drunk, but this is only because they have read a lot of descriptions of being drunk. They have not, and cannot, experience it themselves. They have no purpose other than to produce the best response to the prompt you give them.

This doesn’t mean they aren’t impressive (they are) or that they can’t be useful (they are). And I truly believe we are at a watershed moment in technology. But let’s not confuse these genuine achievements with “true AI.”

Tuesday, May 16th, 2023

Nailspotting

I’m sure you’ve heard the law of the instrument: when all you have is a hammer, everything looks like a nail.

There’s another side to it. If you’re selling hammers, you’ll depict a world full of nails.

Recent hammers include cryptobollocks and virtual reality. It wasn’t enough for blockchains and the metaverse to be potentially useful for some situations; they staked their reputations on being utterly transformative, disrupting absolutely every facet of life.

This kind of hype is a terrible strategy in the long term. But if you can convince enough people in the short term, you can make a killing on the stock market. In truth, the technology itself is superfluous. It’s the hype that matters. And if the hype is over-inflated enough, you can even get your critics to do your work for you, broadcasting their fears about these supposedly world-changing technologies.

You’d think we’d learn. If an industry cries wolf enough times, surely we’d become less trusting of extraordinary claims. But the tech industry continues to cry wolf—or rather, “hammer!”—at regular intervals.

The latest hammer is machine learning, usually—incorrectly—referred to as Artificial Intelligence. What makes this hype cycle particularly infuriating is that there are genuine use cases. There are some nails for this hammer. They’re just not as plentiful as the breathless hype—both positive and negative—would have you believe.

When I was hosting the DiBi conference last week, there was a little section on generative “AI” tools. Matt Garbutt covered the visual side, demoing tools like Midjourney. Scott Salisbury covered the text side, showing how you can generate code. Afterwards we had a panel discussion.

During the panel I asked some fairly straightforward questions that nobody could answer. Who owns the input (the data used by these generative tools)? Who owns the output?

On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.

Scott explicitly warned people off using generative tools for production code. His advice was to stick to side projects for now.

Matt took a closer look at where these tools could fit into your day-to-day design work. Mostly it was pretty sensible, except when he suggested that there could be any merit to using these tools as a replacement for user testing. That’s a terrible idea. A classic hammer/nail mismatch.

I think I moderated the panel reasonably well, but I have one regret. I wish I had first read Baldur Bjarnason’s new book, The Intelligence Illusion. I started reading it on the train journey back from Edinburgh but it would have been perfect for the panel.

The Intelligence Illusion is very level-headed. It is neither pro- nor anti-AI. Instead it takes a pragmatic look at both the benefits and the risks of using these tools in your business.

It has excellent advice for spotting genuine nails. For example:

Generative AI has impressive capabilities for converting and modifying seemingly unstructured data, such as prose, images, and audio. Using these tools for this purpose has less copyright risk, fewer legal risks, and is less error prone than using it to generate original output.

Think about transcripts of videos or podcasts—an excellent use of this technology. As Baldur puts it:

The safest and, probably, the most productive way to use generative AI is to not use it as generative AI. Instead, use it to explain, convert, or modify.

He also says:

Prefer internal tools over externally-facing chatbots.

That chimes with what I’ve been seeing. The most interesting uses of this technology that I’ve seen involve a constrained dataset. Like the way Luke trained a language model on his own content to create a useful chat interface.

Anyway, The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.

Monday, May 15th, 2023

Google’s AI Hype Circle

Google has a serious AI problem. That problem isn’t “how to integrate AI into Google products?” That problem is “how to exclude AI-generated nonsense from Google products?”

AI isn’t the app, it’s the UI - Stack Overflow Blog

In some ways, the fervor around AI is reminiscent of blockchain hype, which has steadily cooled since its 2021 peak. In almost all cases, blockchain technology serves no purpose but to make software slower, more difficult to fix, and a bigger target for scammers. AI isn’t nearly as frivolous—it has several novel use cases—but many are rightly wary of the resemblance. And there are concerns to be had; AI bears the deceptive appearance of a free lunch and, predictably, has non-obvious downsides that some founders and VCs will insist on learning the hard way.

This is a good level-headed overview of how generative language model tools work.

If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. That’s what AI does. That’s the whole story.

There’s very practical advice on deciding where and when these tools make sense:

The sweet spot for AI is a context where its choices are limited, transparent, and safe. We should be giving it an API, not an output box.

Thursday, May 4th, 2023

Innovation

I did an episode of the Clearleft podcast on innovation a while back:

Everyone wants to be innovative …but no one wants to take risks.

The word innovation is often bandied about in an unquestioned positive way. But if we acknowledge that innovation is—by definition—risky, then the exhortations sound less positive.

“We provide innovative solutions for businesses!” becomes “We provide risky solutions for businesses!”

I was reminded of this when I saw the website for the Podcast Standards Project. The original text on the website described the project as:

…a grassroots coalition working to establish modern, open standards, to enable innovation in the podcast industry.

I pushed back on that wording (partly because I’ve seen the word “innovation” used as a smoke screen for user-hostile practices like tracking and surveillance). The wording has since changed to:

…a grassroots coalition dedicated to creating standards and practices that improve the open podcasting ecosystem for both listeners and creators.

That’s better. It’s more precise.

Am I nitpicking? Only if you think that “innovation” and “improvement” are synonyms. I don’t think they are.

Innovation implies change. Improvement implies positive change.

Not all change is positive. Not all innovation is positive.

Innovation goes hand in hand with disruption. Again, disruption involves change. But not necessarily positive change.

Think about the antonyms of change and disruption: stasis and stability. Those words don’t sound very exciting, but in some arenas they’re exactly what you should be aiming for; arenas like infrastructure or standards.

Not to get all pace layers-y here, but it seems to me that every endeavour has a sweet spot for innovation. For some projects, too little innovation is bad. For others, too much innovation is worse.

The trick is knowing which kind of project you’re working on.

(As a side note, I think some people use the word innovation to describe the generative, divergent phase of a design project: “how might we come up with innovative new approaches?” But we already have a word to describe the practice of generating novel and interesting ideas. That word isn’t innovation. It’s creativity.)

Artificial intelligence: who owns the future? - ethical.net

Whether consciously or not, AI manufacturers have decided to prioritise plausibility over accuracy. It means AI systems are impressive, but in a world plagued by conspiracy and disinformation this decision only deepens the problem.

Just Simply | Stop saying how simple things are in our docs

If someone’s been driven to Google something you’ve written, they’re stuck. Being stuck is, to one degree or another, upsetting and annoying. So try not to make them feel worse by telling them how straightforward they should be finding it. It gets in the way of them learning what you want them to learn.

The other side of egoism | A Working Library

Mandy takes a deep dive into the treatment of altruism in Ursula Le Guin’s The Dispossessed.

Tuesday, May 2nd, 2023

Why Chatbots Are Not the Future by Amelia Wattenberger

Of course, users can learn over time what prompts work well and which don’t, but the burden to learn what works still lies with every single user. When it could instead be baked into the interface.

Friday, April 28th, 2023

Talk: The Expanding Dark Forest and Generative AI

Maggie Appleton:

An exploration of the problems and possible futures of flooding the web with generative AI content.

Wednesday, April 26th, 2023

“the secret list of websites” - Chris Coyier

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

I concur with Chris’s assessment:

I just think it’s fuckin’ rude.

Wednesday, April 19th, 2023

Grease

Grease is a website starter that makes building performant, accessible, aesthetic websites fast & frictionless.

Interestingly, this starter kit uses cascade layers for managing CSS.

Efficiency trades off against resiliency - Made of Bugs

Past some point, making a system more efficient will mean making it less resilient, and, conversely, building in robustness tends to make a system less efficient (at least in the short run).

This is true of software, networks, and organisations.

When we set metrics or goals for a system or a team or an organization that ask for efficiency, let us be aware that, absent countervailing pressures, we are probably also asking for the system to become more brittle and fragile, too.

Tuesday, April 18th, 2023

The one about AI - macwright.com

Writing, both code and prose, for me, is both an end product and an end in itself. I don’t want to automate away the things that give me joy.

And that is something that I’m more and more aware of as I get older – sources of joy. It’s good to diversify them, to keep track of them, because it’s way too easy to run out. Or to end up with just one, and then lose it.

The thing about luddites is that they make good punchlines, but they were all people.