Tags: ai

Thursday, September 21st, 2023

Wednesday, September 20th, 2023

Photos of Jessica around town: at the entrance of the restaurant Atrio, in the main square where letters spell out C A C E R E S, and in front of a beautiful old medieval tower.

Trabaja remoto

August was a month of travels. You can press play on that month’s map to follow the journey.

But check out the map for September too because the travels continue. This time my adventures are confined to Europe.

I’m in Spain. Jessica and I flew into Madrid on Saturday. The next day we took a train ride across the Extremaduran landscape to Cáceres, our home for the week.

This is like the sequel to our Sicilian trip. We’re both working remotely. We just happen to be doing it from a beautiful old town with amazing cuisine.

We’re in a nice apartment that—crucially—has good WiFi. It’s right on the main square, but it’s remarkably quiet.

There’s a time difference of one hour with Brighton. Fortunately everything in Spain happens at least an hour later than it does at home. Waking up. Lunch. Dinner. Everything is time-shifted so that I’m on the same schedule as my colleagues.

I swear I’m more productive working this way. Maybe it’s the knowledge that tapas of Iberian ham await me after work, but I’m getting a lot done this week.

And when the working week is done, the fun begins. Cáceres is hosting its annual Irish fleadh this weekend.

I’ve always wanted to go to it, but it’s quite a hassle to get here just for a weekend. Combining it with a week of remote work makes it more doable.

I’m already having a really nice time here and the tunes haven’t even started yet.

Tuesday, September 19th, 2023

Tailwind, and the death of web craftsmanship

CSS is better now. It’s not perfect, but it’s better than it’s ever been, and it’s better than Tailwind. Give it another try. Don’t reach for big globs of libraries to paper over the issues you think it has.

This is why it’s so important to re-evaluate technology decisions.

I’ve seen people, lead and principal engineers, who refuse to learn modern JS, insisting that since it was bad in 2006 it’s bad today. Worse still, some of these people have used their leadership positions to prevent the use of modern JS.
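
To make “CSS is better now” concrete, here’s a minimal sketch using a handful of modern features that need no preprocessor or framework — custom properties, native nesting, and :has() are my choice of examples, not a list from the linked post:

/* Modern CSS, straight in the browser — class names are hypothetical. */
:root {
  --accent: oklch(55% 0.15 250);
}

.card {
  border: 1px solid var(--accent);

  /* Native nesting: no build step required. */
  & h2 {
    color: var(--accent);
  }

  /* Style the card differently when it contains an image. */
  &:has(img) {
    padding: 0;
  }
}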

Simon’s rule

I got a nice email from someone regarding my recent posts about performance on The Session. They said:

I hope this message finds you well. First and foremost, I want to express how impressed I am with the overall performance of https://thesession.org/. It’s a fantastic resource for music enthusiasts like me.

How nice! I responded, thanking them for the kind words.

They sent a follow-up clarification:

Awesome, anyway there was an issue in my message.

The line ‘It’s a fantastic resource for music enthusiasts like me.’ added by chatGPT and I didn’t notice.

I imagine this is what it feels like when you’re on a phone call with someone and towards the end of the call you hear a distinct flushing sound.

I wrote back and told them about Simon’s rule:

I will not publish anything that takes someone else longer to read than it took me to write.

That just feels so rude!

I think that’s a good rule.

Friday, September 15th, 2023

We’re still not innovating with AI-generated UI.

Prototypes and production:

It looks like it will be a great tool for prototyping. A tool to help developers who don’t have experience with CSS and layout to have a starting point. As someone who spent some time building smoke-and-mirrors prototypes for UX research, I welcome tools like this.

What concerns me is the assertion that this is production-grade code when it simply is not.

Monday, September 11th, 2023

Performative performance

Web Summer Camp in Croatia finished with an interesting discussion. It was labelled a town-hall meeting, but it was more like an Oxford debating club.

Two speakers had two minutes each to speak for or against a particular statement. Their stances were assigned to them so they didn’t necessarily believe what they said.

One of the propositions was something like:

In the future, sustainable design will be as important as UX or performance.

That’s a tough one to argue against! But that’s what Sophia had to do.

She actually made a fairly compelling argument. She said that real impact isn’t going to come from individual websites changing their colour schemes. Real impact is going to come from making server farms run on renewable energy. She advocated for political action to change the system rather than having the responsibility heaped on the shoulders of the individuals making websites.

It’s a fair point. Much like the concept of a personal carbon footprint started life at BP to distract from corporate responsibility, perhaps we’re going to end up navel-gazing into our individual websites when we should be collectively lobbying for real change.

It’s akin to clicktivism—thinking you’re taking action by sharing something on social media, when real action requires hassling your political representative.

I’ve definitely seen some examples of performative sustainability on websites.

For example, at the start of this particular debate at Web Summer Camp we were shown a screenshot of a municipal website that has a toggle. The toggle supposedly enables a low-carbon mode. High resolution images are removed and for some reason the colour scheme goes grayscale. But even if those measures genuinely reduced energy consumption, it’s a bit late to enact them only after the toggle has been activated. Those hi-res images have already been downloaded by then.

Defaults matter. To be truly effective, the toggle needs to work the other way. Start in low-carbon mode, and only download the hi-res images when someone specifically requests them. (Hopefully browsers will implement prefers-reduced-data soon so that we can have our sustainable cake and eat it.)
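
As an illustration, here’s a minimal sketch of that default-first approach using the as-yet-unimplemented prefers-reduced-data media query — the selector and filenames are hypothetical:

/* Low-carbon by default: every visitor gets the small image. */
.hero {
  background-image: url("hero-small.jpg");
}

/* Only fetch the heavy image when the visitor
   hasn't asked for reduced data. */
@media (prefers-reduced-data: no-preference) {
  .hero {
    background-image: url("hero-large.jpg");
  }
}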

Likewise I’ve seen statistics bandied about on the energy savings that could be made if we used dark colour schemes. I’m sure the statistics are correct, but I’d like to see them presented side-by-side with, say, the energy impact of Google Tag Manager or React or any other wasteful dependencies that impact performance invisibly.

That’s the crux. Most of the important work around energy usage on websites is invisible. It’s the work done to not add more images, more JavaScript or more web fonts.

And it’s not just performance. I feel like the more important the work, the more likely it is to be invisible: privacy, security, accessibility …those matter enormously but you can’t see when a website is secure, or accessible, or not tracking you.

I suspect this is why those areas are all frustratingly under-resourced. Why pour time and effort into something you can’t point at?

Now that I think about it, this could explain the rise of web accessibility overlays. If you do the real work of actually making a website accessible, your work will be invisible. But if you slap an overlay on your website, it looks like you’re making a statement about how much you care about accessibility (even though the overlay is total shit and does more harm than good).

I suspect there might be a similar mindset at work when it comes to interface toggles for low-carbon mode. It might make you feel good. It might make you look good. But it’s a poor substitute for making your website carbon-neutral by default.

Sunday, September 10th, 2023

Squish Meets Structure: Designing with Language Models

The slides and transcript from a great talk by Maggie Appleton, including this perfect description of the vibes we get from large language models:

It feels like they’re either geniuses playing dumb or dumb machines playing genius, but we don’t know which.

Wednesday, September 6th, 2023

Making Large Language Models work for you

Another great talk from Simon that explains large language models in a hype-free way.

Travels

He drew a deep breath. ‘Well, I’m back,’ he said.

I know how you feel, Samwise Gamgee.

I have returned from my travels—a week aboard the Queen Mary 2 crossing the Atlantic, followed by a weekend in New York, finishing with a week in Saint Augustine, Florida.

The Atlantic crossing was just as much fun as last time. In fact it was better because this time Jessica and I got to share the experience with our dear friends Dan and Sue.

There was dressing up! There was precarious ballet! There were waves! There were even some dolphins!

The truth is that this kind of Atlantic crossing is a bit like cosplaying a former age of travel. You get out of it what you put into it. If you’re into LARPing as an Edwardian-era traveller, you’re going to have a good time.

We got very into it. Dressing up for dinner. Putting on a tux for the gala night. Donning masks for the masquerade evening.

Me and Jessica all dressed up wearing eye masks. Dan and Sue in wild outfits wearing eye masks.

It’s actually quite a practical way of travelling if you don’t mind being cut off from all digital communication for a week (this is a feature, not a bug). You adjust your clock by one hour most nights so that by the time you show up in New York, you’re in the right timezone with zero jetlag.

That was just as well because we had a packed weekend of activities in New York. By pure coincidence, two separate groups of friends were also in town from far away. We all met up and had a grand old time. Brunch in Tribeca; a John Cale concert in Prospect Park; the farmer’s market in Union Square; walking the High Line …good times with good friends.

A brunch table with me and eight friends all smiling.

New York was hot, but not as hot as what followed in Florida. A week lazing about on Saint Augustine beach. I ate shrimp every single day. I regret nothing.

A sandy beach with gentle waves crashing under a blue sky with wisps of cloud.

We timed our exit just right. We flew out of Florida before the tropical storm hit. Then we landed in Gatwick right before the air-traffic control chaos erupted.

I had one day of rest before going back to work.

Well, I say “work”, but the first item in my calendar was speaking at Web Summer Camp in Croatia. Back to the airport.

The talk went well, and I got to attend a performance workshop by Harry. But best of all was the location. Opatija is an idyllic paradise. Imagine crossing a web conference with White Lotus, but in a good way. It felt like a continuation of Florida, but with more placid clear waters.

A beautiful old town interspersed with lush greenery sweeps down to a tranquil bay with blue/green water.

But now I’m really back. And fortunately the English weather is playing along by being unseasonably warm. It’s as if the warm temperatures are following me around. I like it.

Wednesday, August 9th, 2023

Automation

I just described prototype code as code to be thrown away. On that topic…

I’ve been observing how people are programming with large language models and I’ve seen a few trends.

The first thing that just about everyone agrees on is that the code produced by a generative tool is not fit for public consumption. At least not straight away. It definitely needs to be checked and tested. If you enjoy debugging and doing code reviews, this might be right up your street.

The other option is to not use these tools for production code at all. Instead use them for throwaway code. That could be prototyping. But it could also be the code for those annoying admin tasks that you don’t do very often.

Take content migration. Say you need to grab a data dump, do some operations on the data to transform it in some way, and then pipe the results into a new content management system.

That’s almost certainly something you’d want to automate with bespoke code. Once the content migration is done, the code can be thrown away.
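
As a rough sketch of what that bespoke throwaway code might look like — every filename and field name here is made up for illustration:

# Throwaway migration script: data dump from the old system in,
# import file for the new system out. All names are hypothetical.
import csv
import json

# Grab the data dump from the old CMS.
with open("old-cms-dump.json") as f:
    entries = json.load(f)

# Transform each entry into the shape the new CMS expects.
rows = [
    {
        "title": entry["headline"].strip(),
        "slug": entry["headline"].strip().lower().replace(" ", "-"),
        "body": entry["content"],
    }
    for entry in entries
]

# Pipe the results into a format the new CMS can import.
with open("new-cms-import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "slug", "body"])
    writer.writeheader()
    writer.writerows(rows)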

Read Matt’s account of coding up his Braggoscope. The code needed to spider a thousand web pages, extract data from those pages, find similarities, and output the newly-structured data in a different format.

I’ve noticed that these are just the kind of tasks that large language models are pretty good at. In effect you’re training the tool on your own very specific data and getting it to do your drudge work for you.

To me, it feels right that the usefulness happens on your own machine. You don’t put the machine-generated code in front of other humans.

Monday, August 7th, 2023

Documentation for GPTBot - OpenAI API

Now that the horse has bolted—and ransacked the web—you can shut the barn door:

To disallow GPTBot to access your site you can add the GPTBot to your site’s robots.txt:

User-agent: GPTBot
Disallow: /

Saturday, August 5th, 2023

“If It Sounds Like Sci-Fi, It Probably Is”

Emily M. Bender:

I dislike the term because “artificial intelligence” suggests that there’s more going on than there is, that these things are autonomous thinking entities rather than tools and simply kinds of automation. If we focus on them as autonomous thinking entities or we spin out that fantasy, it is easier to lose track of the people in the picture, both the people who should be accountable for what the systems are doing and the people whose labor and data are being exploited to create them in the first place.

Alternative terms:

  • Stochastic parrots
  • Spicy autocomplete
  • Mad Libs
  • Magic Eight Ball

And this is worth shouting from the rooftops:

The threat is not the generative “AI” itself. It’s the way that management might choose to use it.

Catching up on the weird world of LLMs

This is a really clear, practical, level-headed explanatory talk from Simon. You can read the transcript or watch the video.

Wednesday, July 12th, 2023

Pulling my site from Google over AI training – Tracy Durnell

I’m not down with Google swallowing everything posted on the internet to train their generative AI models.

A principled approach to evolving choice and control for web content

This would mean a lot more if it happened before the wholesale harvesting of everyone’s work.

But I’m sure Google will put a mighty fine lock on that stable door that the horse bolted from.

Tuesday, July 11th, 2023

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.
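
Purely as a hypothetical illustration — none of these directives exist in the real robots.txt protocol — that kind of rights-granting syntax might look something like:

# Hypothetical syntax only: imagined directives, not part of any spec.
User-agent: *
Allow: /
# Grant indexing for search results, but forbid LLM training.
Usage: search-indexing
Disallow-usage: llm-training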

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Monday, July 10th, 2023

Fruit Of The Poisonous LLaMA? – Terence Eden’s Blog

I want to live in a future where Artificial Intelligences can relieve humans of the drudgery of labour. But I don’t want to live in a future which is built by ripping-off people against their will.

Saturday, July 8th, 2023

How to report better on artificial intelligence - Columbia Journalism Review

  • Be skeptical of PR hype
  • Question the training data
  • Evaluate the model
  • Consider downstream harms

Wednesday, July 5th, 2023

The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

Taken together, these flaws make LLMs look less like an information technology and more like a modern mechanisation of the psychic hotline.

Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent to phoning a psychic for advice.

Imagine Google or a major tech company trying to fix their search engine by adding a psychic hotline to their front page? That’s what they’re doing with Bard.