
Tuesday, August 29th, 2023

How Google made the world go viral - The Verge

On the sad state of Google search today:

How did a site that captured the imagination of the internet and fundamentally changed the way we communicate turn into a burned-out Walmart at the edge of town?

Thursday, August 10th, 2023

Undersea Cables by Rishi Sunak [PDF]

Years before becoming Prime Minister of the UK, Rishi Sunak wrote this report, Undersea Cables: Indispensable, insecure.

Wednesday, July 12th, 2023

Pulling my site from Google over AI training – Tracy Durnell

I’m not down with Google swallowing everything posted on the internet to train their generative AI models.

A principled approach to evolving choice and control for web content

This would mean a lot more if it happened before the wholesale harvesting of everyone’s work.

But I’m sure Google will put a mighty fine lock on that stable door that the horse bolted from.

Tuesday, July 11th, 2023

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.
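Something like this, perhaps (to be clear, the directives below are entirely hypothetical, made up here for illustration; no such syntax exists in robots.txt today):

User-agent: *
Allow: /
# Hypothetical rights directives (not real robots.txt syntax):
Usage: search-indexing
Disallow-usage: llm-training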

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Monday, May 15th, 2023

Google’s AI Hype Circle

Google has a serious AI problem. That problem isn’t “how to integrate AI into Google products?” That problem is “how to exclude AI-generated nonsense from Google products?”

Tuesday, May 9th, 2023

Google AMP: how Google tried to fix the web by taking it over - The Verge

AMP succeeded spectacularly. Then it failed. And to anyone looking for a reason not to trust the biggest company on the internet, AMP’s story contains all the evidence you’ll ever need.

This is a really good oral history of how AMP soured Google’s reputation.

Full disclosure: I’m briefly cited:

“When it suited them, it was open-source,” says Jeremy Keith, a web developer and a former member of AMP’s advisory council. “But whenever there were any questions about direction and control… it was Google’s.”

As an aside, this article contains a perfect description of the company cultures of Facebook, Apple, and Google:

“You meet with a Facebook person and you see in their eyes they’re psychotic,” says one media executive who’s dealt with all the major platforms. “The Apple person kind of listens but then does what it wants to do. The Google person honestly thinks what they’re doing is the best thing.”

Spot. On.

Tuesday, May 2nd, 2023

Dark Patterns: We Asked 500+ People How Brands Deceive Them

User research into deceptive design patterns:

Despite our technological advancement and understanding of human nature, it’s still common to see deception in digital products. To understand digital deception (and better define it), we surveyed 536 people to measure and discuss people’s understanding of this matter.

Wednesday, April 26th, 2023

“the secret list of websites” - Chris Coyier

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

I concur with Chris’s assessment:

I just think it’s fuckin’ rude.

Wednesday, April 19th, 2023

Site Search in Arc Browser — For Your Own Site - Jim Nielsen’s Blog

I can now have adactio.com on speed dial for search — a valid use case indeed.

😊

Funnily enough, I tend to use tags more often than my own site search. I type adactio.com/tags/… followed by whatever I think past me would’ve used. So my browser’s address bar is kind of a site-search interface already.

Wednesday, March 29th, 2023

The search element | scottohara.me

I’ve already added the search element to thesession.org, but while browser support is still rolling out, I’m being extra verbose:

<search role="search">
 ...
</search>

Brought to you by the department of redundancy department.

I’ll remove the ARIA role once browsers are all on board. As Scott says:

Please be aware that this element landing in the HTML spec today does not mean it is available in browsers today. Issues have been filed to implement the search element in the major browsers, including the necessary accessibility mappings. Keep this in mind before you get all super excited and willy nilly add this new element to your pages.

Wednesday, March 22nd, 2023

Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them. The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is generated by a large language model, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Sunday, March 19th, 2023

ongoing by Tim Bray · The LLM Problem

It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.

Yes! Like I said:

Expose the wires. Show the workings-out.

Tuesday, March 14th, 2023

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our heads.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?
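To make that concrete: temperature is simply a divisor applied to the model’s raw scores before they’re turned into probabilities. Here’s a minimal sketch in Python, with made-up scores for illustration:

import math, random

def sample_with_temperature(logits, temperature=1.0):
    # Scale the raw scores: a high temperature flattens the
    # distribution (wilder guesses), a low temperature sharpens
    # it towards the single safest prediction.
    scaled = [score / temperature for score in logits]
    # Softmax: turn the scaled scores into probabilities.
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=weights)[0]

# Made-up scores for three candidate words.
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.2))
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=2.0))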

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Friday, February 17th, 2023

Openly Licensed Images, Audio and More | Openverse

A search engine for images and audio that’s either under a Creative Commons license or is in the public domain.

Monday, October 31st, 2022

Do You Like Rock Music?

I spent Friday morning in band practice with Salter Cane. It was productive. We’ve got some new songs that are coming together nicely. We’re still short a drummer though, so if you know anyone in Brighton who might be interested, let me know.

As we were packing up, we could hear the band next door. They were really good. Just the kind of alt-country rock that would go nicely with Salter Cane.

On the way out, Jessica asked at the front desk who that band was. They’re called The Roebucks.

When I got home I Ducked, Ducked and Went to find out more information. There’s a Bandcamp page with one song. Good stuff. I also found their Facebook page. That’s where I saw this little tidbit:

Hello, we are supporting @seapowerband at @chalk_venue on the 30th of October. Hope you can make it!

Wait, that’s this very weekend! And I love Sea Power (formerly British Sea Power—they changed their name, which was a move that only annoyed the very people whose worldviews prompted the name change in the first place). How did I not know about this gig? And how are there tickets still available?

And that’s how I came to spend my Sunday evening rocking out to two great bands.

Wednesday, October 26th, 2022

Programming Portals

A terrific piece by Maggie Appleton that starts with a comparison of graphical user interfaces and command line tools—which reminds me of the trade-offs between seamless and seamful design—and then moves into a proposed paradigm for declarative design tools:

Small, scoped areas within a graphical interface that allow users to read and write simple programmes

Thursday, October 13th, 2022

The Customer Onboarding Handbook [PDF]

A free PDF with five articles:

  • The Brand Experience Gap by Emma Parnell
  • Ongoing Onboarding: Taking Customers from First Use to Product Mastery by Chris How
  • Just Add ICE: 3 Ways to Make Your Customer Experience More Meaningful by Candi Williams
  • Effectively Researching and Using Customer Journey Maps by Lauren Isaacson
  • Using and Abusing Surveys in Customer Onboarding by Caroline Jarrett

Three of those authors spoke at this year’s UX London!

Tuesday, August 30th, 2022

Improving the information architecture of the Smart Pension member app | Design and tech | Smart – retirement, savings and financial wellbeing

Here’s a really excellent, clearly-written case study that unfortunately includes this accurate observation:

In recent years the practice of information architecture has fallen out of fashion, which is a shame as you can’t design something successfully without it. If a user can’t find a feature, it’s game over - the feature may as well not exist as far as they’re concerned.

I also like this insight:

Burger menus are effective… at hiding things.