Tags: algorithm

Saturday, May 15th, 2021

The cage

I subscribe to Peter Gasston’s newsletter, The Tech Landscape. It’s good. Peter’s a smart guy with his finger on the pulse of many technologies that are beyond my ken. I recommend subscribing.

But I was very taken aback by what he wrote in issue 202. It was to do with algorithmic recommendation engines.

This week I want to take a little dump on a tweet I read. I’m not going to link to it (I’m not that person), but it basically said something like: “I’m afraid to Google something because I don’t want the algorithm to think I like it, and I’m afraid to click a link because I don’t want the algorithm to show me more like it… what a cage.”

I saw the same tweet. It resonated with me. I had responded with a link to a post I wrote a while back called Get safe. That post made two points:

  1. GET requests shouldn’t have side effects. Adding to a dossier on someone’s browsing habits definitely counts as a side effect.
  2. It is literally a fundamental principle of the web platform that it should be safe to visit a web page.

But Peter describes ubiquitous surveillance as a feature, not a bug:

It’s observing what someone likes or does, then trying to make recommendations for more things like it—whether that’s books, TV shows, clothes, advertising, or whatever. It works on probability, so it’s going to make better guesses the more it knows you; if you like ten things of type A, then liking one thing of type B shouldn’t be enough to completely change its recommendations. The problem is, we don’t like “the algorithm” if it doesn’t work, and we don’t like it if it works too well (“creepy!”). But it’s not sinister, and it’s not a cage.

He would be correct if the balance of power were tipped towards the person actively looking for recommendations. As I said in my earlier post:

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies?

When Peter says “it’s not sinister, and it’s not a cage” that may be true for him, but that is not a shared feeling, as the original tweet demonstrates. I don’t think it’s fair to dismiss someone else’s psychological pain because you don’t think they “get it”. I’m pretty sure everyone “gets” how recommendation engines are supposed to work. That’s not the issue. Trying to provide relevant content isn’t the problem. It’s the unbelievably heavy-handed methods that make it feel like a cage.

Peter uses the metaphor of a record shop:

“The algorithm” is the best way to navigate a world of infinite choice; imagine you went to a record shop (remember them?) which had every recording ever released; how would you find new music? You’d either buy music by bands you know you already liked, or you’d take a pure gamble on something—which most of the time would be a miss. So you’d ask a store worker, and they’d recommend the music they liked—but that’s no guarantee you’d like it. A good worker would ask what type of music you like, and recommend music based on that—you might not like all the recommendations, but there’s more of a chance you’d like some. That’s just what “the algorithm” does.

But that’s not true. You don’t ask “the algorithm” for recommendations—it foists them on you whether you want them or not. A more apt metaphor would be that you walked by a record shop once and the store worker came out and followed you down the street, into your home, and watched your every move for the rest of your life.

What Peter describes sounds great—a helpful, knowledgeable software agent that you ask for recommendations. But that’s not what “the algorithm” is. And that’s why it feels like a cage. That’s why it is a cage.

The original tweet was an open, honest, and vulnerable insight into what online recommendation engines feel like. That’s a valuable insight that should be taken on board, not dismissed.

And what a lack of imagination to look at an existing broken system—that doesn’t even provide good recommendations while making people afraid to click on links—and shrug and say that this is the best we can do. If this really is “the best way to navigate a world of infinite choice” then it’s no wonder that people feel like they need to go on a digital detox and get away from their devices in order to feel normal. It’s like saying that decapitation is the best way of solving headaches.

Imagine living in a surveillance state like East Germany, and saying “Well, how else is the government supposed to make informed decisions without constantly monitoring its citizens?” I think it’s more likely that you’d feel like you’re in a cage.

Apples to oranges? Kind of. But whether it’s surveillance communism or surveillance capitalism, there’s a shared methodology at work. They’re both systems that disempower people for the supposedly greater good of amassing data. Both are built on the false premise that problems can be solved by getting more and more data. If that results in collateral damage to people’s privacy and mental health, well …it’s all for the greater good, right?

It’s fucking bullshit. I don’t want to live in that cage and I don’t want anyone else to have to live in it either. I’m going to do everything I can to tear it down.

Thursday, July 30th, 2020

TheirTube

Theirtube is a YouTube filter bubble simulator that provides a look into how videos are recommended on other people’s YouTube. Users can experience how the YouTube home page would look for six different personas.

The source code is freely available.

Tuesday, February 18th, 2020

Utopia

Trys and James recently unveiled their Utopia project. They’ve been tinkering away at it behind the scenes for quite a while now.

You can check out the website and read the blog to get the details of how it accomplishes its goal:

Elegantly scale type and space without breakpoints.

I may well be biased, but I really like this project. I’ve been asking myself why I find it so appealing. Here are a few of the attributes of Utopia that strike a chord with me…

It’s collaborative

Collaboration is at the heart of Clearleft’s work. I know everyone says that, but we’ve definitely seen a direct correlation: projects with high levels of collaboration are invariably more successful than projects where people are siloed.

The genesis for Utopia came about after Trys and James worked together on a few different projects. It’s all too easy to let design and development splinter off into their own caves, but on these projects, Trys and James were working (literally) side by side. This meant that they could easily articulate frustrations to one another, and more important, they could easily share their excitement.

The end result of their collaboration is some very clever code. There’s an irony here. This code could be used to discourage collaboration! After all, why would designers and developers sit down together if they can just pass these numbers back and forth?

But I don’t think that Utopia will appeal to designers and developers who work in that way. Born in the spirit of collaboration, I suspect that it will mostly benefit people who value collaboration.

It’s intrinsic

If you’re a control freak, you may not like Utopia. The idea is that you specify the boundaries of what you’re trying to accomplish—minimum/maximum font sizes, minimum/maximum screen sizes, and some modular scales. Then you let the code—and the browser—do all the work.

On the one hand, this feels like surrendering control. But on the other hand, because the underlying system is so robust, it’s a way of guaranteeing quality, even in situations you haven’t accounted for.

If someone asks you, “What size will the body copy be when the viewport is 850 pixels wide?”, your answer would have to be “I don’t know …but I do know that it will be appropriate.”

This feels like a very declarative way of designing. It reminds me of the ethos behind Andy and Heydon’s site, Every Layout. They call it algorithmic layout design:

Employing algorithmic layout design means doing away with @media breakpoints, “magic numbers”, and other hacks, to create context-independent layout components. Your future design systems will be more consistent, terser in code, and more malleable in the hands of your users and their devices.

See how breakpoints are mentioned as being a very top-down approach to layout? Remember the tagline for Utopia, which aims for fluid responsive design?

Elegantly scale type and space without breakpoints.

Unsurprisingly, Andy really likes Utopia:

As the co-author of Every Layout, my head nearly fell off from all of the nodding when reading this because this is the exact sort of approach that we preach: setting some rules and letting the browser do the rest.

Heydon describes this mindset as automating intent. I really like that. I think that’s what Utopia does too.

As Heydon said at Patterns Day:

Be your browser’s mentor, not its micromanager.

The idea is that you give it rules, you give it axioms or principles to work on, and you let it do the calculation. You work with the in-built algorithms of the browser and of CSS itself.
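
To make that concrete, here’s a minimal sketch of handing a layout decision over to the browser. It’s my own illustrative CSS, in the spirit of these approaches rather than taken from Utopia or Every Layout:

```css
/* Give the browser a rule ("each column is at least 15rem") and let it
   work out how many columns fit and how wide each one should be.
   No media queries; the numbers are illustrative. */
.grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(15rem, 1fr));
  gap: 1rem;
}
```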

This is all possible thanks to improvements to CSS like calc, flexbox and grid. Jen calls this approach intrinsic web design. Last year, I liveblogged her excellent talk at An Event Apart called Designing Intrinsic Layouts.

Utopia feels like it has the same mindset as algorithmic layout design and intrinsic web design. Trys and James are building on the great work already out there, which brings me to the final property of Utopia that appeals to me…

It’s iterative

There isn’t actually much that’s new in Utopia. It’s a combination of existing techniques. I like that. As I said recently:

I’m a great believer in the HTML design principle, Evolution Not Revolution:

It is better to evolve an existing design rather than throwing it away.

First of all, Utopia uses the idea of modular scales in typography. Tim Brown has been championing this idea for years.

Then there’s the idea of typography being fluid and responsive—just like Jason Pamental has been speaking and writing about.

On the code side, Utopia wouldn’t be possible without the work of Mike Riethmuller and his breakthroughs on responsive and fluid typography, which led to Tim’s work on CSS locks.

Utopia takes these building blocks and combines them. So if you’re wondering if it would be a good tool for one of your projects, you can take an equally iterative approach by asking some questions…

Are you using fluid type?

Do your font-sizes increase in proportion to the width of the viewport? I don’t mean in sudden jumps with @media breakpoints—I mean some kind of relationship between font size and the vw (viewport width) unit. If so, you’re probably using some kind of mechanism to cap the minimum and maximum font sizes—CSS locks.
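
Here’s a minimal sketch of that kind of CSS lock, with illustrative numbers rather than Utopia’s generated output:

```css
/* Fluid type capped at both ends: 16px at a 320px-wide viewport,
   growing linearly to 24px at 1200px, then holding steady.
   All numbers are illustrative. */
html {
  font-size: 16px;
}
@media (min-width: 320px) {
  html {
    /* 880 = 1200 - 320, the range over which the size scales */
    font-size: calc(16px + 8 * (100vw - 320px) / 880);
  }
}
@media (min-width: 1200px) {
  html {
    font-size: 24px;
  }
}
```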

I’m using that technique on Resilient Web Design. But I’m not changing the relative difference between different sized elements—body copy, headings, etc.—as the screen size changes.

Are you using modular scales?

Does your type system have some kind of ratio that describes the increase in type sizes? You probably have more than one ratio (unlike Resilient Web Design). The ratio for small screens should probably be smaller than the ratio for big screens. But rather than jump from one ratio to another at an arbitrary breakpoint, Utopia allows the ratio to be fluid.

So it’s not just that font sizes are increasing as the screen gets larger; the comparative difference is also subtly changing. That means there’s never a sudden jump in font size at any time.
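
As a rough sketch of what a fluid ratio means in practice (my own illustrative numbers, using clamp() as a compact way of writing the lock):

```css
/* Step 2 of a modular scale with two different ratios:
   small screens: 16px base * 1.2^2   = 23px (rounded)
   large screens: 18px base * 1.333^2 = 32px (rounded)
   Interpolating between endpoints derived from two different ratios
   means the effective ratio itself changes fluidly with the viewport. */
h2 {
  font-size: clamp(23px, calc(23px + 9 * (100vw - 320px) / 880), 32px);
}
```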

Are you using custom properties?

A technical detail this, but the magic of Utopia relies on two powerful CSS features: calc() and custom properties. These two workhorses are used by Utopia to generate some CSS that you can stick at the start of your stylesheet. If you ever need to make changes, all the parameters are defined at the top of the code block. Tweak those numbers and watch everything cascade.

You’ll see that there’s one—and only one—media query in there. This is quite clever. Usually with CSS locks, you’d need to have a media query for every different font size in order to cap its growth at the maximum screen size. With Utopia, the screen width—100vw—is abstracted into a variable (a custom property). The media query then caps its value at the upper end of your CSS lock. So it doesn’t matter how many different font sizes you’re setting: because they all use that custom property, one single media query takes care of capping the growth of every font size declaration.
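
Here’s a rough sketch of that pattern. The names and numbers are mine for illustration, not Utopia’s actual generated code:

```css
/* --screen stands in for the viewport width in every lock.
   One media query pins it at the maximum, capping every font size
   at once. (This simplified sketch doesn't cap the lower end.) */
:root {
  --screen: 100vw;
}
@media (min-width: 1200px) {
  :root {
    --screen: 1200px; /* freeze the "viewport" used in every calc() below */
  }
}
body {
  font-size: calc(16px + 8 * (var(--screen) - 320px) / 880);
}
h1 {
  font-size: calc(28px + 20 * (var(--screen) - 320px) / 880);
}
```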

If you’re already using CSS locks, modular scales, and custom properties, Utopia is almost certainly going to be a good fit for you.

If you’re not yet using those techniques, but you’d like to, I highly recommend using Utopia on your next project.

Friday, November 22nd, 2019

Sacha Baron Cohen’s Keynote Address at ADL’s 2019 Never Is Now Summit on Anti-Semitism and Hate | Anti-Defamation League

On the internet, everything can appear equally legitimate. Breitbart resembles the BBC. The fictitious Protocols of the Elders of Zion look as valid as an ADL report. And the rantings of a lunatic seem as credible as the findings of a Nobel Prize winner. We have lost, it seems, a shared sense of the basic facts upon which democracy depends.

Wednesday, July 3rd, 2019

inessential: No Algorithms

My hypothesis: these algorithms — driven by the all-consuming need for engagement in order to sell ads — are part of what’s destroying western liberal democracy, and my app will not contribute to that.

Sunday, June 16th, 2019

Relearn CSS layout: Every Layout

A new site from Heydon and Andy that provides CSS algorithms for common layout patterns.

If you find yourself wrestling with CSS layout, it’s likely you’re making decisions for browsers they should be making themselves. Through a series of simple, composable layouts, Every Layout will teach you how to better harness the built-in algorithms that power browsers and CSS.

Sunday, June 24th, 2018

Derek Powazek - AI is Not a Community Management Strategy

A really excellent piece from Derek on the history of community management online.

You have to decide what your platform is for and what it’s not for. And, yeah, that means deciding who it’s for and who it’s not for (hint: it’s not bots, nor nazis). That’s not a job you can outsource. The tech won’t do it for you. Not just because it’s your job, but because outsourcing it won’t work. It never does.

Saturday, June 16th, 2018

Artificial Intelligence for more human interfaces | Christian Heilmann

An even-handed assessment of the benefits and dangers of machine learning.

Tuesday, May 29th, 2018

We are all trapped in the “Feed” – Om on Tech

No matter where I go on the Internet, I feel like I am trapped in the “feed,” held down by algorithms that are like axes trying to make bespoke shirts out of silk. And no one illustrates it better than Facebook and Twitter, two more services that should know better, but they don’t. Fake news, unintelligent information and radically dumb statements are getting more attention than what matters. The likes, retweets, re-posts are nothing more than steroids for noise. Even when you are sarcastic in your retweets or re-shares, the system has the understanding of a one-year-old monkey baby: it is a vote on popularity.

Superfan! — Sacha Judd

The transcript of a talk that is fantastic in every sense.

Fans are organised, motivated, creative, technical, and frankly flat-out awe-inspiring.

Saturday, April 7th, 2018

Future Ethics

Cennydd is writing (and self-publishing) a book on ethics and digital design. It will be released in September.

Technology is never neutral: it has inevitable social, political, and moral impact. The coming era of connected smart technologies, such as AI, autonomous vehicles, and the Internet of Things, demands trust: trust the tech industry has yet to fully earn.

Thursday, March 8th, 2018

New Dark Age: Technology, Knowledge and the End of the Future by James Bridle

James is writing a book. It sounds like a barrel of laughs.

In his brilliant new work, leading artist and writer James Bridle offers us a warning against the future in which the contemporary promise of a new technologically assisted Enlightenment may just deliver its opposite: an age of complex uncertainty, predictive algorithms, surveillance, and the hollowing out of empathy.

Thursday, March 1st, 2018

Fair Is Not the Default - Library - Google Design

Why building inclusive tech takes more than good intentions.

When we run focus groups, we joke that it’s only a matter of seconds before someone mentions Skynet or The Terminator in the context of artificial intelligence. As if we’ll go to sleep one day and wake up the next with robots marching to take over. Few things could be further from the truth. Instead, it’ll be human decisions that we made yesterday, or make today and tomorrow that will shape the future. So let’s make them together, with other people in mind.

Wednesday, December 20th, 2017

Future Historians Probably Won’t Understand Our Internet - The Atlantic

You can’t log into the same Facebook twice.

The world as we experience it seems to be growing more opaque. More of life now takes place on digital platforms that are different for everyone, closed to inspection, and massively technically complex. What we don’t know now about our current experience will resound through time in historians of the future knowing less, too. Maybe this era will be a new dark age, as resistant to analysis then as it has become now.

Thursday, December 14th, 2017

Curation - Snook.ca

In the name of holy engagement, the native experience of products like Twitter, Facebook, and Instagram is moving away from giving people the ability to curate. They do this by taking control away from you, the user. By showing what other people liked, or by showing recommendations, without any way to turn it off, they prevent people from creating a better experience for themselves.

Tuesday, October 3rd, 2017

A good science fiction story… - daverupert.com

Dave applies two quotes from sci-fi authors to the state of today’s web.

A good science fiction story should be able to predict not the automobile but the traffic jam.

—Frederik Pohl

The function of science fiction is not only to predict the future, but to prevent it.

—Ray Bradbury

Friday, September 22nd, 2017

Idle Words: Anatomy of a Moral Panic

The real story in this mess is not the threat that algorithms pose to Amazon shoppers, but the threat that algorithms pose to journalism. By forcing reporters to optimize every story for clicks, not giving them time to check or contextualize their reporting, and requiring them to race to publish follow-on articles on every topic, the clickbait economics of online media encourage carelessness and drama.

Monday, June 12th, 2017

Here are 3 legal cases from the future

  1. People v. Dronimos
  2. Writers v. A.I. Rowling
  3. The Algorithm Defense

Design in the Era of the Algorithm | Big Medium

The transcript of Josh’s fantastic talk on machine learning, voice, data, APIs, and all the other tools of algorithmic design:

The design and presentation of data is just as important as the underlying algorithm. Algorithmic interfaces are a huge part of our future, and getting their design right is critical—and very, very hard to do.

Josh put together ten design principles for conceiving, designing, and managing data-driven products. I’ve added them to my collection.

  1. Favor accuracy over speed
  2. Allow for ambiguity
  3. Add human judgment
  4. Advocate sunshine
  5. Embrace multiple systems
  6. Make it easy to contribute (accurate) data
  7. Root out bias and bad assumptions
  8. Give people control over their data
  9. Be loyal to the user
  10. Take responsibility

Wednesday, March 15th, 2017

Systems Smart Enough To Know When They’re Not Smart Enough | Big Medium

I can forgive our answer machines if they sometimes get it wrong. It’s less easy to forgive the confidence with which the bad answer is presented, giving the impression that the answer is definitive. That’s a design problem.