The other side of egoism | A Working Library
Mandy takes a deep dive into the treatment of altruism in Ursula Le Guin’s The Dispossessed.
A great practical website to help you vote tactically in the upcoming local elections.
The story that “artificial intelligence” tells is a smoke screen. But smoke offers only temporary cover. It fades if it isn’t replenished.
I believe strongly in the indieweb principles of distributed ownership, control, and independence. For me, the important thing is that this is how we get to a diverse web. A web where everyone can define not just what they write but how they present is by definition far more expressive, diverse, and interesting than one where most online content and identities must be squished into templates created by a handful of companies based on their financial needs. In other words, the open web is far superior to a medium controlled by corporations in order to sell ads. The former encourages expression; the latter encourages consumerist conformity.
On Tyranny by Timothy Snyder is a very short book. Most of the time, this is a feature, not a bug.
There are plenty of non-fiction books I’ve read that definitely could’ve been much, much shorter. Books that have a good sensible idea, but one that could’ve been written on the back of a napkin instead of being expanded into an arbitrarily long form.
In the world of fiction, there’s the short story. I guess the equivalent in the non-fiction world is the essay. But On Tyranny isn’t an essay. It’s got chapters. They’re just really, really short.
Sometimes that brevity means that nuance goes out the window. What might’ve been a subtle argument that required paragraphs of pros and cons in another book gets reduced to a single sentence here. Mostly that’s okay.
The premise of the book is that Trump’s America is comparable to Europe in the 1930s:
We are no wiser than the Europeans who saw democracy yield to fascism, Nazism, or communism. Our one advantage is that we might learn from their experience.
But in making the comparison, Snyder goes all in. There’s very little accounting for the differences between the world of the early 20th century and the world of the early 21st century.
This becomes really apparent when it comes to technology. One piece of advice offered is:
Make an effort to separate yourself from the internet. Read books.
Wait. He’s not actually saying that words on screens are in some way inherently worse than words on paper, is he? Surely that’s just the nuance getting lost in the brevity, right?
Alas, no:
Staring at screens is perhaps unavoidable but the two-dimensional world makes little sense unless we can draw upon a mental armory that we have developed somewhere else. … So get screens out of your room and surround yourself with books.
I mean, I’m all for reading books. But books are about what’s in them, not what they’re made of. To value words on a page more than the same words on a screen is like judging a book by its cover; it’s judging a book by its atoms.
For a book that’s about defending liberty and progress, On Tyranny is puzzlingly conservative at times.
A well-written evisceration of cryptobollocks signed by Bruce Schneier, Tim Bray, Molly White, Cory Doctorow, and more.
If you’re a concerned US computer scientist, technologist or developer, you’ve got till June 10th to add your signature before this is submitted to Congress.
An ode to an ode. Both of them beautiful.
Hannah Steinkopf-Frank:
At its core, and despite its appropriation, Solarpunk imagines a radically different societal and economic structure.
Rationality does not work for ethical decisions. It can help you determine means (“what’s the best way to do this?”) but it can’t determine ends.
It isn’t even that great for means.
My talk on sci-fi and me for Beyond Tellerrand’s Stay Curious event was deliberately designed to be broad and expansive. This was in contrast to Steph’s talk which was deliberately narrow and focused on one topic. Specifically, it was all about solarpunk.
I first heard of solarpunk from Justin Pickard back in 2014 at an event I was hosting. He described it as:
individuals and communities harnessing the power of the photovoltaic solar panel to achieve energy-independence.
The sci-fi subgenre of solarpunk, then, is about these communities. The subgenre sets out to be deliberately positive, even utopian, in contrast to most sci-fi.
Most genres ending with the -punk suffix are about aesthetics. You know the way that cyberpunk is laptops, leather and sunglasses, and steampunk is zeppelins and top hats with goggles. Solarpunk is supposedly free of any such “look.” That said, all the examples I’ve seen seem to converge on the motto of “put a tree on it.” If a depiction of the future looks lush, verdant, fecund and green, chances are it’s solarpunk.
At least, it might be solarpunk. It would have to pass the criteria laid down by the gatekeepers. Solarpunk is manifesto-driven sci-fi. I’m not sure how I feel about that. It’s one thing to apply a category to a piece of writing after it’s been written, but it’s another to start with an agenda-driven category and proceed from there. And as with any kind of classification system, the edges are bound to be fuzzy, leading to endless debates about what’s in and what’s out (see also: UX, UI, service design, content design, product design, front-end development, and most ironically of all, information architecture).
When I met up with Steph to discuss our talk topics and she described the various schools of thought that reside under the umbrella of solarpunk, it reminded me of my college days. You wouldn’t have just one Marxist student group; there’d be multiple Marxist student groups, each with their own pillars of identity (Leninist, Trotskyist, anarcho-syndicalist, and so on). From the outside they all looked the same, but woe betide you if you mixed them up. It was exactly the kind of situation that was lampooned in Monty Python’s Life of Brian with its People’s Front of Judea and Judean People’s Front. Steph confirmed that those kinds of rifts also exist in solarpunk. It’s just like that bit in Gulliver’s Travels where nations go to war over the correct way to crack an egg.
But there’s general agreement about what broadly constitutes solarpunk. It’s a form of cli-fi (climate fiction) but with an upbeat spin: positive but plausible stories of the future that might feature communities, rewilding, gardening, farming, energy independence, or decentralisation. Centralised authority—in the form of governments and corporations—is not to be trusted.
That’s all well and good, but it reminds me of another community. Libertarian preppers. Heck, even some of the solarpunk examples feature seasteading (but with more trees).
Politically, preppers and solarpunks couldn’t be further apart. Practically, they seem more similar than either of them would be comfortable with.
Both communities distrust centralisation. For the libertarians, this manifests in a hatred of taxation. For solarpunks, it’s all about getting off the electricity grid. But both want to start their own separate self-sustaining communities.
Independence. Decentralisation. Self-sufficiency.
There’s a fine line between Atlas Shrugged and The Whole Earth Catalog.
I subscribe to Peter Gasston’s newsletter, The Tech Landscape. It’s good. Peter’s a smart guy with his finger on the pulse of many technologies that are beyond my ken. I recommend subscribing.
But I was very taken aback by what he wrote in issue 202. It was to do with algorithmic recommendation engines.
This week I want to take a little dump on a tweet I read. I’m not going to link to it (I’m not that person), but it basically said something like: “I’m afraid to Google something because I don’t want the algorithm to think I like it, and I’m afraid to click a link because I don’t want the algorithm to show me more like it… what a cage.”
I saw the same tweet. It resonated with me. I had responded with a link to a post I wrote a while back called Get safe. That post made two points:
But Peter describes ubiquitous surveillance as a feature, not a bug:
It’s observing what someone likes or does, then trying to make recommendations for more things like it—whether that’s books, TV shows, clothes, advertising, or whatever. It works on probability, so it’s going to make better guesses the more it knows you; if you like ten things of type A, then liking one thing of type B shouldn’t be enough to completely change its recommendations. The problem is, we don’t like “the algorithm” if it doesn’t work, and we don’t like it if it works too well (“creepy!”). But it’s not sinister, and it’s not a cage.
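For what it’s worth, the behaviour Peter describes can be sketched in a few lines. Here’s a minimal, hypothetical TypeScript version (made-up types and names, not any actual recommendation engine): score each candidate by how often you’ve liked things of its type, so ten likes of type A comfortably outweigh a single like of type B.

```typescript
// Hypothetical sketch of type-weighted recommendations, not any real engine.
type Item = { id: string; type: string };

function recommend(liked: Item[], candidates: Item[], count: number): Item[] {
  // Tally how many liked items fall into each type.
  const likesByType = new Map<string, number>();
  for (const item of liked) {
    likesByType.set(item.type, (likesByType.get(item.type) ?? 0) + 1);
  }

  // Rank candidates by the share of past likes that match their type.
  const score = (item: Item) =>
    (likesByType.get(item.type) ?? 0) / Math.max(liked.length, 1);
  return [...candidates].sort((a, b) => score(b) - score(a)).slice(0, count);
}
```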
He would be correct if the balance of power were tipped towards the person actively looking for recommendations. As I said in my earlier post:
Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies?
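To make that difference concrete, here’s a hypothetical sketch (the endpoint names are invented): the “like” is an explicit POST that the user chooses to send, while the passive tracking fires on every page view without the user doing anything at all.

```typescript
// Hypothetical endpoints, purely to illustrate where the power lies.

// Explicit: the user deliberately tells the server to update their profile.
async function likeItem(itemId: string): Promise<void> {
  await fetch("/api/likes", {
    method: "POST", // an intentional state change, so it should be a POST
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ itemId }),
  });
}

// Passive: every view is quietly recorded, with no action from the user at all.
function trackView(itemId: string): void {
  navigator.sendBeacon("/api/track", JSON.stringify({ itemId, viewedAt: Date.now() }));
}
```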
When Peter says “it’s not sinister, and it’s not a cage” that may be true for him, but that is not a shared feeling, as the original tweet demonstrates. I don’t think it’s fair to dismiss someone else’s psychological pain because you don’t think they “get it”. I’m pretty sure everyone “gets” how recommendation engines are supposed to work. That’s not the issue. Trying to provide relevant content isn’t the problem. It’s the unbelievably heavy-handed methods that make it feel like a cage.
Peter uses the metaphor of a record shop:
“The algorithm” is the best way to navigate a world of infinite choice; imagine you went to a record shop (remember them?) which had every recording ever released; how would you find new music? You’d either buy music by bands you know you already liked, or you’d take a pure gamble on something—which most of the time would be a miss. So you’d ask a store worker, and they’d recommend the music they liked—but that’s no guarantee you’d like it. A good worker would ask what type of music you like, and recommend music based on that—you might not like all the recommendations, but there’s more of a chance you’d like some. That’s just what “the algorithm” does.
But that’s not true. You don’t ask “the algorithm” for recommendations—it foists them on you whether you want them or not. A more apt metaphor would be that you walked by a record shop once and the store worker came out and followed you down the street, into your home, and watched your every move for the rest of your life.
What Peter describes sounds great—a helpful, knowledgeable software agent that you ask for recommendations. But that’s not what “the algorithm” is. And that’s why it feels like a cage. That’s why it is a cage.
The original tweet was an open, honest, and vulnerable insight into what online recommendation engines feel like. That’s a valuable insight that should be taken on board, not dismissed.
And what a lack of imagination to look at an existing broken system—that doesn’t even provide good recommendations while making people afraid to click on links—and shrug and say that this is the best we can do. If this really is “the best way to navigate a world of infinite choice” then it’s no wonder that people feel like they need to go on a digital detox and get away from their devices in order to feel normal. It’s like saying that decapitation is the best way of solving headaches.
Imagine living in a surveillance state like East Germany, and saying “Well, how else is the government supposed to make informed decisions without constantly monitoring its citizens?” I think it’s more likely that you’d feel like you’re in a cage.
Apples to oranges? Kind of. But whether it’s surveillance communism or surveillance capitalism, there’s a shared methodology at work. They’re both systems that disempower people for the supposedly greater good of amassing data. Both are built on the false premise that problems can be solved by getting more and more data. If that results in collateral damage to people’s privacy and mental health, well …it’s all for the greater good, right?
It’s fucking bullshit. I don’t want to live in that cage and I don’t want anyone else to have to live in it either. I’m going to do everything I can to tear it down.
Here’s the thing: we need politics in the workplace. Politics—that is, the act of negotiating our relationships and obligations to each other—is critical to the work of building and sustaining democracy. And the workplace isn’t separate from democracy—it is democracy. It is as much a part of the democratic system as a neighborhood association or a town council, as a library or youth center or food bank. By the very nature of the outsized role that work plays in our lives, it’s where most of us have the potential to make the biggest impact on how we—and our families and communities—live.
Mandy, as always, hits the nail on the head.
When we talk about politics belonging outside the workplace, we reduce democracy to an extracurricular instead of a core part of our lives. Democracy cannot be sustained by annual visits to the ballot box—it isn’t something we have, it’s something we practice. Like all things that require practice, if you don’t practice it often, you lose it.
The introduction to this critique of Keller Easterling’s Medium Design is all about seams:
Imagine the tech utopia of mainstream science fiction. The bustle of self-driving cars, helpful robot assistants, and holograms throughout the sparkling city square immediately marks this world apart from ours, but something else is different, something that can only be described in terms of ambiance. Everything is frictionless here: The streets are filled with commuters, as is the sky, but the vehicles attune their choreography to one another so precisely that there is never any traffic, only an endless smooth procession through space. The people radiate a sense of purpose; they are all on their way somewhere, or else, they have already arrived. There’s an overwhelming amount of activity on display at every corner, but it does not feel chaotic, because there is no visible strife or deprivation. We might appreciate its otherworldly beauty, but we need not question the underlying mechanics of this utopia — everything works because it was designed to work, and in this world, design governs the space we inhabit as surely and exactly as the laws of physics.
A fascinating look at the history of calendrical warfare.
From the very beginning, standardized global time zones were used as a means of demonstrating power. (They all revolve around the British empire’s GMT, after all.) A particularly striking example of this happened in Ireland. In 1880, when the United Kingdom of Great Britain and Ireland declared GMT the official time zone for all of Great Britain, Ireland was given its own time zone. Dublin Mean Time was twenty-five minutes behind GMT, in accordance with the island’s solar time. But in the aftermath of the 1916 Easter Rising, London’s House of Commons abolished the uniquely Irish time zone, folding Ireland into GMT, where it remains to this day.
RFC 8890 may be the closest thing we’ve got to a Hippocratic oath right now.
A community that agrees to principles that are informed by shared values can use them to navigate hard decisions.
Many discussions influenced this document, both inside and outside of the IETF and IAB. In particular, Edward Snowden’s comments regarding the priority of end users at IETF 93 and the HTML5 Priority of Constituencies were both influential.
Ted Chiang’s hot takes are like his short stories—punchy, powerful, and thought-provoking.
Maciej goes marching.
The protests are intentionally decentralized, using a jury-rigged combination of a popular message board, the group chat app Telegram, and in-person huddles at the protests.
This sounds like it shouldn’t possibly work, but the protesters are too young to know that it can’t work, so it works.
The Decolonial Atlas is a growing collection of maps which, in some way, help us to challenge our relationships with the land, people, and state. It’s based on the premise that cartography is not as objective as we’re made to believe.
For example: Names and Locations of the Top 100 People Killing the Planet — a cartogram showing the location of decision makers in the top 100 climate-hostile companies.
This map is a response to the pervasive myth that we can stop climate change if we just modify our personal behavior and buy more green products. Whether or not we separate our recycling, these corporations will go on trashing the planet unless we stop them.
This is a fascinating story of psychological manipulation and internal politics. It leaves me feeling queasy about the amount of power wielded by individuals in one single organisation.
I have no doubt that showing just the top outrageous tweets leads to more engagement. If you’re constantly hitting people with outlandish news stories they’ll open the app more often and interact and post about what they think so the cycle continues.