The Splintered Mind: The Black Hole Objection to Longtermism and Consequentialism
Stick a singularity in your “effective altruism” pipe and smoke it.
A beautiful meditation on Christopher Alexander by Claire L. Evans.
This is a terrific analysis of why frameworks exist, with nods to David Hume’s is-ought problem: the native features are what is, and the framework features are what somebody thinks ought to be.
I’ve been saying at conferences for years now that if you choose to use a framework, you need to understand that you are also taking on the philosophy and worldview of the creators of that framework. This post does a great job of explaining that.
I’ve only read three of these five books, but after reading this rollicking interview with Eric Schwitzgebel, I’ve added the other two recommendations to my wishlist.
I should emphasize that rejecting longtermism does not mean that one must reject long-term thinking. You ought to care equally about people no matter when they exist, whether today, next year, or in a couple billion years hence. If we shouldn’t discriminate against people based on their spatial distance from us, we shouldn’t discriminate against them based on their temporal distance, either. Many of the problems we face today, such as climate change, will have devastating consequences for future generations hundreds or thousands of years in the future. That should matter. We should be willing to make sacrifices for their wellbeing, just as we make sacrifices for those alive today by donating to charities that fight global poverty. But this does not mean that one must genuflect before the altar of “future value” or “our potential,” understood in techno-Utopian terms of colonizing space, becoming posthuman, subjugating the natural world, maximizing economic productivity, and creating massive computer simulations stuffed with 10^45 digital beings.
Rationality does not work for ethical decisions. It can help you determine means (“what’s the best way to do this?”), but it can’t determine ends.
It isn’t even that great for means.
The parallels between Alex Garland’s Devs and Tom Stoppard’s Arcadia.
Portrait of the genius as a young man.
It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.
A brilliantly written piece by Laurie Penny. Devastating, funny, and sad, featuring journalistic gold like this:
John McAfee has never been convicted of rape and murder, but—crucially—not in the same way that you or I have never been convicted of rape and murder.
A terrific cautionary look at the history of machine learning and artificial intelligence from the new laugh-a-minute book by James.
A remarkably practical in-depth guide to making ethical design decisions, with enjoyable diversions into the history of philosophy throughout.
A lovely profile of the lovely In Our Time.
In part because “In Our Time” is unconnected to things that are coming out, things happening right this minute, things being promoted, it feels aligned with the eternal rather than the temporal, and is therefore escapist without being junk.
Anyone remember the site After Our Time?
Boxman’s talk about complexity, reasoning, philosophy, and design is soooo good!
The transcript of a presentation on the intersection of ethics and accessibility.
If you subtract the flying cars and the jets of flame shooting out of the top of Los Angeles buildings, it’s not a far-off place. It’s fortunes earned off the backs of slaves, and deciding who gets to count as human. It’s impossible tests with impossible questions and impossible answers. It’s having empathy for the right things if you know what’s good for you. It’s death for those who seek freedom.
A thought-provoking first watch of Blade Runner …with an equally provocative interpretation in the comments:
The tragedy is not that they’re just like people and they’re being hunted down; that’s way too simplistic a reading. The tragedy is that they have been deliberately built to not be just like people, and they want to be and don’t know how.
That’s what really struck me about Kazuo Ishiguro’s Never Let Me Go: the tragedy is that these people can’t take action. “Run! Leave! Go!” you want to scream at them, but you might as well tell someone “Fly! Why don’t you just fly?”
The latest video from Patterns Day is up—Ellen’s superb philosophical presentation: Patterns in Language, Language in Patterns.
There’s so much packed into this one, it might take more than one viewing to take it all in.
Some of the explanations get a little ranty, but Heydon’s collection of observed fallacies rings true:
I’ve definitely had the Luddite fallacy and the scale fallacy thrown in my face as QEDs.
The ‘made at Facebook’ fallacy is pretty much identical to what I’ve been calling the fallacy of assumed competency: copying something that large corporation X is doing just because large corporation X is doing it.
A fascinating detective story of the Enlightenment, told from a very personal perspective.
There’s more than a whiff of Indie Web thinking in this sequel to the Cluetrain Manifesto from Doc Searls and David Weinberger.
The Net’s super-power is connection without permission. Its almighty power is that we can make of it whatever we want.
It’s quite lawn-off-getty …but I also happen to agree with pretty much all of it.
Although it’s kind of weird that it’s published on somebody else’s website.
A really great interview with Nick Bostrom about humanity’s long-term future and the odds of extinction.