This looks like an interesting alternative to TinyLetter for writing and sending email newsletters, like all the cool kids are doing.
A thoroughly enjoyable adventure game in your browser. You are the AI of a colony starship. Humanity’s future is in your hands.
The transcript of a talk by Charles Stross on the perils of prediction and the lessons of the past. It echoes Ted Chiang’s observation that runaway AIs are already here, and they’re called corporations.
History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries—is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.
I’m talking about the very old, very slow AIs we call corporations, of course.
I’m all in favour of HTTPS everywhere, but this kind of strong-arming just feels like blackmail to me.
All new CSS properties won’t work without HTTPS‽ Come on!
I thought Mozilla was better than this.
Making low effort/high impact changes to interfaces.
This reminds me of something we talk about a lot at Clearleft called “tiny lessons”—the idea that insights and learnings don’t always have to be big and groundbreaking; there’s disproportionate value in sharing the small things you learn along the way.
Suggestions for small interface tweaks.
Peter looks into his crystal ball for 2018 and sees computers with eyes, computers with ears, and computers with brains.
Slides from a conference talk with a really clear explanation of how await works with promises.
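The gist, as a minimal sketch (the fetchGreeting function here is invented purely for illustration): await lets you consume a promise as if the code were synchronous, instead of chaining .then() callbacks.

```javascript
// A function that returns a promise, resolving after half a second.
function fetchGreeting() {
  return new Promise((resolve) => {
    setTimeout(() => resolve('Hello, world!'), 500);
  });
}

async function main() {
  // Equivalent to fetchGreeting().then(greeting => …), but reads top to bottom.
  const greeting = await fetchGreeting();
  console.log(greeting);
}

main();
```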
Hit this URL to give yourself a design constraint (or obstruction). Kind of like Brian Eno’s Oblique Strategies, but with different categories of constraints: formal, methodological, and conceptual.
Spot-on take by Ted Chiang:
I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations.
Related: if you want to see the paperclip maximiser in action, just look at the humans destroying the planet by mining bitcoin.
This 1993 article by Mark Weiser is relevant to our world today.
Take intelligent agents. The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer I need to talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.
Oregon Trail, updated for our times. There should be appreciably less dysentery in this game.
Great advice on writing sensible comments in your code.
Questions prompted by the Clearleft gathering in Norway to discuss AI.
A really great case study of a code refactor by Mina, with particular emphasis on the benefits of CSS Grid, fluid typography, and accessibility.
JP Rangaswami also examines the rise of the platforms, but he’s got some ideas for a more sustainable future:
A part of me wants to evoke Jane Jacobs and Christopher Alexander when it comes to building sustainable platforms. The platform “community” needs to be cared for and looked after, the living spaces they inhabit need to be designed to last. Multipurpose rather than monoculture, diverse rather than homogeneous. Prior industrial models where entire communities would rely on a single industry need to be learnt from and avoided. We shouldn’t be building the rust belts of the future. We should be looking for the death and life of great platforms, for a pattern language for sustainable platforms.
I like Richard’s five reminders:
- Just because the technology feels magic, it doesn’t mean making it understandable requires magic.
- Designers are going to need to get familiar with new materials to make things make sense to people.
- We need to make sure people have an option to object when something isn’t right.
- We should not fall into the trap of assuming the way to make machine learning understandable should be purely individualistic.
- We also need to think about how we design regulators too.