A great tool is not a universal tool; it’s a tool well suited to a specific problem.
The more universal a solution someone claims to have to whatever software engineering problem exists, and the more confident they are that it is a fully generalized solution, the more you should question them.
Although some communities have listed journalists as “essential workers,” no one claims that status for the keynote speaker. The “work” of being a keynote speaker feels even more ridiculous than usual these days.
Naomi Kritzer published a short story five years ago called So Much Cooking about a food blogger in lockdown during a pandemic. Prescient.
I left a lot of the details about the disease vague in the story, because what I wanted to talk about was not the science but the individuals struggling to get by as this crisis raged around them. There’s a common assumption that if the shit ever truly hit the fan, people would turn on one another like sharks turning on a wounded shark. In fact, the opposite usually happens: humans in disasters form tight community bonds, help their neighbors, offer what they can to the community.
- Wrong: web workers will take over the world
- Wrong: Safari is the new IE
- Right: developer experience is trumping user experience
- Right: I’m better off without a Twitter account
- Right: the cost of small modules
- Mixed: progressive enhancement isn’t dead, but it smells funny
Maybe I should do one of these.
I am not a believer in the AI singularity — the rapture of the nerds — that is, in the possibility of building a brain-in-a-box that will self-improve its own capabilities until it outstrips our ability to keep up; what CS professor and fellow SF author Vernor Vinge described as “the last invention humans will ever need to make”. But I do think we’re going to keep building more and more complicated systems that are opaque rather than transparent, and that launder our unspoken prejudices and encode them in our social environment. As our widely-deployed neural processors get more powerful, the decisions they take will become harder and harder to question or oppose. And that’s the real threat of AI — not killer robots, but “computer says no” without recourse to appeal.
I’m really enjoying this end-of-the-year round-up from people speaking their brains. It’s not over yet, but there’s already a lot of thoughtful stuff to read through.
Only a few years ago, I would need a whole team of developers to accomplish what can now be done with just a few amazing tools.
And I like this zinger from Geoff:
What you need to build a great website is restraint.
Old technology seldom just goes away. Whiteboards and LED screens join chalk blackboards, but don’t eliminate them. Landline phones get scarce, but not phones. Film cameras become rarities, but not cameras. Typewriters disappear, but not typing. And the technologies that seem to be the most outclassed may come back as the cult objects of aficionados—the vinyl record, for example. All this is to say that no one can tell us what will be obsolete in fifty years, but probably a lot less will be obsolete than we think.
A cli-fi short story by Paolo Bacigalupi.
Speculative fiction as a tool for change:
We need to think harder about the future and ask: What if our policies, institutions, and societies didn’t have to be organized as they are now? Good science fiction taps us into a rich seam of radical answers to this question.
This is the best explanation of quantum computing I’ve read. I mean, it’s not like I can judge its veracity, but I could actually understand it.
Given the nature of the long bet I’ve got running, I’m surprised that the Long Now Foundation are publishing on Medium. Wanna bet how long this particular URL will last?
From Frederik Pohl’s 1966 novel:
The remote-access computer transponder called the “joymaker” is your most valuable single possession in your new life. If you can imagine a combination of telephone, credit card, alarm clock, pocket bar, reference library, and full-time secretary, you will have sketched some of the functions provided by your joymaker.
Essentially, it is a transponder connecting you with the central computing facilities of the city in which you reside on a shared-time, self-programming basis.
Here are Luke’s notes from the talk I just gave at An Event Apart in Seattle.
I think our destination is neither utopia nor dystopia nor status quo, but protopia. Protopia is a state that is better today than yesterday, although it might be only a little better. Protopia is much much harder to visualize. Because a protopia contains as many new problems as new benefits, this complex interaction of working and broken is very hard to predict.
Kevin Kelly’s thoughts at the time he coined this term seven years ago:
No one wants to move to the future today. We are avoiding it. We don’t have much desire for life one hundred years from now. Many dread it. That makes it hard to take the future seriously. So we don’t take a generational perspective. We’re stuck in the short now. We also adopt the Singularity perspective: that imagining the future in 100 years is technically impossible. So there is no protopia we are reaching for.
James is writing a book. It sounds like a barrel of laughs.
In his brilliant new work, leading artist and writer James Bridle offers us a warning against the future in which the contemporary promise of a new technologically assisted Enlightenment may just deliver its opposite: an age of complex uncertainty, predictive algorithms, surveillance, and the hollowing out of empathy.
Gene Wolfe: A Science Fiction Legend on the Future-Altering Technologies We Forgot to Invent | The Polymath Project
We humans are not good at imagining the future. The future we see ends up looking a lot like the past with a few things tweaked or added on.
The transcript of a talk by Charles Stross on the perils of prediction and the lessons of the past. It echoes Ted Chiang’s observation that runaway AIs are already here, and they’re called corporations.
History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries—is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.
I’m talking about the very old, very slow AIs we call corporations, of course.
Six excellent mini essays from Lauren Beukes, Kim Stanley Robinson, Ken Liu, Hannu Rajaniemi, Alastair Reynolds and Aliette de Bodard.
I particularly like Kim Stanley Robinson’s thoughts on the function of science fiction:
Here’s how I think science fiction works aesthetically. It’s not prediction. It has, rather, a double action, like the lenses of 3D glasses. Through one lens, we make a serious attempt to portray a possible future. Through the other, we see our present metaphorically, in a kind of heroic simile that says, “It is as if our world is like this.” When these two visions merge, the artificial third dimension that pops into being is simply history. We see ourselves and our society and our planet “like giants plunged into the years”, as Marcel Proust put it. So really it’s the fourth dimension that leaps into view: deep time, and our place in it. Some readers can’t make that merger happen, so they don’t like science fiction; it shimmers irreally, it gives them a headache. But relax your eyes, and the results can be startling in their clarity.
Dave applies two quotes from sci-fi authors to the state of today’s web.
A good science fiction story should be able to predict not the automobile but the traffic jam.
The function of science fiction is not only to predict the future, but to prevent it.
Most technologies are overestimated in the short term. They are the shiny new thing. Artificial Intelligence has the distinction of having been the shiny new thing and being overestimated again and again, in the 1960s, in the 1980s, and I believe again now.
Rodney Brooks is not bullish on the current “marketing” of Artificial Intelligence. Riffing on Arthur C. Clarke’s third law, he points out that AI—as currently described—is indistinguishable from magic in all the wrong ways.
This is a problem we all have with imagined future technology. If it is far enough away from the technology we have and understand today, then we do not know its limitations. It becomes indistinguishable from magic.
Watch out for arguments about future technology which is magical. It can never be refuted. It is a faith-based argument, not a scientific argument.