What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.
Thorough (and grim) research from Chris.
A terrific six-part series of short articles looking at the people behind the history of Artificial Intelligence, from Babbage to Turing to J. C. R. Licklider.
- When Charles Babbage Played Chess With the Original Mechanical Turk
- Invisible Women Programmed America’s First Electronic Computer
- Why Alan Turing Wanted AI Agents to Make Mistakes
- The DARPA Dreamer Who Aimed for Cyborg Intelligence
- Algorithmic Bias Was Born in the 1980s
- How Amazon’s Mechanical Turkers Got Squeezed Inside the Machine
The history of AI is often told as the story of machines getting smarter over time. What's lost in that narrative is the human element: how intelligent machines are designed, trained, and powered by human minds and bodies.
We hoped for a bicycle for the mind; we got a La-Z-Boy recliner for the mind.
Nicky Case on how Douglas Engelbart’s vision for human-computer augmentation has taken a turn from creation to consumption.
When you create a Human+AI team, the hard part isn’t the “AI”. It isn’t even the “Human”.
It’s the “+”.
Spot-on take by Ted Chiang:
I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations.
Related: if you want to see the paperclip maximiser in action, just look at the humans destroying the planet by mining bitcoin.
Questions prompted by the Clearleft gathering in Norway to discuss AI.
I like Richard’s five reminders:
- Just because the technology feels like magic, it doesn't mean making it understandable requires magic.
- Designers are going to need to get familiar with new materials to make things make sense to people.
- We need to make sure people have an option to object when something isn’t right.
- We should not fall into the trap of assuming the way to make machine learning understandable should be purely individualistic.
- We also need to think about how we design regulators too.
Here’s a fun cosmic hypothesis on the scale of an Olaf Stapledon story. There are even implications for data storage:
By storing its essential data in photons, life could give itself a distributed backup system. And it could go further, manipulating new photons emitted by stars to dictate how they interact with matter. Fronts of electromagnetic radiation could be reaching across the cosmos to set in motion chains of interstellar or planetary chemistry with exquisite timing, exploiting wave interference and excitation energies in atoms and molecules.
I, for one, welcome our slime mould overlords.
The slime mould is being used to explore biologically-inspired design, emergence theory, unconventional computing and robot controllers, much of which borders on the world of science fiction.
Vernor Vinge’s original 1993 motherlode of the singularity.
Wonderful musings from Matt on meeting the emerging machine intelligence halfway.
An excellent rebuttal by Steven Pinker to Nicholas Carr's usual trolling.
Crows is smart. And yes, I am using the "Bookmark this..." link at the end of the article.
The Dunbar number gets bandied about a lot in conversations about social networks these days. Here's the original paper that shows the research behind the oft-misused term.
A good, if somewhat dispiriting, overview of Artificial Intelligence. (There's some nice typesetting on this page)