What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.
Sunday, April 28th, 2019
Sunday, June 12th, 2016
Saturday, March 5th, 2016
A new publication from MIT. It deliberately avoids the jargon that’s often part and parcel of peer-reviewed papers, and all of the articles are published under a Creative Commons attribution licence.
The first issue is dedicated to Marvin Minsky and features these superb articles, all of which are independently excellent but together form an even greater whole…
When the cybernetics movement began, the focus of science and engineering was on things like guiding a ballistic missile or controlling the temperature in an office. These problems were squarely in the man-made domain and were simple enough to apply the traditional divide-and-conquer method of scientific inquiry.
Science and engineering today, however, are focused on things like synthetic biology or artificial intelligence, where the problems are massively complex. These problems exceed our ability to stay within the domain of the artificial, and make it nearly impossible for us to divide them into existing disciplines.
This essay proposes a map for four domains of creative exploration—Science, Engineering, Design and Art—in an attempt to represent the antidisciplinary hypothesis: that knowledge can no longer be ascribed to, or produced within, disciplinary boundaries, but is entirely entangled.
The designers of complex adaptive systems are not strictly designing systems themselves. They are hinting those systems towards anticipated outcomes, from an array of existing interrelated systems. These are designers that do not understand themselves to be in the center of the system. Rather, they understand themselves to be participants, shaping the systems that interact with other forces, ideas, events and other designers. This essay is an exploration of what it means to participate.
As our technological and institutional creations have become more complex, our relationship to them has changed. We now relate to them as we once related to nature. Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.
Friday, July 24th, 2015
It is a sad and beautiful world.
Thanks to their work, there was a moment in history when neuroscience, psychiatry, computer science, mathematical logic, and artificial intelligence were all one thing, following an idea first glimpsed by Leibniz—that man, machine, number, and mind all use information as a universal currency. What appeared on the surface to be very different ingredients of the world—hunks of metal, lumps of gray matter, scratches of ink on a page—were profoundly interchangeable.
Monday, July 14th, 2014
A profile of Norbert Wiener, and how his star was eclipsed by Claude Shannon.
Monday, July 8th, 2013
A wonderful article looking at the influence that Vannevar Bush’s seminal article As We May Think had on the young Douglas Engelbart.
Monday, September 19th, 2011
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
One could easily imagine a similar set of laws being applied to the field of user experience and interface design:
- An interface may not injure a user or, through inaction, allow a user to come to harm.
- An interface must obey any orders given to it by users, except where such orders would conflict with the First Law.
- An interface must protect its own existence as long as such protection does not conflict with the First or Second Law.
Okay, that last one’s a bit of a stretch but you get the idea.
In his later works, Asimov added a zeroth law that supersedes the initial three:
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
I think that this can also apply to user experience and interface design.
Take the password anti-pattern (please!). On the level of an individual site, it could be considered a benefit to the current user, allowing them to quickly and easily hand over lots of information about their contacts. But taken on the wider level, it teaches people that it’s okay to hand over their email password to third-party sites. The net result of reinforcing that behaviour is definitely not good for the web as a whole.
I’m proposing a zeroth law of user experience that goes beyond the existing paradigm of user-centred design:
- An interface may not harm the web, or, by inaction, allow the web to come to harm.
Wednesday, September 17th, 2008
Judging from the research information collected on Delicious, Flickr and Last.fm, this book proposal—tying together informatics, music and games—could blossom into a great read.