50 Years Later, We’re Still Living in the Xerox Alto’s World - IEEE Spectrum
A profile of the Xerox Alto and the people behind it.
But a machine for writing isn’t the same as a machine that writes for you. A machine for viewing photos isn’t the same thing as a machine that travels in your stead. A machine for sketching isn’t the same thing as a machine that designs. I love doing these things and doing them more efficiently. But I have no desire to have them done for me. It’s a key distinction: Do not automate the work you are engaged in, only the materials.
A fascinating look at what it might take to create a truly sustainable long-term computer.
I don’t think I agree with Don Knuth’s argument here from a 2014 lecture, but I do like how he sets out his table:
Why do I, as a scientist, get so much out of reading the history of science? Let me count the ways:
- To understand the process of discovery—not so much what was discovered, but how it was discovered.
- To understand the process of failure.
- To celebrate the contributions of many cultures.
- Telling historical stories is the best way to teach.
- To learn how to cope with life.
- To become more familiar with the world, and to know how science fits into the overall history of mankind.
I need to seek out this documentary, Top Secret Rosies: The Female Computers of World War II.
It would pair nicely with another film, The Eniac Programmers Project.
In 1990, the science fiction writer Douglas Adams produced a “fantasy documentary” for the BBC called Hyperland. It’s a magnificent paleo-futuristic artifact, rich in sideways predictions about the technologies of tomorrow.
I remember coming across a repeating loop of this documentary playing in a dusty corner of a Smithsonian museum in Washington DC. Douglas Adams wasn’t credited but I recognised his voice.
Hyperland aired on the BBC a full year before the World Wide Web. It is a prophecy waylaid in time: the technology it predicts is not the Web. It’s what William Gibson might call a “stub,” evidence of a dead node in the timeline, a three-point turn where history took a pause and backed out before heading elsewhere.
Here, Claire L. Evans uses Adams’s documentary as an opening to dive into the history of hypertext starting with Bush’s Memex, Nelson’s Xanadu and Engelbart’s oNLine System. But then she describes some lesser-known hypertext systems…
In 1985, the students at Brown who encountered Intermedia had never seen anything like it before in their lives. The system laid a world of information at their fingertips, saved them hours at the library, and helped them work through tangles of thought.
Claire L. Evans on computational slime molds and other forms of unconventional computing that look beyond silicon:
In moments of technological frustration, it helps to remember that a computer is basically a rock. That is its fundamental witchcraft, or ours: for all its processing power, the device that runs your life is just a complex arrangement of minerals animated by electricity and language. Smart rocks.
Portrait of the genius as a young man.
It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.
This is a great piece! It starts with a look back at some of the great minds of the nineteenth century: Herschel, Darwin, Babbage and Lovelace. Then it brings us, via JCR Licklider, to the present state of the web before looking ahead to what the future might bring.
So what will the life of an interface designer be like in the year 2120? Or 2121, even? A nice round 300 years after Babbage first had the idea of calculations being executed by steam.
I think there are some missteps along the way (I certainly don’t think that inline styles—AKA CSS in JS—are necessarily a move forwards) but I love the idea of applying chaos engineering to web design:
Think of every characteristic of an interface you depend on to not ‘fail’ for your design to ‘work.’ Now imagine if these services were randomly ‘failing’ constantly during your design process. How might we design differently? How would our workflows and priorities change?
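As a thought experiment, here’s what that might look like in practice: a minimal, development-only sketch (in TypeScript, with an entirely hypothetical list of failures and a made-up `data-enhanced` hook — none of it from the original article) that randomly knocks out one thing the design depends on, once per page load:

```typescript
// Chaos engineering for web design, sketched: on each page load during
// development, simulate one "failing" dependency the design relies on.
// The failure list below is illustrative, not exhaustive.

type Failure = { name: string; apply: () => void };

const failures: Failure[] = [
  {
    name: "no custom CSS",
    apply: () =>
      document
        .querySelectorAll<HTMLLinkElement>('link[rel="stylesheet"]')
        .forEach((link) => (link.disabled = true)),
  },
  {
    name: "no images",
    apply: () =>
      document
        .querySelectorAll<HTMLImageElement>("img")
        .forEach((img) => img.removeAttribute("src")),
  },
  {
    name: "no JavaScript enhancements",
    // Assumes enhanced widgets are marked with a (hypothetical)
    // data-enhanced attribute; removing them approximates script failure.
    apply: () =>
      document.querySelectorAll("[data-enhanced]").forEach((el) => el.remove()),
  },
];

// Pick one failure at random so every refresh exercises a different gap.
const chaos = failures[Math.floor(Math.random() * failures.length)];
console.info(`Chaos mode: simulating "${chaos.name}"`);
chaos.apply();
```

Reloading the page over and over then acts as the chaos monkey: each refresh shows the design coping, or failing to cope, with a different missing dependency.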
What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.
Pictures of computers (of the human and machine varieties).
I love this idea of comparing human colour choices to those of a computer:
I decided to do two things: the top three most used colours of the photo decided by “a computer” and my hand picked choices. This method ended up revealing a couple of things about me.
I also love that this was the biggest obstacle to finding representative imagery:
I wanted this to be an exciting task but instead I only found repeated photos of my cat.
This could’a, should’a, would’a been a great blog post.
March 1981: Shakin’ Stevens was top of the charts, Tom Baker was leaving Doctor Who and Clive Sinclair was bringing computers to the masses. Britain was moving into a new age, and one object above all would herald its coming.
This is a rather beautiful piece of writing by Tom (especially the William Gibson bit at the end). This got me right in the feels:
Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things… have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That’s it: back to single-serving websites for single-serving use cases.
A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it’s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.
An ongoing timeline of computer technology in the form of blog posts by Sinclair Target (that’s a person, not a timeslipping transatlantic company merger).
This strikes me as a sensible way of thinking about machine learning: it’s like when we got relational databases—suddenly we could do more, quicker, and easier… but it doesn’t require us to treat the technology like it’s magic.
An important parallel here is that though relational databases had economy of scale effects, there were limited network or ‘winner takes all’ effects. The database being used by company A doesn’t get better if company B buys the same database software from the same vendor: Safeway’s database doesn’t get better if Caterpillar buys the same one. Much the same actually applies to machine learning: machine learning is all about data, but data is highly specific to particular applications. More handwriting data will make a handwriting recognizer better, and more gas turbine data will make a system that predicts failures in gas turbines better, but the one doesn’t help with the other. Data isn’t fungible.
Here’s a treasure trove of eighties nerd nostalgia:
In the 1980s, the BBC explored the world of computing in The Computer Literacy Project. They commissioned a home computer (the BBC Micro) and taught viewers how to program.
The Computer Literacy Project chronicled a decade of information technology and was a milestone in the history of computing in Britain, helping to inspire a generation of coders.
If only all documentation was as great as this old manual for the ZX Spectrum that Remy uncovered:
The manual is an instruction book on how to program the Spectrum. It’s a full book, with detailed directions and information on how the machine works and how the programming language works. It includes human-readable sentences explaining logic, and even goes so far as touching on which hex values perform which assembly functions.
When we talk about things being “inspiring”, it’s rarely in regards to computer manuals. But, damn, if this isn’t inspiring!
This book stirs a passion inside of me that tells me that I can make something new from an existing thing. It reminds me of the 80s Lego boxes: unlike today’s Lego boxes, the backs would include pictures of creations that you could make with your Lego set. There were no instructions for building them, but they always made me think to myself: “I can make something more with these bricks”.
We hoped for a bicycle for the mind; we got a La-Z-Boy recliner for the mind.
Nicky Case on how Douglas Engelbart’s vision for human-computer augmentation has taken a turn from creation to consumption.
When you create a Human+AI team, the hard part isn’t the “AI”. It isn’t even the “Human”.
It’s the “+”.
Happy birthday to the first computer I ever used.
It had 1K of storage. 1K! (When I got the brick-like 16K RAM pack to expand the storage, it was like gaining access to infinity.)
Update: Brendan has written down his ZX81 memories.