Pictures of computers (of the human and machine varieties).
I love this idea of comparing human colour choices to those of a computer:
I decided to do two things: the top three most-used colours of the photo as decided by “a computer”, and my own hand-picked choices. This method ended up revealing a couple of things about me.
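The “computer” half of that comparison is easy enough to reproduce. Here’s a minimal sketch in Python, assuming the Pillow library is available; the function name and filename are my own inventions, as the post doesn’t share its code:

```python
# Tally every pixel and report the three most common colours.
# Pillow, the function name, and the filename are assumptions;
# the original post doesn't share its code.
from collections import Counter

from PIL import Image


def top_three_colours(path):
    image = Image.open(path).convert("RGB")
    # Quantise first so near-identical shades get counted as one colour
    image = image.quantize(colors=64).convert("RGB")
    counts = Counter(image.getdata())
    return [colour for colour, _ in counts.most_common(3)]


if __name__ == "__main__":
    for r, g, b in top_three_colours("photo.jpg"):  # hypothetical file
        print(f"#{r:02x}{g:02x}{b:02x}")
```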
I also love that this was the biggest obstacle to finding representative imagery:
I wanted this to be an exciting task but instead I only found repeated photos of my cat.
This could’a, should’a, would’a been a great blog post.
March 1981: Shakin’ Stevens was top of the charts, Tom Baker was leaving Doctor Who and Clive Sinclair was bringing computers to the masses. Britain was moving into a new age, and one object above all would herald its coming.
This is a rather beautiful piece of writing by Tom (especially the William Gibson bit at the end). This got me right in the feels:
Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things… have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That’s it: back to single-serving websites for single-serving use cases.
A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it’s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.
An ongoing timeline of computer technology in the form of blog posts by Sinclair Target (that’s a person, not a timeslipping transatlantic company merger).
This strikes me as a sensible way of thinking about machine learning: it’s like when we got relational databases—suddenly we could do more, quicker, and easier …but it doesn’t require us to treat the technology like it’s magic.
An important parallel here is that though relational databases had economy of scale effects, there were limited network or ‘winner takes all’ effects. The database being used by company A doesn’t get better if company B buys the same database software from the same vendor: Safeway’s database doesn’t get better if Caterpillar buys the same one. Much the same actually applies to machine learning: machine learning is all about data, but data is highly specific to particular applications. More handwriting data will make a handwriting recognizer better, and more gas turbine data will make a system that predicts failures in gas turbines better, but the one doesn’t help with the other. Data isn’t fungible.
Here’s a treasure trove of eighties nerd nostalgia:
In the 1980s, the BBC explored the world of computing in The Computer Literacy Project. They commissioned a home computer (the BBC Micro) and taught viewers how to program.
The Computer Literacy Project chronicled a decade of information technology and was a milestone in the history of computing in Britain, helping to inspire a generation of coders.
If only all documentation were as great as this old manual for the ZX Spectrum that Remy uncovered:
The manual is an instruction book on how to program the Spectrum. It’s a full book, with detailed directions and information on how the machine works and how the programming language works. It includes human-readable sentences explaining logic, and even goes so far as touching on which hex values perform which assembly functions.
When we talk about things being “inspiring”, it’s rarely in regards to computer manuals. But, damn, if this isn’t inspiring!
This book stirs a passion inside of me that tells me that I can make something new from an existing thing. It reminds me of 80s Lego boxes: unlike today’s, the back of the box would include pictures of creations that you could make with your Lego set. It didn’t include any instructions to do so, but it always made me think to myself: “I can make something more with these bricks”.
We hoped for a bicycle for the mind; we got a La-Z-Boy recliner for the mind.
Nicky Case on how Douglas Engelbart’s vision for human-computer augmentation has taken a turn from creation to consumption.
When you create a Human+AI team, the hard part isn’t the “AI”. It isn’t even the “Human”.
It’s the “+”.
The transcript of a terrific talk on the humane use of technology.
Instead of using technology to replace people, we should use it to augment ourselves to do things that were previously impossible, to help us make our lives better. That is the sweet spot of our technology. We have to accept human behaviour the way it is, not the way we would wish it to be.
This 1993 article by Mark Weiser is relevant to our world today.
Take intelligent agents. The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer I need to talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.
The title is pure clickbait, and the moral panic early in this article repeats the Toyota myth, but then it settles down into a fascinating examination of abstractions in programming. On the one hand, there’s the problem of not enough abstraction: having to write in code is such a computer-centric way of building things. On the other hand, our world is filled with dangerously abstracted systems:
When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.
So that’s a big problem.
Bret Victor, John Resig and Margaret Hamilton are featured. Doug Engelbart and J.C.R. Licklider aren’t mentioned but their spirits loom large.
Anecdotes about the development of Apple’s original Macintosh, and the people who made it.
Like a real-life Halt and Catch Fire.
The Long Now Foundation has been posting some great stuff on their blog lately. The latest is a look at orreries, clocks, and computers throughout history …and into the future.
A fascinating bit of technological archaeology tracing some of the oldest still-running software, from a COBOL program at the Pentagon to the firmware on the Voyager probes.
How computers work:
One day, a man named Alan Turing found a magic lamp, and rubbed it. Out popped a genie, and Turing wished for infinite wishes. Then we killed him for being gay, but we still have the wishes.
Then we networked computers together:
The network is ultimately not doing a favor for those in power, even if they think they’ve mastered it for now. It increases their power a bit, it increases the power of individuals immeasurably. We just have to learn to live in the age of networks.
We are all nodes in many networks. This is a beautiful description of how one of those networks operates.
A wonderful presentation by time-traveller Bret Victor.
A look at the depiction of computer hardware and peripherals in sci-fi movies over time.
A 1960 advertisement for IBM’s SAGE system …WOPR by another name.
To be ready for the worst so that the worst will never happen…