The newest dConstruct podcast episode features the indefatigable and effervescent Brian David Johnson. Together we pick apart the futures we are collectively making, probe the algorithmic structures of science fiction narratives, and pay homage to Asimovian robotic legal codes.
Brian’s enthusiasm is infectious. I have a strong hunch that his dConstruct talk will be both thought-provoking and inspiring.
dConstruct 2015 is getting close now. Our future approaches. Interviewing the speakers ahead of time has only increased my excitement and anticipation. I think this is going to be a truly unmissable event. So, uh, don’t miss it.
I had a whole day of good talks yesterday at South By Southwest …and none of them were in the Austin Convention Center. In a very real sense, the good stuff at this event is getting pushed to the periphery.
The day started off in the Driskill Hotel with the New Aesthetic panel that James assembled. It was great, like a mini-conference packed into one hour with wonderfully dense knowledge bombs lobbed from all concerned. Joanne McNeil gave us the literary background, Ben searched for meaning (and humour) in advertising trends, Russell looked at how machines are changing what we read and write, and Aaron …um, talked about the helium-balloon predator drone in the corner of the room.
With our brains primed for the intersections where humans and machines meet, it wasn’t hard to keep pattern-matching for it. In fact, the panel right afterwards on technology and fashion was filled with wonderful wearable expressions of the New Aesthetic.
Alas, I wasn’t able to attend that panel because I had to get to the green room to prepare for my own appearance on Get Excited and Make Things With Science with Ariel and Matt. It was a lot of fun and it was a real pleasure to be on a panel with such smart people.
I basically used the panel as an opportunity to geek out about some of my favourite science-related hacks and websites:
Jon Ronson described the strange experience of interviewing her—how the questions always tended to the profound and meaningful rather than trivial and chatty. Sure enough, once Bina was (literally) unveiled on the panel—a move that was wisely left till halfway through because, as the panelists said, “after that, you’re not going to pay attention to a word we say”—people started asking questions like “Do you dream?” and “What is the meaning of life?”
I asked her “Where were you before you were here?” She calmly answered that she was made in Texas. The New Aesthetic panelists would’ve loved her.
I was surprised by how much discussion of digital preservation there was on the robots/AI panel. Then again, the panel was hosted by a researcher from The Digital Beyond.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
One could easily imagine a similar set of laws being applied to the field of user experience and interface design:
An interface may not injure a user or, through inaction, allow a user to come to harm.
An interface must obey any orders given to it by users, except where such orders would conflict with the First Law.
An interface must protect its own existence as long as such protection does not conflict with the First or Second Law.
Okay, that last one’s a bit of a stretch but you get the idea.
In his later works Asimov added the zeroth law that supersedes the initial three laws:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
I think that this can also apply to user experience and interface design.
Take the password anti-pattern (please!). On the level of an individual site, it could be considered a benefit to the current user, allowing them to quickly and easily hand over lots of information about their contacts. But taken on the wider level, it teaches people that it’s okay to hand over their email password to third-party sites. The net result of reinforcing that behaviour is definitely not good for the web as a whole.
I’m proposing a zeroth law of user experience that goes beyond the existing paradigm of user-centred design:
An interface may not harm the web, or, by inaction, allow the web to come to harm.
The word “awesome” is over-used. I’m about to over-use it some more.
The internet is mostly awesome. Some human beings are also awesome. When you combine the two, you get awesome things. Here are just two such awesome things:
Anton Peck is a brilliant illustrator. He’s currently executing a project called 100 Little Robots. Anton will craft postcards for 100 people, each postcard displaying a unique hand-drawn robot. If you want to be one of those 100 people, order your robot card now. I got mine and it is, well …awesome.
I’ve always thought that Brighton has a lot of steampunk appeal. Quite apart from the potential for criminal mastermind lairs within the Victorian sewers, there are a whole slew of wonderful inventions from the mind of Magnus Volk.
During the autumn of 1932 a group of curious onlookers assembled in Brighton, England to see inventor Harry May’s latest invention, Alpha the robot. The mechanical man was controlled by verbal commands and sat in a chair silently while May carefully placed a gun in Alpha’s hand.
It all goes horribly awry according to contemporary reports, doubtless exaggerated. I, for one, welcome our new metal overlords.
When commanded, the robot lowered its arm, raised the other, lowered it, turned its head from side to side, opened and closed its prognathous jaw, sat down. Then Impresario May asked Alpha a question:
“How old are you?”
From the robot’s interior a cavernous Cockney voice responded:
May: What do you weigh?
Alpha: One ton.
A dozen other questions and answers followed, some elaborately facetious. When May inquired what the automaton liked to eat, it responded with a minute-long discourse on the virtues of toast made with Macy’s automatic electric toaster.
The Flash on the Beach conference is currently underway here in Brighton. I spoke at the conference two years ago, so thanks to organiser John Davey’s commitment to giving past speakers guest passes to future events, I’ve been popping in and out of the Dome over the past couple of days to sit in on some talks.
Yesterday I saw Branden Hall talk about Brilliant Ideas that I’ve Blatantly Stolen. Although his specific examples dealt with ActionScript, his overall message was applicable to any developer: look around at other languages and frameworks and scavenge anything you like the look of.
I wanted to make it to Aral’s talk this morning but as he was on first thing and I’m a lazy bugger, that didn’t really work out. I did, however, make it over in time to hear Carla Diana.
Carla made her name in the Flash world a few years ago with her wonderful site Repercussion where you can play around with sounds through a lovely isometric interface. Lately she’s been working with robots. Or rather, one robot in particular: Leo.
Carla’s job was to come up with a skin for Leo that didn’t send children running screaming. Yes, it’s the problem that plagues Japanese robots and Robert Zemeckis CGI movies in equal measure: the uncanny valley.
Want to see something uncanny?
I was at Carla’s talk with Sophie and we were talking about robots afterwards (as you would). She said that watching robots in motion often makes her feel sad. Looking at that video, particularly the bit where the quadruped is kicked to demonstrate its balance, I understand what she means.
Funnily enough, my favourite robot is also a quadruped. All I want for Christmas is a tachikoma.
Or maybe I should just build my own. The latest project that Carla Diana is working on is something to make the Arduino enthusiast drool. It’s called littleBits:
littleBits is an opensource library of discrete electronic components pre-assembled in tiny circuit boards. Just as Legos allow you to create complex structures with very little engineering knowledge, littleBits are simple, intuitive, space-sensitive blocks that make prototyping with sophisticated electronics a matter of snapping small magnets together.
Despite being a huge Pixar fan, I still haven’t seen Wall•E. That’s mostly due to my belief that a typical cinema is not necessarily the best viewing environment for any movie, but particularly for one that you want to get really engrossed in …unless the cinema is empty of humans.
I’m not sure if I can hold out much longer though, especially after reading this wonderful story about how the people at Pixar responded to one blogger’s reaction to seeing the first trailer for the movie last year. Eda Cherry describes herself as having a strong fondness for robots so Wall•E is already pushing all the right buttons. The moment when he says his own name is the moment that pushes her over the edge — it makes her cry every time. Partly it’s the robot’s droopy eyes as he looks up into space but also:
It’s the voice modulation.
That would be Ben Burtt Jr. I remember as a child receiving the quarterly Star Wars fan club newsletter, Bantha Tracks, and reading about the amazing amount of found sounds that went into the soundscape of that galaxy far, far away: animal noises, broken TV sets, tuning forks tapped against high-tension wires. And of course R2D2, voiced by Ben Burtt himself.
Now, with Wall•E, he’s voicing another lovable robot, one capable of moving humans to tears. His involvement is no coincidence. In the initial brainstorming for the project, John Lasseter repeatedly described it as R2D2: The Movie.
The journey involved in turning that initial idea into a finished film is a long one. For a closer look at the process at Pixar, be sure to read Peter Merholz’s chat with Michael B. Johnson. Their storyboarding process sounds a lot like wireframing:
We’d much rather fail with a bunch of sketches that we did (relatively) quickly and cheaply, than once we’ve modeled, rigged, shaded, animated, and lit the film. Fail fast, that’s the mantra.