Thursday, November 12th, 2020
Monday, December 16th, 2019
I am not a believer in the AI singularity — the rapture of the nerds — that is, in the possibility of building a brain-in-a-box that will self-improve its own capabilities until it outstrips our ability to keep up: what CS professor and fellow SF author Vernor Vinge described as “the last invention humans will ever need to make”. But I do think we’re going to keep building more and more complicated systems that are opaque rather than transparent, and that launder our unspoken prejudices and encode them in our social environment. As our widely-deployed neural processors get more powerful, the decisions they take will become harder and harder to question or oppose. And that’s the real threat of AI — not killer robots, but “computer says no” without recourse to appeal.
Wednesday, April 24th, 2019
A terrific six-part series of short articles looking at the people behind the history of Artificial Intelligence, from Babbage to Turing to JCR Licklider.
- When Charles Babbage Played Chess With the Original Mechanical Turk
- Invisible Women Programmed America’s First Electronic Computer
- Why Alan Turing Wanted AI Agents to Make Mistakes
- The DARPA Dreamer Who Aimed for Cyborg Intelligence
- Algorithmic Bias Was Born in the 1980s
- How Amazon’s Mechanical Turkers Got Squeezed Inside the Machine
The history of AI is often told as the story of machines getting smarter over time. What’s lost in that narrative is the human element: how intelligent machines are designed, trained, and powered by human minds and bodies.
Sunday, February 24th, 2019
Programming lessons from Umberto Eco and Emily Wilson.
Converting the analog into the digital requires discretization, leaving things out. What we filter out—or what we focus on—depends on our biases. How do conventional translators handle issues of bias? What can programmers learn from them?
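That point about discretization can be made concrete. A minimal sketch (the function name and values are illustrative, not from the linked piece): quantizing a continuous value into a fixed number of buckets necessarily collapses distinct readings together, and the choice of resolution is itself a decision about what to keep and what to filter out.

```python
def quantize(value, levels):
    """Map a float in [0, 1) to one of `levels` discrete buckets."""
    return min(int(value * levels), levels - 1)

# Two distinct analog readings...
a, b = 0.42, 0.44

# ...become indistinguishable at coarse resolution:
assert quantize(a, 10) == quantize(b, 10)

# ...but stay distinct at finer resolution:
assert quantize(a, 100) != quantize(b, 100)
```

Whoever picks `levels` — or, more generally, picks the sampling scheme — decides which differences survive the conversion and which disappear.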
Friday, January 18th, 2019
Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care.
Thursday, December 6th, 2018
Monday, December 3rd, 2018
Absolutely spot on! And it cuts both ways:
Tuesday, June 19th, 2018
A terrific cautionary look at the history of machine learning and artificial intelligence from the new laugh-a-minute book by James.
Saturday, June 16th, 2018
An even-handed assessment of the benefits and dangers of machine learning.
Monday, May 7th, 2018
It’s upsetting to realize that you may be in a senior position because of the system of privilege that got you there. It’s upsetting to realize that there are people who aren’t at that rank who are more qualified than you, but who haven’t benefited from the same privilege you did.
So here’s what I can do about it:
- Start sponsoring members of underrepresented groups
- Listen to marginalized people, and believe them
- Do “the homework” to be a better mentor
Saturday, April 7th, 2018
Cennydd is writing (and self-publishing) a book on ethics and digital design. It will be released in September.
Technology is never neutral: it has inevitable social, political, and moral impact. The coming era of connected smart technologies, such as AI, autonomous vehicles, and the Internet of Things, demands trust: trust the tech industry has yet to fully earn.
Thursday, March 1st, 2018
Why building inclusive tech takes more than good intentions.
When we run focus groups, we joke that it’s only a matter of seconds before someone mentions Skynet or The Terminator in the context of artificial intelligence. As if we’ll go to sleep one day and wake up the next with robots marching to take over. Few things could be further from the truth. Instead, it’ll be human decisions that we made yesterday, or make today and tomorrow that will shape the future. So let’s make them together, with other people in mind.
Sunday, December 3rd, 2017
Tuesday, August 22nd, 2017
If research on biases has told us anything, it is that humans make better decisions when we learn to recognize and correct for bias.
Sunday, August 6th, 2017
A series of questions to ask on any design project:
- What are my lenses?
- Am I just confirming my assumptions, or am I challenging them?
- What details here are unfair? Unverified? Unused?
- Am I holding onto something that I need to let go of?
- What’s here that I designed for me? What’s here that I designed for other people?
- What would the world look like if my assumptions were wrong?
- Who might disagree with what I’m designing?
- Who might be impacted by what I’m designing?
- What do I believe?
- Who’s someone I’m nervous to talk to about this?
- Is my audience open to change?
- What am I challenging as I create this?
- How can I reframe a mistake in a way that helps me learn?
- How does my approach to this problem today compare to how I might have approached this one year ago?
- If I could learn one thing to help me on this project, what would that one thing be?
- Do I need to slow down?
Friday, July 14th, 2017
A great short talk by Tim. It’s about performance, but so much more too.
Saturday, October 29th, 2016
When it seems like all our online activity is being tracked by Google, Facebook, and co., it comforts me to think of all the untracked usage out there, from shared (or fake) Facebook accounts to the good ol’ sneakernet:
Packets of information can be distributed via SMS and mobile 3G but also pieces of paper, USB sticks and Bluetooth.
Connectivity isn’t binary. Long live the papernet!
Friday, April 22nd, 2011
A look at our inbuilt confirmation biases.
Friday, February 23rd, 2007
Jason Kottke on the still-ludicrous imbalance at most tech conferences. This issue isn’t going to go away. Conference organisers need to stop being part of the problem and become part of the solution.