Link tags: machinelearning

Artifice and Intelligence

Whatever the merit of the scientific aspirations originally encompassed by the term “artificial intelligence,” it’s a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize.

Do “cloud” next!

morals in the machine | The Roof is on Phire

We are so excited by the idea of machines that can write, and create art, and compose music, with seemingly little regard for how many wells of creativity sit untapped because many of us spend the best hours of our days toiling away, and even more can barely fulfill basic needs for food, shelter, and water. I can’t help but wonder how rich our lives could be if we focused a little more on creating conditions that enable all humans to exercise their creativity as much as we would like robots to be able to.

A visual introduction to machine learning

I like the split-screen animated format for explaining this topic.

Uppestcase and Lowestcase Letters: Advances in Derp Learning

A genuinely interesting (and droll) deep dive into derp learning …for typography!

Artificial Intelligence: Threat or Menace? - Charlie’s Diary

I am not a believer in the AI singularity — the rapture of the nerds — that is, in the possibility of building a brain-in-a-box that will self-improve its own capabilities until it outstrips our ability to keep up. What CS professor and fellow SF author Vernor Vinge described as “the last invention humans will ever need to make”. But I do think we’re going to keep building more and more complicated systems that are opaque rather than transparent, and that launder our unspoken prejudices and encode them in our social environment. As our widely-deployed neural processors get more powerful, the decisions they take will become harder and harder to question or oppose. And that’s the real threat of AI — not killer robots, but “computer says no” without recourse to appeal.

AI Weirdness • Play AI Dungeon 2. Become a dragon. Eat the moon.

After reading this account of a wonderfully surreal text adventure game, you’ll probably want to play AI Dungeon 2:

A PhD student named Nathan trained the neural net on classic dungeon crawling games, and playing it is strangely surreal, repetitive, and mesmerizing, like dreaming about playing one of the games it was trained on.

To decarbonize we must decomputerize: why we need a Luddite revolution | Technology | The Guardian

Decomputerization doesn’t mean no computers. It means that not all spheres of life should be rendered into data and computed upon. Ubiquitous “smartness” largely serves to enrich and empower the few at the expense of the many, while inflicting ecological harm that will threaten the survival and flourishing of billions of people.

Norbert Wiener’s Human Use of Human Beings is more relevant than ever.

What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.

Unchained: A story of love, loss, and blockchain - MIT Technology Review

A near-future sci-fi short by Hannu Rajaniemi that’s right on the zeitgeist money.

The app in her AR glasses showed the car icon crawling along the winding forest road. In a few minutes, it would reach the sharp right turn where the road met the lake. The turn was marked by a road sign she had carefully defaced the previous day, with tiny dabs of white paint. Nearly invisible to a human, they nevertheless fooled image recognition nets into classifying the sign as a tree.
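
That defaced road sign isn’t just a fictional flourish: tiny, carefully placed perturbations really can flip an image classifier’s output. Here’s a minimal sketch of the idea, assuming PyTorch and some pretrained classifier. The fast gradient sign method shown is just one well-known illustrative attack, not necessarily the one the story imagines:

```python
# A rough sketch of adversarial perturbation, assuming PyTorch and a
# pretrained classifier `model` that maps a (1, 3, H, W) image to logits.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    # Nudge every pixel by ±epsilon in whichever direction most
    # increases the classification loss for the true label.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The change is nearly invisible to a human eye, but it is aimed
    # at the model's decision boundary, not at human perception.
    return (image + epsilon * image.grad.sign()).detach().clamp(0, 1)
```

With pixel values in the 0–1 range, an epsilon of around 0.01 is typically imperceptible to people while still being enough to misclassify a sign as, say, a tree.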

Disturbances #16: Digital Dust

From smart dust and spimes, through to online journaling and social media, to machine learning, big data and digital preservation…

Is the archive where information goes to live forever, or where data goes to die?

Ways to think about machine learning — Benedict Evans

This strikes me as a sensible way of thinking about machine learning: it’s like when we got relational databases—suddenly we could do more, faster, and more easily …but it doesn’t require us to treat the technology like it’s magic.

An important parallel here is that though relational databases had economy of scale effects, there were limited network or ‘winner takes all’ effects. The database being used by company A doesn’t get better if company B buys the same database software from the same vendor: Safeway’s database doesn’t get better if Caterpillar buys the same one. Much the same actually applies to machine learning: machine learning is all about data, but data is highly specific to particular applications. More handwriting data will make a handwriting recognizer better, and more gas turbine data will make a system that predicts failures in gas turbines better, but the one doesn’t help with the other. Data isn’t fungible.

Derek Powazek - AI is Not a Community Management Strategy

A really excellent piece from Derek on the history of community management online.

You have to decide what your platform is for and what it’s not for. And, yeah, that means deciding who it’s for and who it’s not for (hint: it’s not bots, nor nazis). That’s not a job you can outsource. The tech won’t do it for you. Not just because it’s your job, but because outsourcing it won’t work. It never does.

[Essay] Known Unknowns | New Dark Age by James Bridle | Harper’s Magazine

A terrific cautionary look at the history of machine learning and artificial intelligence from the new laugh-a-minute book by James.

Artificial Intelligence for more human interfaces | Christian Heilmann

An even-handed assessment of the benefits and dangers of machine learning.

Fair Is Not the Default - Library - Google Design

Why building inclusive tech takes more than good intentions.

When we run focus groups, we joke that it’s only a matter of seconds before someone mentions Skynet or The Terminator in the context of artificial intelligence. As if we’ll go to sleep one day and wake up the next with robots marching to take over. Few things could be further from the truth. Instead, it’ll be human decisions that we made yesterday, or make today and tomorrow that will shape the future. So let’s make them together, with other people in mind.

Turning Design Mockups Into Code With Deep Learning - FloydHub Blog

Training a neural network to do front-end development.

I didn’t understand any of this.

Trends in Digital Tech for 2018 - Peter Gasston

Peter looks into his crystal ball for 2018 and sees computers with eyes, computers with ears, and computers with brains.

Design in the Era of the Algorithm | Big Medium

The transcript of Josh’s fantastic talk on machine learning, voice, data, APIs, and all the other tools of algorithmic design:

The design and presentation of data is just as important as the underlying algorithm. Algorithmic interfaces are a huge part of our future, and getting their design right is critical—and very, very hard to do.

Josh put together ten design principles for conceiving, designing, and managing data-driven products. I’ve added them to my collection.

  1. Favor accuracy over speed
  2. Allow for ambiguity
  3. Add human judgment
  4. Advocate sunshine
  5. Embrace multiple systems
  6. Make it easy to contribute (accurate) data
  7. Root out bias and bad assumptions
  8. Give people control over their data
  9. Be loyal to the user
  10. Take responsibility