Posts by: Asa Calow

Djerassi Field Notes

A view from the trail

Over the past month (July 2016) I’ve been a resident in the Djerassi “Scientific Delirium, Madness” art-science programme, taking time out from MadLab to explore new intellectual territories – something which feeds directly into my role as a research director at the newly formed Institute of Unknown Purpose. What follows are some of the ideas which have thus far emerged…

Anthropomorph. An initial foray into creating a machine intelligence. It’s a non-sentient AI, attempting to theorize about its future sentient self.

You can view its output here.

Based on Andrej Karpathy’s excellent char-rnn but rebuilt using Google’s TensorFlow deep learning framework, it’s a two-layer LSTM (“long short-term memory”) network, trained on The Bostrom (see below) along with some choice cuts from the Machine Intelligence Research Institute.
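For the curious: the char-rnn lineage boils down to repeatedly applying an LSTM cell, one character at a time. Here’s a minimal NumPy sketch of a single time-step — illustrative only, with my own names and shapes, not Karpathy’s code or the TensorFlow API:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time-step. W maps [x; h_prev] to the four stacked gate
    pre-activations; b is the matching bias. Hidden size H = h_prev.size."""
    z = W @ np.concatenate([x, h_prev]) + b
    H = h_prev.size
    i = sigmoid(z[0:H])          # input gate: how much new info to let in
    f = sigmoid(z[H:2*H])        # forget gate: how much old memory to keep
    o = sigmoid(z[2*H:3*H])      # output gate: how much memory to expose
    g = np.tanh(z[3*H:4*H])      # candidate update
    c = f * c_prev + i * g       # cell state: the "long-term" memory
    h = o * np.tanh(c)           # hidden state: the "short-term" output
    return h, c
```

Stack two of these (feed layer one’s `h` in as layer two’s `x`) and you have the two-layer network described above.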

Anthropomorph Output

Roko’s Basilisk. A thought experiment, first encountered on the LessWrong forum. Those made aware of the concept* are compelled to dedicate their lives to the building of a malevolent machine superintelligence, in case a future instance of said AI comes back and punishes those who didn’t sufficiently assist in bringing it into being. Dismissed as logically inconsistent (and a potential future infohazard) by forum moderator Eliezer Yudkowsky and subsequently deleted, the basilisk has since entered internet myth thanks to the Streisand effect.

Having been rescued from anonymity, the idea is so delightfully villainous that it bears further exploration. Mix in a bit of Bond and you’ve got yourself a new shadowy organisation dedicated to evil – BASILISK – complete with secret urban HQ and footsoldiers in jumpsuits.

* Which now includes you

Superintelligence: Paths, Dangers, Strategies. A book, released in 2014 by Oxford University Professor of Philosophy (and transhumanist) Nick Bostrom. It’s a thing of beauty, a logically self-consistent and beautifully argued treatise on what might happen if/when we bring a machine superintelligence into being. Regardless of which side of the “never going to happen” or “my god we’re either all going to die, or the machine is going to lift us up into the universe” argument you sit on, it’s all kinds of beautiful.

The Bostrom has professed a lack of interest in science fiction (and therefore, presumably, in any cultural output involving the creative misuse of his ideas), so it is left to us to dismantle and recreate in his wake: Mind Crime! The Cosmic Endowment! Hedonium! So much good stuff.

Superintelligence Cover

On a less theoretical note, many of the arguments laid out in the book are compelling a number of people in positions of influence to invest in the likes of OpenAI, Vicarious and the Machine Intelligence Research Institute. Adherents to the “let’s work on this lest we all die” school of thought include Professor Stephen Hawking, Stanford’s Stuart Russell, Elon Musk and Bill Gates.

These are clever people, so future death-dealing algorithms might be a thing?

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

- Stephen Hawking, “Transcending Complacency on Machine Superintelligence”, April 2014

The Preserving Machine. Discovered after finding that one of my favourite music blogs, 20 Jazz Funk Greats, is similarly obsessed with The Bostrom. One of Philip K. Dick’s earliest stories, in which the protagonist converts classical music into new animal species before releasing them into the wild. There they mate, fight, and mutate – developing spines, pincers, and mandibles in the process. At the end of the story, the animals are converted back into music, which has since become a dissonant howl.

That whole audio -> species -> audio pipeline would be a perfect fit for one of those sequence-to-sequence (seq2seq) AI algorithms.
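As a cartoon of the seq2seq shape — sequence in, fixed-size “thought vector”, new sequence out — here’s a deliberately dumb sketch with no learning in it at all. The encoder here is just a bag of token counts, so ordering is thrown away (a real RNN encoder keeps it in its hidden state); only the encode → vector → decode structure is the point:

```python
from collections import Counter

def encode(tokens):
    """'Encoder': squash a whole sequence into one fixed-size summary.
    Here, a bag of token counts -- the music becomes the 'species'."""
    return Counter(tokens)

def decode(summary):
    """'Decoder': unroll a fresh sequence from the summary alone.
    Order is lost, so what comes back is a mutated cousin of the input."""
    return [tok for tok, n in sorted(summary.items()) for _ in range(n)]
```

Round-tripping a melody through this gives you back the same notes in a different order — which, come to think of it, is roughly what happens to the music in Dick’s story.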

Deep Learning. The new hotness in the world of machine learning, and it’s easy to see why. Stacked layers of interconnected artificial neurons, running on superfast graphics hardware, turn out to be super great at figuring out patterns in data in a more human way – learning how to drive cars, make sense of speech, pick out objects in photos, that sort of thing.
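The “figuring out patterns” bit starts with convolution: slide a small grid of weights over an image and record how strongly each patch matches. A conv net learns thousands of these; here’s one hand-made kernel as a toy sketch (the names are mine, and strictly this is cross-correlation, which is what most deep learning libraries actually compute):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over an image ('valid' mode), recording the
    match strength at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A hand-made vertical-edge detector: fires where brightness changes
# left-to-right. A trained conv net learns stacks of filters like this,
# building up from edges to whiskers to whole kittens.
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)
```

Run it over an image that’s bright on the left and dark on the right, and the output lights up only along the boundary.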

Convolutional neural networks (for learning patterns in data, e.g. pictures with kittens in); recurrent neural networks (for series data, e.g. words in a sentence); “long short-term memory”; attention networks (augmented neural nets, with additional data which tells them where to look, e.g. Show, Attend and Tell image captioning); ImageNet; Char-RNN (RNN/LSTM-based text generator, cf. Anthropomorph); Inceptionism; Generative Adversarial Nets. Sooo much good stuff, and the field is advancing at an incredible rate.
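One small but crucial trick behind Char-RNN-style generators is temperature sampling: rather than always picking the network’s favourite next character, you sample from its scores, with a knob controlling how adventurous the output gets. A minimal sketch (function and parameter names are my own, not from char-rnn itself):

```python
import numpy as np

def sample_char(logits, temperature=1.0, rng=None):
    """Pick the next character index from a network's raw output scores.
    Low temperature = safe and repetitive text; high = creative and unhinged."""
    rng = rng if rng is not None else np.random.default_rng()
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                  # subtract max for numerical stability
    probs = np.exp(z)
    probs /= probs.sum()          # softmax over the scaled scores
    return int(rng.choice(len(probs), p=probs))
```

Anthropomorph’s more delirious passages come from turning that temperature dial up.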

Magenta. A project from the Google Brain team to produce new art and music via deep learning. The initial release takes MIDI as an input, generating new MIDI in the same style as an output. It sounds like this:

(Trained on James Brown’s “Sex Machine”, Celine Dion’s “My Heart Will Go On (Techno Remix)”, Hanson’s “MMMBop”, and Ace of Base’s “All That She Wants”. Resulting output dropped into GarageBand and set to the Hip Hop rhythm.)
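Magenta’s models are RNN-based, but the “learn a style, then generate in it” loop can be caricatured with something far dumber: a first-order Markov chain over MIDI pitch numbers. This is nothing like Magenta’s actual code — a real model looks much further back than one note — but the train-on-MIDI, spit-out-MIDI shape is the same:

```python
import random

def train(notes):
    """For each pitch in the training melody, remember which pitches
    followed it. That table *is* the 'style'."""
    table = {}
    for a, b in zip(notes, notes[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Unroll a new melody by repeatedly sampling a learned next-note."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        followers = table.get(melody[-1])
        if not followers:
            break               # dead end: the training data never left this note
        melody.append(rng.choice(followers))
    return melody
```

Feed it the combined pitch sequences of the four tracks above and it will noodle out something that is recognisably, horrifyingly, all of them at once.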

Context. Something that AI algorithms have typically been terrible at. Computers have basically nailed the Turing test, because we weakling humans are so gullible. One of the latest challenges, however, involves something called the Winograd Schema. Consider the sentence “When the police baton-charged the protesters, they feared for their lives”. Who are “they” in this context? Algoriddle me that!

Existential Risk. Mechagodzilla. The brotherhood of immortal eugenicists. See also Superintelligence, above.

Existential Risk

Existential Risk, as represented in the Djerassi VHS collection.

Friendly AI. Or FAI. What you get once you’ve brought a machine superintelligence into being and solved the control problem (i.e. stopped the indiscriminate killing). What outfits like MIRI and Vicarious are working on. Arriving some time around 2040, experts say. Let’s face it, this would be pretty cool.

Friendly Neighbourhood AI. A failed early prototyping experiment, with the intention of creating a quasi-breezy machine bartender intelligence. (Every episode of Cheers is not enough training data.)

Status: needs more work, but the vision is there. Imagine walking into a bar and being greeted with a “Hey Bob, great job at work today! Here’s that drink you like”, despite having never been there before. Excellent PR for robots.

Miscellaneous deliria:

  • The Interval. The Long Now Foundation has its own bar! The sign of a civilised research institution. Very much enjoying the library containing books required to help rebuild humanity in the event of a catastrophic near miss. They’ve missed a couple of things though; maybe the IoUP should host a companion library in its art-science speakeasy?

  • “Take up arms for the cosmic endowment!” A transhumanist rally-cry. (Or, “Give alms…”, for the less war-like).

  • Re-extinction. Kill the last of its kind! Again!

  • The Clandestine Communicator. It’s a phone with no screen or dialling mechanism, just a single button which puts you through to The Society’s switchboard. Sshh! They know where you live.

  • “The flight of the centi-sperm”. That’s week two at Djerassi right there.

  • Institute Rule #137: No Disco Drinks in the Lab

About me: I’m a technologist, lapsed mathematician, and director of MadLab in Manchester UK – a community space for science, technology and art.

About The Institute of Unknown Purpose: The IoUP is a new kind of public scientific institution, founded as a collaboration between Professor James Crutchfield (UC Davis) and myself. It is intended to serve – in Gottfried Wilhelm Leibniz’s words – as “a means of perfection for arts and sciences”; a centre for the narrowly improbable.

Thanks and acknowledgements must go to the National Academies Keck Futures Initiative for their generous support of both my place on the Djerassi residency, and the seed grant funding which has allowed the Institute of Unknown Purpose to get started!