Centaurs not Butlers

A couple of weeks ago, when AlphaGo beat a human opponent at Go, Jason Kottke noted:

“Generally speaking, until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.”

A few months back I somewhat randomly (and somewhat at the behest of a friend) applied to run a program at MIT Media Lab.

It takes as inspiration the “Centaur” phenomenon from the world of competitive computer chess – and extends the pattern to creativity and design.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

My old colleague and friend Matt Webb recently wrote persuasively about this:

“…there’s a difference between doing stuff for me (while I lounge in my Axiom pod), and giving me superpowers to do more stuff for myself, an online Power Loader equivalent.

And with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.”

I was rejected, but I thought it might be interesting to repost my ‘personal statement’ here as a curiosity, as it’s a decent reflection of some of my recent preoccupations with the relationship of design and machine intelligence.

I’d also hope that some people, somewhere, are actively thinking about this.

Let me know if you are!

I should be clear that this is not at the centre of my work within Google Research & Machine Intelligence, but it is certainly part of the conversation from my point of view, and it has a clear relationship to what we’re investigating within our Art & Machine Intelligence program.

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

Tenure-Track Junior Faculty Position in Media Arts and Sciences

MIT Media Lab, Cambridge, MA

David Matthew Jones, B.Sc., B.Arch (Wales)

Personal Statement

We are moving into a period where design will become more ‘non-deterministic’ and non-human.

Products, environments, services and media will be shaped by machine intelligences – throughout the design and manufacturing process – and, increasingly, they will adapt and improve ‘at runtime’, in the world.

Designers working with machine intelligences both as part of their toolkit (team?) and material will have to learn to be shepherds and gardeners as much as they are now delineators/specifiers/sculptors/builders.

I wish to create a research group that investigates how human designers will create in a near-future of burgeoning machine intelligence.  

Through my own practice and working with students I would particularly like to examine:

  • Legible Machine Intelligence
    • How might we make the processes and outcomes of machine intelligences more obvious to a general populace through design?
    • How might we build and popularize a critical language for the design of machine intelligence systems?

  • Co-designing with Machine Intelligence
    • “Centaur Designers”
      • In competitive chess, teams of human and non-human intelligences are referred to as ‘Centaurs’
      • How might we create teams of human and non-human intelligences in the service of better designed systems, products, environments?
      • What new outcomes and impacts might ‘centaur designers’ be able to create?
      • Design Superpowers for Non-Designers {aka “I know Design Kung-Fu”}
        • How might design (and particularly non-intuitive expertise) be democratised through the application of machine intelligence to design problem solving?


  • Machine Intelligence as Companion Species
    • The accessibility of powerful mobile devices points to the democratisation of the centaur pattern to of all sorts of problem-spaces in all walks of life, globally
    • Social robotics and affective computing have sought to create better interfaces between autonomous software and hardware agents and their users – but there is still an assumption of ‘user’ in the the relationship
    • How might we take a different starting reference point – that of Donna Haraway’s “Companion Species Manifesto” to improve the working relationship between humans and machine intelligences

  • Machine Intelligence in Physical Products
    • How might the design of physical products both benefit from and incorporate machine intelligence, and what benefits would come of this?

  • Machine Intelligence in Physical Environments
    • How might the design of physical environments both benefit from and incorporate machine intelligence, and what benefits would come of this?

8 thoughts on “Centaurs not Butlers”

  1. Hi Matt,

    You would’ve been brilliant at the Lab, but it’s probably best (for you) that you weren’t hired. It’s a tough place to human.

    A few thoughts:

    1) The best class I took at the lab was called Interactive Machine Learning. This is the class website: http://iml.media.mit.edu/ (you may notice my face in the first example project video there) and particularly don’t miss the syllabus: http://iml.media.mit.edu/schedule/

    It’s a small but active academic field that lives exactly at the intersection between HCI and ML. One of the most impressive people that came and spoke was Saleema Amershi who’s now at MSR: http://research.microsoft.com/en-us/um/people/samershi/

    She’s done some really amazing stuff like this Regroup paper:

    (PDF: amershiCHI2012_ReGroup.pdf)

    Quick summary: You want to invite people to an event on Facebook – but just the right people. So you’re presented with all of your friends and can check the ones you want to invite. Those you skip are treated as a “no” label and those you select as a “yes” label. The system builds a classifier on the fly from these labels, with each friend’s relationship to you (and other stats) as the feature vector; the people you’re most likely to want to invite bubble to the top, and those you won’t want sink off the bottom. You never explicitly “make a classifier”.
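
    A minimal sketch of that interaction loop, with invented feature names and an arbitrary model choice (an illustration, not Amershi et al.’s actual system):

    ```python
    # Stand-in for the ReGroup loop: implicit yes/no labels retrain a
    # classifier on the fly and re-rank the remaining friends.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # Hypothetical per-friend features: [mutual_friends, msgs_last_month, same_city]
    friends = {
        "alice": np.array([12, 30, 1]),
        "bob":   np.array([3, 0, 0]),
        "carol": np.array([8, 12, 1]),
        "dave":  np.array([1, 1, 0]),
        "erin":  np.array([15, 25, 1]),
    }
    labels = {}  # name -> 1 (selected) or 0 (skipped), gathered as the user works

    def rerank():
        """Refit on the labels collected so far; rank unlabeled friends by P(invite)."""
        if len(set(labels.values())) < 2:
            return sorted(friends)  # need at least one yes and one no to fit
        X = np.array([friends[n] for n in labels])
        y = np.array([labels[n] for n in labels])
        clf = GaussianNB().fit(X, y)
        rest = [n for n in friends if n not in labels]
        p = clf.predict_proba(np.array([friends[n] for n in rest]))[:, 1]
        return [n for _, n in sorted(zip(p, rest), reverse=True)]

    labels["alice"], labels["bob"] = 1, 0  # one selection, one skip
    print(rerank())  # likely invitees bubble up, e.g. ['erin', 'carol', 'dave']
    ```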

    I think the biggest red herring in this stuff right now is the explicit “embodiment” of the “AI” agent — even if that’s in software form. This stuff is going to melt and run into the cracks and crevices of UI and experience, which is still largely going to look like tapping things into websites and apps.

    One of the other big lessons from that class was that algorithms that sacrifice legibility for improved accuracy/precision/recall tend to perform worse in real systems, because humans can’t improve them and don’t trust their results. Systems that can explain _why_ they made a particular prediction are more trusted by humans, and they receive better feedback in the form of corrections and new labels for the examples they get wrong – and hence improve faster.
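
    To make that concrete, here’s a toy sketch (mine, not from the class) of the kind of “why” a legible model can give: walking the decision path behind a single decision-tree prediction.

    ```python
    # A decision tree can narrate the path behind one prediction:
    # exactly the "why" that a black box can't give its users.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

    x = data.data[100:101]  # one example to explain
    t = tree.tree_
    for node in tree.decision_path(x).indices:  # nodes visited, root to leaf
        if t.children_left[node] == t.children_right[node]:  # leaf
            print("=> predict", data.target_names[tree.predict(x)[0]])
        else:
            f, thr = t.feature[node], t.threshold[node]
            op = "<=" if x[0, f] <= thr else ">"
            print(f"{data.feature_names[f]} = {x[0, f]:.1f} {op} {thr:.2f}")
    ```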

    We had a totally amazing assignment where we had to pick a machine learning algorithm and analyze its design affordances. Here’s me doing Random Decision Forests:

    http://urbanhonking.com/ideasfordozens/2013/10/25/random-decision-forests-interaction-affordances/

    All of this is being lost in the current Reality Distortion Field the Deep Learning Cabal has cast over the world, in which measures of progress are based (nearly exclusively) on abstract academic benchmarks and artificial problems like Go. The class opened with this fantastic Machine Learning That Matters paper: http://teamcore.usc.edu/WeeklySeminar/Aug31.pdf (totally human readable and absolutely vital.)

    2) “Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it”

    Kottke misses something big about things like the illegibility of AlphaGo’s moves. Part of the issue is certainly what I discussed above about algorithms that can explain the “why” of their predictions vs. “black boxes” like RNNs/CNNs/RL etc. But that incomprehensibility only holds down at the micro level of the individual moves. In a deep-dream-like manner, we should be able to get these systems to spit out the traces of their learning at a systems level, in a way we can start to comprehend.
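
    One hedged sketch of what that could mean mechanically: deep-dream-style gradient ascent on the input of a network (here a tiny untrained stand-in, emphatically not AlphaGo) to surface the patterns a unit has learned to respond to.

    ```python
    # Optimize the *input* to maximally excite the network's output unit,
    # surfacing what the model has learned to value. The tiny untrained
    # net and the 19x19 "board" are stand-ins for illustration only.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(19 * 19, 64), nn.ReLU(), nn.Linear(64, 1))
    net.eval()

    board = torch.zeros(1, 19 * 19, requires_grad=True)
    opt = torch.optim.Adam([board], lr=0.1)

    for _ in range(200):
        opt.zero_grad()
        (-net(board)).sum().backward()  # ascend the output activation
        opt.step()

    # The optimized input is a (crude) systems-level trace of what the net wants.
    print(board.detach().reshape(19, 19))
    ```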

    I think there’s something in the idea that if we build a complex software system like AlphaGo and we can’t get it to inform our systems-level understanding of the field we asked it to operate in, then it is badly designed, regardless of how well it seems to perform right now (which, by the way, Frank Lantz says is about 1-2 dan better than the best human – an incremental overtaking, not orders of magnitude or “non-linear shit” – and this at a point where they’re really pushing into the diminishing-returns part of the scaling curve in terms of GPU resources thrown at the thing).

    Anyway, it’s a really interesting discussion. I sometimes feel like a lone(ly) voice out there, advocating skepticism towards the hyperbolic claims being made in the current wave of AI hype, and trying to refocus the discussion on exactly this question of human augmentation, both for good and ill.

    This thread of thoughts about NSA SKYNET’s terrible use of a Random Decision Forest for terrorist classification led to a gigantic response on Twitter:

    Hope you’re well. Do you come to LA ever?

  2. Reblogged this on mormors-hallon and commented:
    Echoes a discussion we’re having at work at the moment. What support does this field (outlined roughly as above) need in Sweden? What parts of the map are closer to ‘application’ in the real world, as distinct from more speculative theorizing?

  3. Thanks for sharing. I found the article really inspiring.

    One particular thing gave me much to think about. In your MIT programme proposal you mention “Design Superpowers for Non-Designers”. What do you imagine that would be? If I think about it, I see design problem solving as the least AI-able field (and in fact that’s also what some studies on employment and automation tell us… ).

    If I try to describe design thinking in AI terms, I’d say that a big part of the discipline consists of classification/pattern-finding tasks which rely heavily on intuition and creativity. What would a training set for something like that look like? Even identifying the goal is generally not a well-defined problem, and often this is a crucial stage in cracking the brief. What would the “cost function” to optimize in the case of a design brief be?

    I’d love to hear your thoughts on this.
    Thanks,
    Leo

    PS
    I’m not trying to defend my job from robot attacks here. I’m just a designer & coder deeply in love with the matter 🙂
