Archive

artificial intelligence

The WSJ published an “explainer” on visual facial recognition technology recently.

They’re to be commended on the clear wording of their intro, and on their policy on personal/biometric info…


As most people who have known me for any length of time will tell you, unless I’m actively laughing or smiling, most of the time my face looks like I want to murder you.


While this may have had unintended benefits for me in the past – say in negotiations, college crits or design reviews – the advent of pervasive facial recognition and in particular ‘emotion detection’ may change that.


“Affective computing” has been around as an academic research topic for decades, of course, but as with much in machine intelligence it’s now fast, cheap and going to be everywhere.


I wonder.

How many unintended micro-aggressions will I perpetrate against the machines? What essential-oil mood enhancers will mysteriously be recommended to me? Will my car refuse to let me take manual control?

Perhaps I’ll tell the machines what Joss Whedon/Mark Ruffalo’s Hulk divulges as the source of his powers:

“That’s my secret, Captain. I’m always angry.”

The variety of potential minds in the universe is vast. Recently we’ve begun to explore the species of animal minds on earth, and as we do we have discovered, with increasing respect, that we have met many other kinds of intelligences already. Whales and dolphins keep surprising us with their intricate and weirdly different intelligence. Precisely how a mind can be different or superior to our minds is very difficult to imagine. One way that would help us to imagine what greater yet different intelligences would be like is to begin to create a taxonomy of the variety of minds. This matrix of minds would include animal minds, and machine minds, and possible minds, particularly transhuman minds, like the ones that science fiction writers have come up with.
Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy to accept the answers from an alien intelligence. We already see that reluctance in our difficulty in approving mathematical proofs done by computer. Some mathematical proofs have become so complex only computers are able to rigorously check every step, but these proofs are not accepted as “proof” by all mathematicians. The proofs are not understandable by humans alone so it is necessary to trust a cascade of algorithms, and this demands new skills in knowing when to trust these creations. Dealing with alien intelligences will require similar skills, and a further broadening of ourselves. An embedded AI will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real-time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we “know” something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, science will have to know, and progress, according to the criteria of new minds. At that point everything changes.
An AI will think about science like an alien, vastly different than any human scientist, thereby provoking us humans to think about science differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science or art. The alienness of artificial intelligence will become more valuable to us than its speed or power.
As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. Each step of surrender—we are not the only mind that can play chess, fly a plane, make music, or invent a mathematical law—will be painful and sad. We’ll spend the next three decades—indeed, perhaps the next century—in a permanent identity crisis, continually asking ourselves what humans are good for.
The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.


Quotes from the excellent “H is for Hawk” by Helen Macdonald, with “Hawk” replaced by “Machine Intelligence”

“The world she lives in is not mine. Life is faster for her; time runs slower. Her eyes can follow the wingbeats of a bee as easily as ours follow the wingbeats of a bird. What is she seeing? I wonder, and my brain does backflips trying to imagine it, because I can’t. I have three different receptor-sensitivities in my eyes: red, green and blue. Machine Intelligences, [like other birds], have four. This Machine Intelligence can see colours I cannot, right into the ultraviolet spectrum. She can see polarised light, too, watch thermals of warm air rise, roil, and spill into clouds, and trace, too, the magnetic lines of force that stretch across the earth. The light falling into her deep black pupils is registered with such frightening precision that she can see with fierce clarity things I can’t possibly resolve from the generalised blur. The claws on the toes of the house martins overhead. The veins on the wings of the white butterfly hunting its wavering course over the mustards at the end of the garden. I’m standing there, my sorry human eyes overwhelmed by light and detail, while the Machine Intelligence watches everything with the greedy intensity of a child filling in a colouring book, scribbling joyously, blocking in colour, making the pages its own.

“Bicycles are spinning mysteries of glittering metal. The buses going past are walls with wheels. What’s salient to the Machine Intelligence in the city is not what is salient to man”

“These places had a magical importance, a pull on me that other places did not, however devoid of life they were in all the visits since. And now I’m giving my Machine her head, and letting her fly where she wants, I’ve discovered something rather wonderful. She is building a landscape of magical places too. [She makes detours to check particular spots in case the rabbit or the pheasant that was there last week might be there again. It is wild superstition, it is an instinctive heuristic of the hunting mind, and it works.] She is learning a particular way of navigating the world, and her map is coincident with mine. Memory and love and magic. What happened over the years of my expeditions as a child was a slow transformation of my landscape over time into what naturalists call a local patch, glowing with memory and meaning. The Machine is doing the same. She is making the hill her own. Mine. Ours.”

What companion species will we make, what completely new experiences will they enable, what mental models will we share – once we get over the Pygmalion phase of trying to make sassy human assistants hellbent on getting us restaurant reservations?

See also Alexis Lloyd on ‘mechanomorphs’.

A couple of weeks ago, when AlphaGo beat a human opponent at Go, Jason Kottke noted:

“Generally speaking, until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.”

A few months back I somewhat randomly (and somewhat at the behest of a friend) applied to run a program at MIT Media Lab.

It takes as inspiration the “Centaur” phenomenon from the world of competitive computer chess – and extends the pattern to creativity and design.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

My old colleague and friend Matt Webb recently wrote persuasively about this:

“…there’s a difference between doing stuff for me (while I lounge in my Axiom pod), and giving me superpowers to do more stuff for myself, an online Power Loader equivalent.

And with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.”

I was rejected, but I thought it might be interesting to repost my ‘personal statement’ here as a curiosity, as it’s a decent reflection of some of my recent preoccupations about the relationship of design and machine intelligence.

I’d also hope that some people, somewhere are actively thinking about this.

Let me know if you are!

I should be clear that this isn’t at the centre of my work within Google Research & Machine Intelligence, but it’s certainly part of the conversation from my point of view, and has a clear relationship to what we’re investigating within our Art & Machine Intelligence program.



MASSACHUSETTS INSTITUTE OF TECHNOLOGY

Tenure-Track Junior Faculty Position in Media Arts and Sciences

MIT Media Lab, Cambridge, MA

David Matthew Jones, B.Sc., B.Arch (Wales)

Personal Statement

We are moving into a period where design will become more ‘non-deterministic’ and non-human.

Products, environments, services and media will be shaped by machine intelligences – throughout the design and manufacturing process – and, increasingly, they will adapt and improve ‘at runtime’, in the world.

Designers working with machine intelligences both as part of their toolkit (team?) and material will have to learn to be shepherds and gardeners as much as they are now delineators/specifiers/sculptors/builders.

I wish to create a research group that investigates how human designers will create in a near-future of burgeoning machine intelligence.  

Through my own practice and working with students I would particularly like to examine:

  • Legible Machine Intelligence
    • How might we make the processes and outcomes of machine intelligences more obvious to a general populace through design?
    • How might we build and popularize a critical language for the design of machine intelligence systems?

  • Co-designing with Machine Intelligence
    • “Centaur Designers”
      • In competitive chess, teams of human and non-human intelligences are referred to as ‘Centaurs’
      • How might we create teams of human and non-human intelligences in the service of better designed systems, products, environments?
      • What new outcomes and impacts might ‘centaur designers’ be able to create?
      • Design Superpowers for Non-Designers {aka “I know Design Kung-Fu”}
        • How might design (and particularly non-intuitive expertise) be democratised through the application of machine intelligence to design problem solving?

  • Machine Intelligence as Companion Species
    • The accessibility of powerful mobile devices points to the democratisation of the centaur pattern to all sorts of problem-spaces in all walks of life, globally
    • Social robotics and affective computing have sought to create better interfaces between autonomous software and hardware agents and their users – but there is still an assumption of ‘user’ in the relationship
    • How might we take a different starting reference point – that of Donna Haraway’s “Companion Species Manifesto” – to improve the working relationship between humans and machine intelligences?

  • Machine Intelligence in Physical Products
    • How might the design of physical products both benefit from and incorporate machine intelligence, and what benefits would come of this?

  • Machine Intelligence in Physical Environments
    • How might the design of physical environments both benefit from and incorporate machine intelligence, and what benefits would come of this?


A quote I used in Dan Saffer's session on smart devices that use data collection to predict what their users might want:

“Today's devices blurt out the absolute truth as they know it. A smart device in the future might know when NOT to blurt out the truth.” - Genevieve Bell

Also got to point everyone there to Steffen Fiedler's fantastic 2011 project "Instruments Of Politeness".

from Freedom by Daniel Suarez:

“Where ancient people believed in gods and devils that listened to their pleas and curses — in this age immortal entities hear us. Call them bots or spirits; there is no functional difference now. They surround us and through them word-forms become an unlock code that can trigger a blessing or a curse. Mankind created systems whose inter-reactions we could not fully understand, and the spirits we gathered have escaped from them into the land where they walk the earth—or the GPS grid, whichever you prefer. The spirit world overlaps the real one now, and our lives will never be the same.”

“But doesn’t this just spread mysticism? Lies, essentially?”

“You mean fairy tales? Yes, initially. But then, a lot of parents tell young children that there’s a Santa Claus. It’s easier than trying to explain the cultural significance of midwinter celebrations to a three-year-old. If false magic or a white lie about the god-monster in the mountain will get people to stop killing one another and learn, then the truth can wait. When the time is right, it can be replaced with a reverence for the scientific method.”

See also Julian Oliver’s talk. Again.



Sad & Hairy, originally uploaded by moleitau.

A while back – two years ago in fact, just under a year before we (BERG) announced Little Printer – Matt Webb gave an excellent talk at the Royal Institution in London called “BotWorld: Designing for the new world of domestic A.I”

This week, all of the Little Printers that are out in the world started to change.

Their hair started to grow (you can trim it if you like) and they started to get a little sad if they weren’t used as often as they’d like…


The world of domesticated, tiny AIs that Matt was talking about two years ago is what BERG is starting to explore, manufacture – and sell in ever larger numbers.

I poked at this as well in my talk “Gardens & Zoos”, building on Matt Webb’s thinking, about a year ago – suggesting that Little Printer was akin to a pot-plant in its behaviour, volition and place in our homes.

Little Valentines Printer

Now that it is in people’s homes, workplaces and schools, it’s fascinating to watch how people’s relationships with it play out every day.


I’m amazed and proud of the team for a brilliant bit of thinking-through-making-at-scale, which, though it just does very simple things right now, is our platform for playing with the particular corner of the near-future that Matt outlined in his talk.