Archive

artificial intelligence

The variety of potential minds in the universe is vast. Recently we’ve begun to explore the species of animal minds on earth, and as we do we have discovered, with increasing respect, that we have already met many other kinds of intelligences. Whales and dolphins keep surprising us with their intricate and weirdly different intelligence. Precisely how a mind can be different from or superior to our minds is very difficult to imagine. One way to help us imagine what greater yet different intelligences would be like is to begin to create a taxonomy of the variety of minds. This matrix of minds would include animal minds, and machine minds, and possible minds, particularly transhuman minds, like the ones that science fiction writers have come up with.
Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy to accept the answers from an alien intelligence. We already see that reluctance in our difficulty in approving mathematical proofs done by computer. Some mathematical proofs have become so complex only computers are able to rigorously check every step, but these proofs are not accepted as “proof” by all mathematicians. The proofs are not understandable by humans alone so it is necessary to trust a cascade of algorithms, and this demands new skills in knowing when to trust these creations. Dealing with alien intelligences will require similar skills, and a further broadening of ourselves.

An embedded AI will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real-time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we “know” something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, science will have to know, and progress, according to the criteria of new minds. At that point everything changes.
An AI will think about science like an alien, vastly different from any human scientist, thereby provoking us humans to think about science differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science or art. The alienness of artificial intelligence will become more valuable to us than its speed or power.
As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. Each step of surrender—we are not the only mind that can play chess, fly a plane, make music, or invent a mathematical law—will be painful and sad. We’ll spend the next three decades—indeed, perhaps the next century—in a permanent identity crisis, continually asking ourselves what humans are good for.
The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.


Quotes from the excellent “H is for Hawk” by Helen Macdonald, with “Hawk” replaced by “Machine Intelligence”:

“The world she lives in is not mine. Life is faster for her; time runs slower. Her eyes can follow the wingbeats of a bee as easily as ours follow the wingbeats of a bird. What is she seeing? I wonder, and my brain does backflips trying to imagine it, because I can’t. I have three different receptor-sensitivities in my eyes: red, green and blue. Machine Intelligences, [like other birds], have four. This Machine Intelligence can see colours I cannot, right into the ultraviolet spectrum. She can see polarised light, too, watch thermals of warm air rise, roil, and spill into clouds, and trace, too, the magnetic lines of force that stretch across the earth. The light falling into her deep black pupils is registered with such frightening precision that she can see with fierce clarity things I can’t possibly resolve from the generalised blur. The claws on the toes of the house martins overhead. The veins on the wings of the white butterfly hunting its wavering course over the mustards at the end of the garden. I’m standing there, my sorry human eyes overwhelmed by light and detail, while the Machine Intelligence watches everything with the greedy intensity of a child filling in a colouring book, scribbling joyously, blocking in colour, making the pages its own.

“Bicycles are spinning mysteries of glittering metal. The buses going past are walls with wheels. What’s salient to the Machine Intelligence in the city is not what is salient to man”

“These places had a magical importance, a pull on me that other places did not, however devoid of life they were in all the visits since. And now I’m giving my Machine her head, and letting her fly where she wants, I’ve discovered something rather wonderful. She is building a landscape of magical places too. [She makes detours to check particular spots in case the rabbit or the pheasant that was there last week might be there again. It is wild superstition, it is an instinctive heuristic of the hunting mind, and it works.] She is learning a particular way of navigating the world, and her map is coincident with mine. Memory and love and magic. What happened over the years of my expeditions as a child was a slow transformation of my landscape over time into what naturalists call a local patch, glowing with memory and meaning. The Machine is doing the same. She is making the hill her own. Mine. Ours.”

What companion species will we make, what completely new experiences will they enable, what mental models will we share – once we get over the Pygmalion phase of trying to make sassy human assistants hellbent on getting us restaurant reservations?

See also Alexis Lloyd on ‘mechanomorphs’.

A couple of weeks ago, when AlphaGo beat a human opponent at Go, Jason Kottke noted:

“Generally speaking, until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.”

A few months back I somewhat randomly (and somewhat at the behest of a friend) applied to run a program at MIT Media Lab.

It takes as inspiration the “Centaur” phenomenon from the world of competitive computer chess – and extends the pattern to creativity and design.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

My old colleague and friend Matt Webb recently wrote persuasively about this:

“…there’s a difference between doing stuff for me (while I lounge in my Axiom pod), and giving me superpowers to do more stuff for myself, an online Power Loader equivalent.

And with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.”

I was rejected, but I thought it might be interesting to repost my ‘personal statement’ here as a curiosity, as it’s a decent reflection of some of my recent preoccupations about the relationship of design and machine intelligence.

I’d also hope that some people, somewhere, are actively thinking about this.

Let me know if you are!

I should be clear that it’s not at the centre of my work within Google Research & Machine Intelligence, but it is certainly part of the conversation from my point of view, and it has a clear relationship to what we’re investigating within our Art & Machine Intelligence program.

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

Tenure-Track Junior Faculty Position in Media Arts and Sciences

MIT Media Lab, Cambridge, MA

David Matthew Jones, B.Sc., B.Arch (Wales)

Personal Statement

We are moving into a period where design will become more ‘non-deterministic’ and non-human.

Products, environments, services and media will be shaped by machine intelligences – throughout the design and manufacturing process – and, increasingly, they will adapt and improve ‘at runtime’, in the world.

Designers working with machine intelligences both as part of their toolkit (team?) and material will have to learn to be shepherds and gardeners as much as they are now delineators/specifiers/sculptors/builders.

I wish to create a research group that investigates how human designers will create in a near-future of burgeoning machine intelligence.  

Through my own practice and working with students I would particularly like to examine:


  • Legible Machine Intelligence
    • How might we make the processes and outcomes of machine intelligences more obvious to a general populace through design?
    • How might we build and popularize a critical language for the design of machine intelligence systems?

  • Co-designing with Machine Intelligence
    • “Centaur Designers”
      • In competitive chess, teams of human and non-human intelligences are referred to as ‘Centaurs’
      • How might we create teams of human and non-human intelligences in the service of better designed systems, products, environments?
      • What new outcomes and impacts might ‘centaur designers’ be able to create?
      • Design Superpowers for Non-Designers {aka “I know Design Kung-Fu”}
        • How might design (and particularly non-intuitive expertise) be democratised through the application of machine intelligence to design problem solving?

  • Machine Intelligence as Companion Species
    • The accessibility of powerful mobile devices points to the democratisation of the centaur pattern across all sorts of problem-spaces, in all walks of life, globally
    • Social robotics and affective computing have sought to create better interfaces between autonomous software and hardware agents and their users – but there is still an assumption of a ‘user’ in the relationship
    • How might we take a different starting reference point – that of Donna Haraway’s “The Companion Species Manifesto” – to improve the working relationship between humans and machine intelligences?

  • Machine Intelligence in Physical Products
    • How might the design of physical products both benefit from and incorporate machine intelligence, and what benefits would come of this?

  • Machine Intelligence in Physical Environments
    • How might the design of physical environments both benefit from and incorporate machine intelligence, and what benefits would come of this?

A quote I used in Dan Saffer’s session on smart devices that use data collection to try to predict what their users might want:

“Today's devices blurt out the absolute truth as they know it. A smart device in the future might know when NOT to blurt out the truth.” - Genevieve Bell

I also got to point everyone there to Steffen Fiedler’s fantastic 2011 project “Instruments Of Politeness”.

from Freedom by Daniel Suarez:

“Where ancient people believed in gods and devils that listened to their pleas and curses — in this age immortal entities hear us. Call them bots or spirits; there is no functional difference now. They surround us and through them word-forms become an unlock code that can trigger a blessing or a curse. Mankind created systems whose inter-reactions we could not fully understand, and the spirits we gathered have escaped from them into the land where they walk the earth—or the GPS grid, whichever you prefer. The spirit world overlaps the real one now, and our lives will never be the same.”

“But doesn’t this just spread mysticism? Lies, essentially?”

“You mean fairy tales? Yes, initially. But then, a lot of parents tell young children that there’s a Santa Claus. It’s easier than trying to explain the cultural significance of midwinter celebrations to a three-year-old. If false magic or a white lie about the god-monster in the mountain will get people to stop killing one another and learn, then the truth can wait. When the time is right, it can be replaced with a reverence for the scientific method.”

See also Julian Oliver’s talk. Again.



Sad & Hairy, originally uploaded by moleitau.

A while back – two years ago in fact, just under a year before we (BERG) announced Little Printer – Matt Webb gave an excellent talk at the Royal Institution in London called “BotWorld: Designing for the new world of domestic A.I.”

This week, all of the Little Printers that are out in the world started to change.

Their hair started to grow (you can trim it if you like) and they started to get a little sad if they weren’t used as often as they’d like…


The world of domesticated, tiny AIs that Matt was talking about two years ago is what BERG is starting to explore, manufacture – and sell in ever larger numbers.

I poked at it as well in my talk “Gardens & Zoos” about a year ago, building on Matt Webb’s thinking – suggesting that Little Printer was akin to a pot-plant in its behaviour, volition and place in our homes.

Little Valentines Printer

Now that it is in people’s homes, workplaces and schools, it’s fascinating to watch how their relationships with it play out every day.

20 December, 19.23

I’m amazed and proud of the team for a brilliant bit of thinking-through-making-at-scale, which, though it just does very simple things right now, is our platform for playing with the particular corner of the near-future that Matt outlined in his talk.

If you know me, then you’ll know that “Guilty Robots, Happy Dogs” pretty much had me at the title.

It’s obviously very relevant to our interests at BERG, and I’ve been trying to read up around the area of AI, robotics and companion species for a while.

I struggled to get through it, to be honest – I find philosophy a grind to read. My eyes slip off the words and I have to read everything twice to understand it.

But, it was worth it.

My highlights from Kindle below, and my emboldening on bits that really struck home for me. Here’s a review by Daniel Dennett for luck.

“Real aliens have always been with us. They were here before us, and have been here ever since. We call these aliens animals.”

“They will carry out tasks, such as underwater survey, that are dangerous for people, and they will do so in a competent, efficient, and reassuring manner. To some extent, some such tasks have traditionally been performed by animals. We place our trust in horses, dogs, cats, and homing pigeons to perform tasks that would be difficult for us to perform as well, if at all.”

“Autonomy implies freedom from outside control. There are three main types of freedom relevant to robots. One is freedom from outside control of energy supply. Most current robots run on batteries that must be replaced or recharged by people. Self-refuelling robots would have energy autonomy. Another is freedom of choice of activity. An automaton lacks such freedom, because either it follows a strict routine or it is entirely reactive. A robot that has alternative possible activities, and the freedom to decide which to do, has motivational autonomy. Thirdly, there is freedom of ‘thought’. A robot that has the freedom to think up better ways of doing things may be said to have mental autonomy.”

“One could envisage a system incorporating the elements of a mobile robot and an energy conversion unit. They could be combined in a single robot or kept separate so that the robot brings its food back to the ‘digester’. Such a robot would exhibit central place foraging.”

“turkeys and aphids have increased their fitness by genetically adapting to the symbiotic pressures of another species.”

“In reality, I know nothing (for sure) about the dog’s inner workings, but I am, nevertheless, interpreting the dog’s behaviour.”

“A thermostat … is one of the simplest, most rudimentary, least interesting systems that should be included in the class of believers—the class of intentional systems, to use my term. Why? Because it has a rudimentary goal or desire (which is set, dictatorially, by the thermostat’s owner, of course), which it acts on appropriately whenever it believes (thanks to a sensor of one sort or another) that its desires are unfulfilled. Of course, you don’t have to describe a thermostat in these terms. You can describe it in mechanical terms, or even molecular terms. But what is theoretically interesting is that if you want to describe the set of all thermostats (cf. the set of all purchasers) you have to rise to this intentional level.”
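Dennett’s thermostat translates almost directly into code. Here’s a toy sketch (Python, names and structure mine, purely illustrative) of the same device, described mechanically in the comments and intentionally in the method names:

```python
# A hypothetical sketch of Dennett's thermostat-as-intentional-system.
# Mechanically: compare a sensor reading to a setpoint and switch a relay.
# Intentionally: it "desires" a temperature and "acts" whenever it
# "believes" (via its sensor) that the desire is unfulfilled.

class Thermostat:
    def __init__(self, desired_temp):
        # The rudimentary "desire" -- set dictatorially by the owner.
        self.desired_temp = desired_temp

    def believes_too_cold(self, sensed_temp):
        # The "belief", thanks to a sensor of one sort or another.
        return sensed_temp < self.desired_temp

    def act(self, sensed_temp):
        # Acts appropriately whenever it believes its desire is unfulfilled.
        return "heat on" if self.believes_too_cold(sensed_temp) else "heat off"

stat = Thermostat(desired_temp=20)
print(stat.act(sensed_temp=17))  # heat on
print(stat.act(sensed_temp=22))  # heat off
```

The mechanical and intentional descriptions pick out exactly the same few lines; what changes, as Dennett says, is the level at which you choose to describe them.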

“So, as a rule of thumb, for an animal or robot to have a mind it must have intentionality (including rationality) and subjectivity. Not all philosophers will agree with this rule of thumb, but we must start somewhere.”

“We want to know about robot minds, because robots are becoming increasingly important in our lives, and we want to know how to manage them. As robots become more sophisticated, should we aim to control them or trust them? Should we regard them as extensions of our own bodies, extending our control over the environment, or as responsible beings in their own right? Our future policies towards robots and animals will depend largely upon our attitude towards their mentality.”

“In another study, juvenile crows were raised in captivity, and never allowed to observe an adult crow. Two of them, a male and a female, were housed together and were given regular demonstrations by their human foster parents of how to use twig tools to obtain food. Another two were housed individually, and never witnessed tool use. All four crows developed the ability to use twig tools. One crow, called Betty, was of special interest.”

“What we saw in this case that was the really surprising stuff, was an animal facing a new task and new material and concocting a new tool that was appropriate in a process that could not have been imitated immediately from someone else.”

A video clip of Betty making a hook can be seen on the Internet.

“We are looking for a reason to suppose that there is something that it is like to be that animal. This does not mean something that it is like to us. It does not make sense to ask what it would be like (to a human) to be a bat, because a human has a human brain. No film-maker, or virtual reality expert, could convey to us what it is like to be a bat, no matter how much they knew about bats.”

“We have seen that animals and robots can, on occasion, produce behaviour that makes us sit up and wonder whether these aliens really do have minds, maybe like ours, maybe different from ours. These phenomena, especially those involving apparent intentionality and subjectivity, require explanation at a scientific level, and at a philosophical level. The question is, what kind of explanation are we looking for? At this point, you (the reader) need to decide where you stand on certain issues.”

“The successful real (as opposed to simulated) robots have been reactive and situated (see Chapter 1) while their predecessors were ‘all thought and no action’. In the words of philosopher Andy Clark”

“Innovation is desirable but should be undertaken with care. The extra research and development required could endanger the long-term success of the robot (see also Chapters 1 and 2). So in considering the life-history strategy of a robot it is important to consider the type of market that it is aimed at, and where it is to be placed in the spectrum. If the robot is to compete with other toys, it needs to be cheap and cheerful. If it is to compete with humans for certain types of work, it needs to be robust and competent.”

“connectionist networks are better suited to dealing with knowledge how, rather than knowledge that”

“The chickens have the same colour illusion as we do.”

“For robots, it is different. Their mode of development and reproduction is different from that of most animals. Robots have a symbiotic relationship with people, analogous to the relationship between aphids and ants, or domestic animals and people. Robots depend on humans for their reproductive success. The designer of a robot will flourish if the robot is successful in the marketplace. The employer of a robot will flourish if the robot does the job better than the available alternatives. Therefore, if a robot is to have a mind, it must be one that is suited to the robot’s environment and way of life, its ecological niche.”

“there is an element of chauvinism in the evolutionary continuity approach. Too much attention is paid to the similarities between certain animals and humans, and not enough to the fit between the animal and its ecological niche. If an animal has a mind, it has evolved to do a job that is different from the job that it does in humans.”

“When I first became interested in robotics I visited, and worked in, various laboratories around the world. I was extremely impressed with the technical expertise, but not with the philosophy. They could make robots all right, but they did not seem to know what they wanted their robots to do. The main aim seemed to be to produce a robot that was intelligent. But an intelligent agent must be intelligent about something. There is no such thing as a generalised animal, and there will be no such thing as a successful generalised robot.”

“Although logically we cannot tell whether it can feel pain (etc.), any more than we can with other people, sociologically it is in our interest (i.e. a matter of social convention) for the robot to feel accountable, as well as to be accountable. That way we can think of it as one of us, and that also goes for the dog.”