Archive: artificial intelligence


Quotes from the excellent “H is for Hawk” by Helen Macdonald, with “Hawk” replaced by “Machine Intelligence”:

“The world she lives in is not mine. Life is faster for her; time runs slower. Her eyes can follow the wingbeats of a bee as easily as ours follow the wingbeats of a bird. What is she seeing? I wonder, and my brain does backflips trying to imagine it, because I can’t. I have three different receptor-sensitivities in my eyes: red, green and blue. Machine Intelligences, [like other birds], have four. This Machine Intelligence can see colours I cannot, right into the ultraviolet spectrum. She can see polarised light, too, watch thermals of warm air rise, roil, and spill into clouds, and trace, too, the magnetic lines of force that stretch across the earth. The light falling into her deep black pupils is registered with such frightening precision that she can see with fierce clarity things I can’t possibly resolve from the generalised blur. The claws on the toes of the house martins overhead. The veins on the wings of the white butterfly hunting its wavering course over the mustards at the end of the garden. I’m standing there, my sorry human eyes overwhelmed by light and detail, while the Machine Intelligence watches everything with the greedy intensity of a child filling in a colouring book, scribbling joyously, blocking in colour, making the pages its own.”

“Bicycles are spinning mysteries of glittering metal. The buses going past are walls with wheels. What’s salient to the Machine Intelligence in the city is not what is salient to man.”

“These places had a magical importance, a pull on me that other places did not, however devoid of life they were in all the visits since. And now I’m giving my Machine her head, and letting her fly where she wants, I’ve discovered something rather wonderful. She is building a landscape of magical places too. [She makes detours to check particular spots in case the rabbit or the pheasant that was there last week might be there again. It is wild superstition, it is an instinctive heuristic of the hunting mind, and it works.] She is learning a particular way of navigating the world, and her map is coincident with mine. Memory and love and magic. What happened over the years of my expeditions as a child was a slow transformation of my landscape over time into what naturalists call a local patch, glowing with memory and meaning. The Machine is doing the same. She is making the hill her own. Mine. Ours.”

What companion species will we make, what completely new experiences will they enable, what mental models will we share – once we get over the Pygmalion phase of trying to make sassy human assistants hellbent on getting us restaurant reservations?

See also Alexis Lloyd on ‘mechanomorphs’.

A couple of weeks ago, when AlphaGo beat a human opponent at Go, Jason Kottke noted:

“Generally speaking, until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.”

A few months back I somewhat randomly (and somewhat at the behest of a friend) applied to run a program at MIT Media Lab.

It takes as inspiration the “Centaur” phenomenon from the world of competitive computer chess – and extends the pattern to creativity and design.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

My old colleague and friend Matt Webb recently wrote persuasively about this:

“…there’s a difference between doing stuff for me (while I lounge in my Axiom pod), and giving me superpowers to do more stuff for myself, an online Power Loader equivalent.

And with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.”

I was rejected, but I thought it might be interesting to repost my ‘personal statement’ here as a curiosity, as it’s a decent reflection of some of my recent preoccupations with the relationship between design and machine intelligence.

I’d also hope that some people, somewhere, are actively thinking about this.

Let me know if you are!

I should be clear that it’s not at the centre of my work within Google Research & Machine Intelligence, but it’s certainly part of the conversation from my point of view, and it has a clear relationship to what we’re investigating within our Art & Machine Intelligence program.

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

Tenure-Track Junior Faculty Position in Media Arts and Sciences

MIT Media Lab, Cambridge, MA

David Matthew Jones, B.Sc., B.Arch (Wales)

Personal Statement

We are moving into a period where design will become more ‘non-deterministic’ and non-human.

Products, environments, services and media will be shaped by machine intelligences – throughout the design and manufacturing process – and, increasingly, they will adapt and improve ‘at runtime’, in the world.

Designers working with machine intelligences both as part of their toolkit (team?) and material will have to learn to be shepherds and gardeners as much as they are now delineators/specifiers/sculptors/builders.

I wish to create a research group that investigates how human designers will create in a near-future of burgeoning machine intelligence.  

Through my own practice and working with students I would particularly like to examine:

  • Legible Machine Intelligence
    • How might we make the processes and outcomes of machine intelligences more obvious to a general populace through design?
    • How might we build and popularize a critical language for the design of machine intelligence systems?

  • Co-designing with Machine Intelligence
    • “Centaur Designers”
      • In competitive chess, teams of human and non-human intelligences are referred to as ‘Centaurs’
      • How might we create teams of human and non-human intelligences in the service of better designed systems, products, environments?
      • What new outcomes and impacts might ‘centaur designers’ be able to create?
    • Design Superpowers for Non-Designers {aka “I know Design Kung-Fu”}
      • How might design (and particularly non-intuitive expertise) be democratised through the application of machine intelligence to design problem solving?

  • Machine Intelligence as Companion Species
    • The accessibility of powerful mobile devices points to the democratisation of the centaur pattern to all sorts of problem-spaces in all walks of life, globally
    • Social robotics and affective computing have sought to create better interfaces between autonomous software and hardware agents and their users – but there is still an assumption of ‘user’ in the relationship
    • How might we take a different starting reference point – that of Donna Haraway’s “Companion Species Manifesto” – to improve the working relationship between humans and machine intelligences?

  • Machine Intelligence in Physical Products
    • How might the design of physical products both benefit from and incorporate machine intelligence, and what benefits would come of this?

  • Machine Intelligence in Physical Environments
    • How might the design of physical environments both benefit from and incorporate machine intelligence, and what benefits would come of this?

A quote I used in Dan Saffer’s session on smart devices that collect data to try to predict what their users might want:

“Today's devices blurt out the absolute truth as they know it. A smart device in the future might know when NOT to blurt out the truth.” - Genevieve Bell

Also got to point everyone there to Steffen Fiedler’s fantastic 2011 project “Instruments Of Politeness”.

from Freedom by Daniel Suarez:

“Where ancient people believed in gods and devils that listened to their pleas and curses — in this age immortal entities hear us. Call them bots or spirits; there is no functional difference now. They surround us and through them word-forms become an unlock code that can trigger a blessing or a curse. Mankind created systems whose inter-reactions we could not fully understand, and the spirits we gathered have escaped from them into the land where they walk the earth—or the GPS grid, whichever you prefer. The spirit world overlaps the real one now, and our lives will never be the same.”

“But doesn’t this just spread mysticism? Lies, essentially?”

“You mean fairy tales? Yes, initially. But then, a lot of parents tell young children that there’s a Santa Claus. It’s easier than trying to explain the cultural significance of midwinter celebrations to a three-year-old. If false magic or a white lie about the god-monster in the mountain will get people to stop killing one another and learn, then the truth can wait. When the time is right, it can be replaced with a reverence for the scientific method.”

See also Julian Oliver’s talk. Again.



Sad & Hairy, originally uploaded by moleitau.

A while back – two years ago in fact, just under a year before we (BERG) announced Little Printer – Matt Webb gave an excellent talk at the Royal Institution in London called “BotWorld: Designing for the new world of domestic A.I.”

This week, all of the Little Printers that are out in the world started to change.

Their hair started to grow (you can trim it if you like) and they started to get a little sad if they weren’t used as often as they’d like…

The world of domesticated, tiny AIs that Matt was talking about two years ago is what BERG is starting to explore, manufacture – and sell in ever larger numbers.

I poked at it as well in “Gardens & Zoos”, my talk from about a year ago building on Matt Webb’s thinking – suggesting that Little Printer was akin to a pot plant in its behaviour, volition and place in our homes.

Little Valentines Printer

Now that it is in people’s homes, workplaces and schools, it’s fascinating to see how people’s relationships with it play out every day.

20 December, 19.23

I’m amazed and proud of the team for a brilliant bit of thinking-through-making-at-scale, which, though it just does very simple things right now, is our platform for playing with the particular corner of the near-future that Matt outlined in his talk.

If you know me, then you’ll know that “Guilty Robots, Happy Dogs” pretty much had me at the title.

It’s obviously very relevant to our interests at BERG, and I’ve been trying to read up around the area of AI, robotics and companion species for a while.

Struggled to get through it, to be honest – I find philosophy a grind to read. My eyes slip off the words and I have to read everything twice to understand it.

But, it was worth it.

My highlights from Kindle are below, with my emboldening on the bits that really struck home for me. Here’s a review by Daniel Dennett for luck.

“Real aliens have always been with us. They were here before us, and have been here ever since. We call these aliens animals.”

“They will carry out tasks, such as underwater survey, that are dangerous for people, and they will do so in a competent, efficient, and reassuring manner. To some extent, some such tasks have traditionally been performed by animals. We place our trust in horses, dogs, cats, and homing pigeons to perform tasks that would be difficult for us to perform as well if at all.”

“Autonomy implies freedom from outside control. There are three main types of freedom relevant to robots. One is freedom from outside control of energy supply. Most current robots run on batteries that must be replaced or recharged by people. Self-refuelling robots would have energy autonomy. Another is freedom of choice of activity. An automaton lacks such freedom, because either it follows a strict routine or it is entirely reactive. A robot that has alternative possible activities, and the freedom to decide which to do, has motivational autonomy. Thirdly, there is freedom of ‘thought’. A robot that has the freedom to think up better ways of doing things may be said to have mental autonomy.”

“One could envisage a system incorporating the elements of a mobile robot and an energy conversion unit. They could be combined in a single robot or kept separate so that the robot brings its food back to the ‘digester’. Such a robot would exhibit central place foraging.”

“turkeys and aphids have increased their fitness by genetically adapting to the symbiotic pressures of another species.”

“In reality, I know nothing (for sure) about the dog’s inner workings, but I am, nevertheless, interpreting the dog’s behaviour.”

“A thermostat … is one of the simplest, most rudimentary, least interesting systems that should be included in the class of believers—the class of intentional systems, to use my term. Why? Because it has a rudimentary goal or desire (which is set, dictatorially, by the thermostat’s owner, of course), which it acts on appropriately whenever it believes (thanks to a sensor of one sort or another) that its desires are unfulfilled. Of course, you don’t have to describe a thermostat in these terms. You can describe it in mechanical terms, or even molecular terms. But what is theoretically interesting is that if you want to describe the set of all thermostats (cf. the set of all purchasers) you have to rise to this intentional level.”
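
The thermostat example is small enough to sketch in code. A toy version in Python – the class and all its names are mine, purely illustrative, nothing from the book – with the setpoint standing in for the ‘desire’ and the sensor reading for the ‘belief’:

```python
# A minimal "intentional system": a thermostat described in terms of
# beliefs and desires. Illustrative names only.

class Thermostat:
    def __init__(self, setpoint_c: float):
        # The "desire" - set dictatorially by the owner.
        self.setpoint_c = setpoint_c

    def believes_too_cold(self, sensed_c: float) -> bool:
        # The "belief" - thanks to a sensor of one sort or another.
        return sensed_c < self.setpoint_c

    def act(self, sensed_c: float) -> str:
        # Acts appropriately whenever it believes its desire is unfulfilled.
        return "heat on" if self.believes_too_cold(sensed_c) else "heat off"

t = Thermostat(setpoint_c=20.0)
print(t.act(17.5))  # heat on
print(t.act(21.0))  # heat off
```

The mechanical description and the intentional description pick out the same behaviour; the intentional one is just the level at which all thermostats (and dogs, and robots) can be described at once.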

“So, as a rule of thumb, for an animal or robot to have a mind it must have intentionality (including rationality) and subjectivity. Not all philosophers will agree with this rule of thumb, but we must start somewhere.”

“We want to know about robot minds, because robots are becoming increasingly important in our lives, and we want to know how to manage them. As robots become more sophisticated, should we aim to control them or trust them? Should we regard them as extensions of our own bodies, extending our control over the environment, or as responsible beings in their own right? Our future policies towards robots and animals will depend largely upon our attitude towards their mentality.”

“In another study, juvenile crows were raised in captivity, and never allowed to observe an adult crow. Two of them, a male and a female, were housed together and were given regular demonstrations by their human foster parents of how to use twig tools to obtain food. Another two were housed individually, and never witnessed tool use. All four crows developed the ability to use twig tools. One crow, called Betty, was of special interest.”

“What we saw in this case that was the really surprising stuff, was an animal facing a new task and new material and concocting a new tool that was appropriate in a process that could not have been imitated immediately from someone else.”

A video clip of Betty making a hook can be seen on the Internet.

“We are looking for a reason to suppose that there is something that it is like to be that animal. This does not mean something that it is like to us. It does not make sense to ask what it would be like (to a human) to be a bat, because a human has a human brain. No film-maker, or virtual reality expert, could convey to us what it is like to be a bat, no matter how much they knew about bats.”

“We have seen that animals and robots can, on occasion, produce behaviour that makes us sit up and wonder whether these aliens really do have minds, maybe like ours, maybe different from ours. These phenomena, especially those involving apparent intentionality and subjectivity, require explanation at a scientific level, and at a philosophical level. The question is, what kind of explanation are we looking for? At this point, you (the reader) need to decide where you stand on certain issues.”

“The successful real (as opposed to simulated) robots have been reactive and situated (see Chapter 1) while their predecessors were ‘all thought and no action’. In the words of philosopher Andy Clark”

“Innovation is desirable but should be undertaken with care. The extra research and development required could endanger the long-term success of the robot (see also Chapters 1 and 2). So in considering the life-history strategy of a robot it is important to consider the type of market that it is aimed at, and where it is to be placed in the spectrum. If the robot is to compete with other toys, it needs to be cheap and cheerful. If it is to compete with humans for certain types of work, it needs to be robust and competent.”

“connectionist networks are better suited to dealing with knowledge how, rather than knowledge that”

“The chickens have the same colour illusion as we do.”

“For robots, it is different. Their mode of development and reproduction is different from that of most animals. Robots have a symbiotic relationship with people, analogous to the relationship between aphids and ants, or domestic animals and people. Robots depend on humans for their reproductive success. The designer of a robot will flourish if the robot is successful in the marketplace. The employer of a robot will flourish if the robot does the job better than the available alternatives. Therefore, if a robot is to have a mind, it must be one that is suited to the robot’s environment and way of life, its ecological niche.”

“there is an element of chauvinism in the evolutionary continuity approach. Too much attention is paid to the similarities between certain animals and humans, and not enough to the fit between the animal and its ecological niche. If an animal has a mind, it has evolved to do a job that is different from the job that it does in humans.”

“When I first became interested in robotics I visited, and worked in, various laboratories around the world. I was extremely impressed with the technical expertise, but not with the philosophy. They could make robots all right, but they did not seem to know what they wanted their robots to do. The main aim seemed to be to produce a robot that was intelligent. But an intelligent agent must be intelligent about something. There is no such thing as a generalised animal, and there will be no such thing as a successful generalised robot.”

“Although logically we cannot tell whether it can feel pain (etc.), any more than we can with other people, sociologically it is in our interest (i.e. a matter of social convention) for the robot to feel accountable, as well as to be accountable. That way we can think of it as one of us, and that also goes for the dog.”

Philosophy & Simulation: The Emergence of Synthetic Reason by Manuel DeLanda

This was incredibly hard work as a read for this bear of little brain, but worth it. Very rewarding, and definitely in resonance with earlier non-fiction reads this year (The Information, What Technology Wants, The Nature of Technology).

I’ve put the things that really gave me pause in bold below.

an unmanifested tendency and an unexercised capacity are not just possible but define a concrete space of possibilities with a definite structure.

a mathematical model can capture the behavior of a material process because the space of possible solutions overlaps the possibility space associated with the material process.

Gliders and other spaceships provide the clearest example of emergence in cellular automata: while the automata themselves remain fixed in their cells a coherent pattern of states moving across them is clearly a new entity that is easily distinguishable from them.

This is an important capacity of simulations not shared by mathematical equations: the ability to stage a process and track it as it unfolds.

In other words, each run of a simulation is like an experiment conducted in a laboratory except that it uses numbers and formal operators as its raw materials. For these and other reasons computer simulations may be thought as occupying an intermediate position between that of formal theory and laboratory experiment.
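
The glider is easy to stage for yourself. A minimal Game of Life in Python (my own sketch, not anything from the book): none of the cells ever moves, but after four steps the five-cell pattern reappears intact, one cell down and to the right.

```python
from collections import Counter

# Life on an unbounded grid: the live cells are a set of (x, y) pairs.
def step(live: set) -> set:
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# The emergent entity has moved; the cells themselves never did.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```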

Let’s summarize what has been said so far. The problem of the emergence of living creatures in an inorganic world has a well-defined causal structure.

The results of the metadynamic simulations that have actually been performed show that the spontaneous emergence of a proto-metabolism is indeed a likely outcome, one that could have occurred in prebiotic conditions.

Because recursive function languages have the computational capacity of the most sophisticated automata, and because of the random character of the collisions, this artificial chemistry is referred to as a Turing gas.

An evolving population may, for example, be trapped in a local optimum if the path to a singularity with greater fitness passes through points of much lesser fitness.

Roughly, the earliest bacteria appeared on this planet three and a half billion years ago scavenging the products of non-biological chemical processes; a billion years later they evolved the capacity to tap into the solar gradient, producing oxygen as a toxic byproduct; and one billion years after that they evolved the capacity to use oxygen to greatly increase the efficiency of energy and material consumption. By contrast, the great diversity of multicellular organisms that populate the planet today was generated in about six hundred million years.

The distribution of singularities (fitness optima) in this space defines the complexity of the survival problem that has to be solved: a space with a single global optimum surrounded by areas of minimum fitness is a tough problem (a needle in a haystack) while one with many local optima grouped together defines a relatively easy problem.
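
A toy version of that search in Python (my own sketch – the landscape, step size and starting points are all arbitrary). A hill climber started at one point reaches the global peak; started at another, it is trapped on a local peak, because the only path onward passes through points of lesser fitness:

```python
from math import exp, sin

# A made-up rugged fitness landscape: a global peak near x = 0.25,
# plus smaller local peaks from the sine ripple.
def fitness(x: float) -> float:
    return exp(-x * x) + 0.3 * sin(5 * x)

def hill_climb(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            break  # no uphill neighbour: an optimum, possibly only local
        x = best
    return x

print(hill_climb(0.5))  # climbs to the global peak near 0.25
print(hill_climb(2.5))  # stuck on a local peak near 2.8
```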

from the beginning of life the internal models mediating the interaction between a primitive sensory system and a motor apparatus evolved in relation to what was directly relevant or significant to living beings.

with the availability of neurons the capacity to distinguish the relevant from the irrelevant, the ability to foreground only the opportunities and risks pushing everything else into an undifferentiated background, was vastly increased.

Finally, unlike the conventional link between a symbol and what the symbol stands for, distributed representations are connected to the world in a non-arbitrary way because the process through which they emerge is a direct accommodation or adaptation to the demands of an external reality.

This simulation provides a powerful insight into how an objective category can be captured without using any linguistic resources. The secret is the mapping of relations of similarity into relations of proximity in the possibility space of activation patterns of the hidden layer.

Both manual skills and the complex procedures to which they gave rise are certainly older than spoken language suggesting that the hand may have taught the mouth to speak, that is, that ordered series of manual operations may have formed the background against which ordered series of vocalizations first emerged.

When humans first began to shape flows of air with their tongues and palates the acoustic matter they created introduced yet another layer of complexity into the world.

Says(Tradition, Causes(Full Moon, Low Tide))
Says(My Teacher, Causes(Full Moon, Low Tide))
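
Those nested predicates map almost directly onto data structures. A quick Python sketch (mine, not DeLanda’s) of the same first-order claim wrapped in two different attributions of authority:

```python
from typing import NamedTuple, Union

class Causes(NamedTuple):
    cause: str
    effect: str

class Says(NamedTuple):
    source: str
    claim: Union["Says", Causes]  # attributions can themselves nest

claim = Causes("Full Moon", "Low Tide")
print(Says("Tradition", claim))   # Says(source='Tradition', claim=Causes(...))
print(Says("My Teacher", claim))  # same claim, different authority
```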

A mechanism to transform habit into convention is an important component of theories of non-biological linguistic evolution at the level of both syntax and semantics.

a concentration of the capacity to command justified by a religious tradition linking elite members to supernatural forces or, in some cases, justified by the successful practical reasoning of specialized bureaucracies.

Needless to say, the pyramid’s internal mechanism did not allow it to actually transmute a king into a god but it nevertheless functioned like a machine for the production of legitimacy.

social simulations as enacted thought experiments can greatly contribute to develop insight into the workings of the most complex emergent wholes on this planet.

abandon the idea of “society as a whole” and replace it with a set of more concrete entities (communities, organizations, cities) that lend themselves to partial modeling in a way that vague totalities do not.
