Archive

artificial intelligence

A quote I used in Dan Saffer's session on smart devices that collect data to try to predict what their users might want:

“Today's devices blurt out the absolute truth as they know it. A smart device in the future might know when NOT to blurt out the truth.” - Genevieve Bell

Also got to point everyone there to Steffen Fiedler's fantastic 2011 project "Instruments Of Politeness".

from Freedom by Daniel Suarez:

“Where ancient people believed in gods and devils that listened to their pleas and curses — in this age immortal entities hear us. Call them bots or spirits; there is no functional difference now. They surround us and through them word-forms become an unlock code that can trigger a blessing or a curse. Mankind created systems whose inter-reactions we could not fully understand, and the spirits we gathered have escaped from them into the land where they walk the earth—or the GPS grid, whichever you prefer. The spirit world overlaps the real one now, and our lives will never be the same.”

“But doesn’t this just spread mysticism? Lies, essentially?”

“You mean fairy tales? Yes, initially. But then, a lot of parents tell young children that there’s a Santa Claus. It’s easier than trying to explain the cultural significance of midwinter celebrations to a three-year-old. If false magic or a white lie about the god-monster in the mountain will get people to stop killing one another and learn, then the truth can wait. When the time is right, it can be replaced with a reverence for the scientific method.”

See also Julian Oliver’s talk. Again.



Sad & Hairy, originally uploaded by moleitau.

A while back – two years ago in fact, just under a year before we (BERG) announced Little Printer – Matt Webb gave an excellent talk at the Royal Institution in London called “BotWorld: Designing for the new world of domestic A.I.”

This week, all of the Little Printers that are out in the world started to change.

Their hair started to grow (you can trim it if you like) and they started to get a little sad if they weren’t used as often as they’d like…


The world of domesticated, tiny AIs that Matt was talking about two years ago is what BERG is starting to explore, manufacture – and sell in ever larger numbers.

I poked at it as well in my talk “Gardens & Zoos” about a year ago, building on Matt Webb's thinking – suggesting that Little Printer was akin to a pot-plant in its behaviour, volition and place in our homes.

Little Valentines Printer

Now that it is in people’s homes, workplaces and schools, it’s fascinating to watch how people’s relationships with it play out every day.

20 December, 19.23

I’m amazed and proud of the team for a brilliant bit of thinking-through-making-at-scale, which, though it just does very simple things right now, is our platform for playing with the particular corner of the near-future that Matt outlined in his talk.

If you know me, then you’ll know that “Guilty Robots, Happy Dogs” pretty much had me at the title.

It’s obviously very relevant to our interests at BERG, and I’ve been trying to read up around the area of AI, robotics and companion species for a while.

Struggled to get through it, to be honest – I find philosophy a grind to read. My eyes slip off the words and I have to read everything twice to understand it.

But, it was worth it.

My highlights from Kindle below, and my emboldening on bits that really struck home for me. Here’s a review by Daniel Dennett for luck.

“Real aliens have always been with us. They were here before us, and have been here ever since. We call these aliens animals.”

“They will carry out tasks, such as underwater survey, that are dangerous for people, and they will do so in a competent, efficient, and reassuring manner. To some extent, some such tasks have traditionally been performed by animals. We place our trust in horses, dogs, cats, and homing pigeons to perform tasks that would be difficult for us to perform as well if at all.”

“Autonomy implies freedom from outside control. There are three main types of freedom relevant to robots. One is freedom from outside control of energy supply. Most current robots run on batteries that must be replaced or recharged by people. Self-refuelling robots would have energy autonomy. Another is freedom of choice of activity. An automaton lacks such freedom, because either it follows a strict routine or it is entirely reactive. A robot that has alternative possible activities, and the freedom to decide which to do, has motivational autonomy. Thirdly, there is freedom of ‘thought’. A robot that has the freedom to think up better ways of doing things may be said to have mental autonomy.”

“One could envisage a system incorporating the elements of a mobile robot and an energy conversion unit. They could be combined in a single robot or kept separate so that the robot brings its food back to the ‘digester’. Such a robot would exhibit central place foraging.”

“turkeys and aphids have increased their fitness by genetically adapting to the symbiotic pressures of another species.”

“In reality, I know nothing (for sure) about the dog’s inner workings, but I am, nevertheless, interpreting the dog’s behaviour.”

“A thermostat … is one of the simplest, most rudimentary, least interesting systems that should be included in the class of believers—the class of intentional systems, to use my term. Why? Because it has a rudimentary goal or desire (which is set, dictatorially, by the thermostat’s owner, of course), which it acts on appropriately whenever it believes (thanks to a sensor of one sort or another) that its desires are unfulfilled. Of course, you don’t have to describe a thermostat in these terms. You can describe it in mechanical terms, or even molecular terms. But what is theoretically interesting is that if you want to describe the set of all thermostats (cf. the set of all purchasers) you have to rise to this intentional level.”
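
Dennett’s point renders almost literally into code. A toy sketch of my own (the names are illustrative, not from the book) – a rudimentary “desire” set dictatorially by the owner, a “belief” courtesy of a sensor, and action whenever the belief says the desire is unfulfilled:

```python
# A toy rendering of the thermostat-as-believer, at the intentional level.

class Thermostat:
    def __init__(self, desired_temp):
        self.desired_temp = desired_temp          # its "desire", set by the owner

    def believes_desire_unfulfilled(self, sensed_temp):
        return sensed_temp < self.desired_temp    # its "belief", via a sensor

    def act(self, sensed_temp):
        if self.believes_desire_unfulfilled(sensed_temp):
            return "heater on"                    # acts appropriately on the belief
        return "heater off"

print(Thermostat(21).act(18))   # -> heater on
```

The same device could of course be described mechanically or molecularly – the quote’s point is that the class of all thermostats only becomes describable at this intentional level.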

“So, as a rule of thumb, for an animal or robot to have a mind it must have intentionality (including rationality) and subjectivity. Not all philosophers will agree with this rule of thumb, but we must start somewhere.”

“We want to know about robot minds, because robots are becoming increasingly important in our lives, and we want to know how to manage them. As robots become more sophisticated, should we aim to control them or trust them? Should we regard them as extensions of our own bodies, extending our control over the environment, or as responsible beings in their own right? Our future policies towards robots and animals will depend largely upon our attitude towards their mentality.”

“In another study, juvenile crows were raised in captivity, and never allowed to observe an adult crow. Two of them, a male and a female, were housed together and were given regular demonstrations by their human foster parents of how to use twig tools to obtain food. Another two were housed individually, and never witnessed tool use. All four crows developed the ability to use twig tools. One crow, called Betty, was of special interest.”

“What we saw in this case that was the really surprising stuff, was an animal facing a new task and new material and concocting a new tool that was appropriate in a process that could not have been imitated immediately from someone else.”

A video clip of Betty making a hook can be seen on the Internet.

“We are looking for a reason to suppose that there is something that it is like to be that animal. This does not mean something that it is like to us. It does not make sense to ask what it would be like (to a human) to be a bat, because a human has a human brain. No film-maker, or virtual reality expert, could convey to us what it is like to be a bat, no matter how much they knew about bats.”

“We have seen that animals and robots can, on occasion, produce behaviour that makes us sit up and wonder whether these aliens really do have minds, maybe like ours, maybe different from ours. These phenomena, especially those involving apparent intentionality and subjectivity, require explanation at a scientific level, and at a philosophical level. The question is, what kind of explanation are we looking for? At this point, you (the reader) need to decide where you stand on certain issues”

“The successful real (as opposed to simulated) robots have been reactive and situated (see Chapter 1) while their predecessors were ‘all thought and no action’. In the words of philosopher Andy Clark”

“Innovation is desirable but should be undertaken with care. The extra research and development required could endanger the long-term success of the robot (see also Chapters 1 and 2). So in considering the life-history strategy of a robot it is important to consider the type of market that it is aimed at, and where it is to be placed in the spectrum. If the robot is to compete with other toys, it needs to be cheap and cheerful. If it is to compete with humans for certain types of work, it needs to be robust and competent.”

“connectionist networks are better suited to dealing with knowledge how, rather than knowledge that”

“The chickens have the same colour illusion as we do.”

“For robots, it is different. Their mode of development and reproduction is different from that of most animals. Robots have a symbiotic relationship with people, analogous to the relationship between aphids and ants, or domestic animals and people. Robots depend on humans for their reproductive success. The designer of a robot will flourish if the robot is successful in the marketplace. The employer of a robot will flourish if the robot does the job better than the available alternatives. Therefore, if a robot is to have a mind, it must be one that is suited to the robot’s environment and way of life, its ecological niche.”

“there is an element of chauvinism in the evolutionary continuity approach. Too much attention is paid to the similarities between certain animals and humans, and not enough to the fit between the animal and its ecological niche. If an animal has a mind, it has evolved to do a job that is different from the job that it does in humans.”

“When I first became interested in robotics I visited, and worked in, various laboratories around the world. I was extremely impressed with the technical expertise, but not with the philosophy. They could make robots all right, but they did not seem to know what they wanted their robots to do. The main aim seemed to be to produce a robot that was intelligent. But an intelligent agent must be intelligent about something. There is no such thing as a generalised animal, and there will be no such thing as a successful generalised robot.”

“Although logically we cannot tell whether it can feel pain (etc.), any more than we can with other people, sociologically it is in our interest (i.e. a matter of social convention) for the robot to feel accountable, as well as to be accountable. That way we can think of it as one of us, and that also goes for the dog.”

Philosophy & Simulation: The Emergence of Synthetic Reason by Manuel DeLanda

This was incredibly hard work as a read for this bear of little brain, but worth it. Very rewarding and definitely in resonance with earlier non-fiction reads this year (The Information, What Technology Wants, The Nature of Technology).

I’ve put the things that really gave me pause in bold below.

an unmanifested tendency and an unexercised capacity are not just possible but define a concrete space of possibilities with a definite structure.

a mathematical model can capture the behavior of a material process because the space of possible solutions overlaps the possibility space associated with the material process.

Gliders and other spaceships provide the clearest example of emergence in cellular automata: while the automata themselves remain fixed in their cells a coherent pattern of states moving across them is clearly a new entity that is easily distinguishable from them.
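
The glider example is easy to reproduce. A minimal Conway’s Game of Life sketch (mine, standard rules) – the cells stay fixed, but the coherent pattern of states moves diagonally across them, reappearing one square over every four steps:

```python
from collections import Counter

def step(live):
    """One Game of Life update; `live` is a set of (x, y) live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbours, survival on 2 or 3
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for t in range(3):
    print(t * 4, sorted(glider))   # same shape every 4 steps, shifted (+1, +1)
    for _ in range(4):
        glider = step(glider)
```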

This is an important capacity of simulations not shared by mathematical equations: the ability to stage a process and track it as it unfolds.

In other words, each run of a simulation is like an experiment conducted in a laboratory except that it uses numbers and formal operators as its raw materials. For these and other reasons computer simulations may be thought as occupying an intermediate position between that of formal theory and laboratory experiment.

Let’s summarize what has been said so far. The problem of the emergence of living creatures in an inorganic world has a well-defined causal structure.

The results of the metadynamic simulations that have actually been performed show that the spontaneous emergence of a proto-metabolism is indeed a likely outcome, one that could have occurred in prebiotic conditions.

Because recursive function languages have the computational capacity of the most sophisticated automata, and because of the random character of the collisions, this artificial chemistry is referred to as a Turing gas.

An evolving population may, for example, be trapped in a local optimum if the path to a singularity with greater fitness passes through points of much lesser fitness.

Roughly, the earliest bacteria appeared on this planet three and a half billion years ago scavenging the products of non-biological chemical processes; a billion years later they evolved the capacity to tap into the solar gradient, producing oxygen as a toxic byproduct; and one billion years after that they evolved the capacity to use oxygen to greatly increase the efficiency of energy and material consumption. By contrast, the great diversity of multicellular organisms that populate the planet today was generated in about six hundred million years.

The distribution of singularities (fitness optima) in this space defines the complexity of the survival problem that has to be solved: a space with a single global optimum surrounded by areas of minimum fitness is a tough problem (a needle in a haystack) while one with many local optima grouped together defines a relatively easy problem.
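
Both points – being trapped in a local optimum, and problem difficulty as the distribution of optima – show up in even the crudest hill-climbing sketch. This one is mine, not DeLanda’s, and the landscape is invented:

```python
import math, random

def fitness(x):
    # local optimum near x=2 (height 1), global optimum near x=8 (height 2),
    # separated by a valley of near-zero fitness
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

random.seed(0)
x = 1.0                                           # start near the lower peak
for _ in range(10000):
    candidate = x + random.uniform(-0.2, 0.2)     # a small mutation
    if fitness(candidate) > fitness(x):           # keep only improvements
        x = candidate
print(round(x, 2))   # ends near 2.0: trapped, because every path to the
                     # global optimum passes through points of lesser fitness
```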

from the beginning of life the internal models mediating the interaction between a primitive sensory system and a motor apparatus evolved in relation to what was directly relevant or significant to living beings.

with the availability of neurons the capacity to distinguish the relevant from the irrelevant, the ability to foreground only the opportunities and risks pushing everything else into an undifferentiated background, was vastly increased.

Finally, unlike the conventional link between a symbol and what the symbol stands for, distributed representations are connected to the world in a non-arbitrary way because the process through which they emerge is a direct accommodation or adaptation to the demands of an external reality.

This simulation provides a powerful insight into how an objective category can be captured without using any linguistic resources. The secret is the mapping of relations of similarity into relations of proximity in the possibility space of activation patterns of the hidden layer.

Both manual skills and the complex procedures to which they gave rise are certainly older than spoken language suggesting that the hand may have taught the mouth to speak, that is, that ordered series of manual operations may have formed the background against which ordered series of vocalizations first emerged.

When humans first began to shape flows of air with their tongues and palates the acoustic matter they created introduced yet another layer of complexity into the world.

Says(Tradition, Causes(Full Moon, Low Tide))
Says(My Teacher, Causes(Full Moon, Low Tide))

A mechanism to transform habit into convention is an important component of theories of non-biological linguistic evolution at the level of both syntax and semantics.

a concentration of the capacity to command justified by a religious tradition linking elite members to supernatural forces or, in some cases, justified by the successful practical reasoning of specialized bureaucracies.

Needless to say, the pyramid’s internal mechanism did not allow it to actually transmute a king into a god but it nevertheless functioned like a machine for the production of legitimacy.

social simulations as enacted thought experiments can greatly contribute to develop insight into the workings of the most complex emergent wholes on this planet.

abandon the idea of “society as a whole” and replace it with a set of more concrete entities (communities, organizations, cities) that lend themselves to partial modeling in a way that vague totalities do not.

Imperial

Just been to a talk at Imperial College London, put on as part of the London Games Festival, presenting viewpoints from the games industry (Peter Molyneux and someone from Eidos) and from AI academia. Very accessible and interesting.

I’ve tried my best to do an Alice, but I’ve not quite got the knack – so my far-from-verbatim notes are below:

The future of AI in games
London Games Festival

4.10.06

peter molyneux, prof. mark cavazza, dr. simon colton

intro
john cass, icl

article in the economist from the summer (CF)

next challenge is to develop believable characters and intelligences in game worlds

bring together two communities: the game developers from industry and the artificial intelligence research community from academia

take industry to a new level

—-

peter molyneux

this is the most interesting area of game design to him

sorry – on behalf of games industry for grabbing the term AI and totally abusing it.

there is very little real AI in games

AI is mistaken for
– navigation
– avoidance
– crude simulations
– scripted behaviour

this is where we are, where do we want to be?

we need a whole raft of REAL AI and we’re starting to get the processing power to do it. next gen consoles could be the key.

- agent AI: need for convincing characters, recognizing what you are doing as a player. we are doing so much more as players – more freedom, more emotion. fable2: friendship, family – relationships… how to do this convincingly?

- cloning AI: online is here to stay and this creates big problems… what about having a clone of yourself to remain in a persistent world so you can stay ‘present’ when you should go to sleep (UK vs. australia)

- learning AI – adapting to players and play.

- balancing AI: we’ve failed because we are not mass market – we only appeal to a very small audience… biggest game = 20m, should be 200m… one of the reasons we have not got the reach is that we have no way to balance the difficulty of the game – looking at how the player plays and balancing the game play accordingly (cf. Csikszentmihalyi’s flow, Robin Hunicke’s work) – a toy sketch below
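
A toy sketch of that balancing idea – the win-rate thresholds and update rule are my own assumptions, not anything Molyneux described in detail:

```python
# Nudge difficulty so the player stays between boredom and frustration
# (the "flow channel"). Thresholds here are invented for illustration.

def adjust_difficulty(difficulty, recent_win_rate):
    if recent_win_rate > 0.8:             # cruising: risk of boredom
        return difficulty + 0.1
    if recent_win_rate < 0.4:             # struggling: risk of frustration
        return max(0.1, difficulty - 0.1)
    return difficulty                     # in the channel: leave it alone

difficulty = 1.0
for win_rate in (0.9, 0.9, 0.5, 0.2, 0.6):
    difficulty = adjust_difficulty(difficulty, win_rate)
    print(win_rate, round(difficulty, 1))
```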

AI future – will change the way that games are designed, create new types of game, create unique experiences… my game experience will be different from yours. far more realistic worlds can be created… visually we are getting close, but need great AI to back this up otherwise they will feel flawed. i will be able to stand up in 5yrs time and say look at how games have changed due to AI.

—-
DR. MARK CAVAZZA, UNIVERSITY OF TEESSIDE

AI for interactive storytelling

‘long term endeavor to reconcile linear story and interaction’

reincorporate aesthetic qualities of linear media

character-based storytelling: Hierarchical Task Network planning (AI technique – look up?) to describe characters’ roles.

AI maintains consistency of the story, while allowing adaptation… but often driving towards satisfying conclusion (interactive storytelling is not just changing the ending!)

sitcom generator: each character’s role is described as an HTN plan. (modelled on ‘Friends’)

dynamic interactions between characters contribute to generating multiple situations not encoded in the original roles.

sitcom chosen to test the theory – as they are essentially/generally simple story forms (not shakespeare!)
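
A minimal sketch of the HTN idea in these notes – a character’s role is a top-level task that decomposes into subtasks until only primitive actions remain. The sitcom tasks are invented for illustration; a real HTN planner chooses among alternative methods and checks preconditions, where this just takes the first method:

```python
# Methods say how compound tasks decompose; anything without a method
# is a primitive action the character can perform directly.

methods = {
    "morning_routine": [["make_coffee", "read_paper"]],
    "make_coffee":     [["boil_water", "pour_coffee"]],
}

def plan(task):
    if task not in methods:
        return [task]                       # primitive action
    return [step for sub in methods[task][0] for step in plan(sub)]

print(plan("morning_routine"))
# ['boil_water', 'pour_coffee', 'read_paper']
```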

we are generating a lot of stories and a lot of them are rubbish… need to filter these… and we can only generate about 6mins…

what’s the diff between this and The Sims? Sims have no narrative drive, they react (narrative is in the eye of the beholder)

every time these characters act… they have a plan.

silent movies atm, but next step is dialogue.
this is very processing power intensive, but making progress with small-scale demonstrations. (shows one) Scalability is not really there atm.

real challenge is to develop true interactive storytelling capabilities.

The world is an actor: worlds behaviour drives narrative events. blurring the boundaries of physics and AI – the world is ‘plotting against the character’… inspired by the ‘final destination’ movies!

the whole environment ‘has a plan’

it’s easy to look clever in AI with small examples, the real challenge is scalability… but we think the principles here are sound.

(doing research project with DTI/Eidos)

Dr. Simon Colton
AI and Games – Do’s and Don’ts

(games industry) unhealthy obsession #1: the modeling of opponents

(AI academia) unhealthy obsession #2: playing board games
From the Machine Learning journal: ‘learning to bid in bridge’ is a 30-yr project and it’s still going!

multiple mismatches in these two worlds
– the AI in games has low ram, few cycles, little time
– AI agents really want lots of ram, time, cycles

- ‘An AI’ that is referred to in games does not exist as termed by academia… a ‘complete AI’ would have emotional intelligence, reasoning, etc…

we’re developing AI the wrong way round – higher reasoning rather than basic instincts (cf. rodney brooks)

- ‘playing chess is a doddle compared to avoiding a tiger’

- AI researchers think it’s about BEATING the player, whereas games industry want AIs to help engage the player further in the game world.

so, what else can we do?

– data mining game-play data
— changing how the game plays
– affective computing (HCI)
— how to tell from a player’s face what their emotional response is and changing game-play
– automatic avatars (to step in your place for sleep and toilet breaks!)
– but could be most useful in the design stage

comparison to the biotech industry
is designing a game more difficult than designing a drug? maybe? do drug companies have more funds? more IP issues? maybe?
BUT – drug companies absolutely make more use of AI in their design process than the games industry…

picks and shovels (where the money is) – getting the computer to program itself (misused phrase, but…)
– machine learning
– genetic programming
— combining gives more than the sum of parts

one possible approach

evolutionary approach enables you to generate new entities for games – NPCs, cars, objects… program AIs to use middleware to create these things

AI makes 100 bad models of a football – choose the best 10, then breed… 1000s of generations later you get valuable assets…

machine learns your aesthetic as a designer…
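
The loop Colton describes is a plain genetic algorithm. A toy version of my own – here “fitness” is closeness to a target number, standing in for the designer’s aesthetic judgement of a football model:

```python
import random

random.seed(1)
TARGET = 42.0                              # stand-in for "a good football"

def fitness(x):
    return -abs(x - TARGET)                # higher is better

population = [random.uniform(0, 100) for _ in range(100)]       # 100 bad models
for _ in range(1000):                                           # many generations
    best = sorted(population, key=fitness, reverse=True)[:10]   # choose best 10
    population = [random.choice(best) + random.gauss(0, 1)      # breed + mutate
                  for _ in range(100)]
print(round(max(population, key=fitness), 1))   # converges near 42.0
```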

AI for game environment design

possible human-computer interaction in the design phase of games

designer creates a few buildings in his/her style
AI takes over and creates the rest of the city, designer refines the process…

great at design stage, but possibilities at run-time…

now the hard part: it’s still not easy to use AI/machine learning techniques in an off-the-shelf manner
– the best techniques come with a human (expert)

majority of AI academics don’t know how games are designed – start of a conversation?

summary: good AI opponents still a way off

AI people should think about engaging rather than conquering opponents

games people should think more about using AI tools in the design phase.

google: “AI bite”
