The variety of potential minds in the universe is vast. Recently we’ve begun to explore the species of animal minds on earth, and as we do we have discovered, with increasing respect, that we have met many other kinds of intelligences already. Whales and dolphins keep surprising us with their intricate and weirdly different intelligence. Precisely how a mind can be different from or superior to our minds is very difficult to imagine. One way to help us imagine what greater yet different intelligences would be like is to begin to create a taxonomy of the variety of minds. This matrix of minds would include animal minds, and machine minds, and possible minds, particularly transhuman minds, like the ones that science fiction writers have come up with.
Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy to accept the answers from an alien intelligence. We already see that reluctance in our difficulty in approving mathematical proofs done by computer. Some mathematical proofs have become so complex only computers are able to rigorously check every step, but these proofs are not accepted as “proof” by all mathematicians. The proofs are not understandable by humans alone so it is necessary to trust a cascade of algorithms, and this demands new skills in knowing when to trust these creations. Dealing with alien intelligences will require similar skills, and a further broadening of ourselves. An embedded AI will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real-time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we “know” something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, science will have to know, and progress, according to the criteria of new minds. At that point everything changes.
An AI will think about science like an alien, vastly different than any human scientist, thereby provoking us humans to think about science differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science or art. The alienness of artificial intelligence will become more valuable to us than its speed or power.
As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. Each step of surrender—we are not the only mind that can play chess, fly a plane, make music, or invent a mathematical law—will be painful and sad. We’ll spend the next three decades—indeed, perhaps the next century—in a permanent identity crisis, continually asking ourselves what humans are good for.
The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.

A couple of weeks ago, when AlphaGo beat a human opponent at Go, Jason Kottke noted:

“Generally speaking, until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.”

A few months back I somewhat randomly (and somewhat at the behest of a friend) applied to run a program at MIT Media Lab.

It takes as inspiration the “Centaur” phenomenon from the world of competitive computer chess – and extends the pattern to creativity and design.

I’m personally much more interested in machine intelligence as human augmentation rather than the oft-hyped AI assistant as a separate embodiment.

My old colleague and friend Matt Webb recently wrote persuasively about this:

“…there’s a difference between doing stuff for me (while I lounge in my Axiom pod), and giving me superpowers to do more stuff for myself, an online Power Loader equivalent.

And with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.”

I was rejected, but I thought it might be interesting to repost my ‘personal statement’ here as a curiosity, as it’s a decent reflection of some of my recent preoccupations with the relationship between design and machine intelligence.

I’d also hope that some people, somewhere, are actively thinking about this.

Let me know if you are!

I should be clear that this is not at the centre of my work within Google Research & Machine Intelligence, but it is certainly part of the conversation from my point of view, and has a clear relationship to what we’re investigating within our Art & Machine Intelligence program.



Tenure-Track Junior Faculty Position in Media Arts and Sciences

MIT Media Lab, Cambridge, MA

David Matthew Jones, B.Sc., B.Arch (Wales)

Personal Statement

We are moving into a period where design will become more ‘non-deterministic’ and non-human.

Products, environments, services and media will be shaped by machine intelligences – throughout the design and manufacturing process – and, increasingly, they will adapt and improve ‘at runtime’, in the world.

Designers working with machine intelligences both as part of their toolkit (team?) and as material will have to learn to be shepherds and gardeners as much as they are now delineators/specifiers/sculptors/builders.

I wish to create a research group that investigates how human designers will create in a near-future of burgeoning machine intelligence.  

Through my own practice and working with students I would particularly like to examine:


  • Legible Machine Intelligence
    • How might we make the processes and outcomes of machine intelligences more obvious to a general populace through design?
    • How might we build and popularize a critical language for the design of machine intelligence systems?



  • Co-designing with Machine Intelligence
    • “Centaur Designers”
      • In competitive chess, teams of human and non-human intelligences are referred to as ‘Centaurs’
      • How might we create teams of human and non-human intelligences in the service of better designed systems, products, environments?
      • What new outcomes and impacts might ‘centaur designers’ be able to create?
    • Design Superpowers for Non-Designers {aka “I know Design Kung-Fu”}
      • How might design (and particularly non-intuitive expertise) be democratised through the application of machine intelligence to design problem solving?



  • Machine Intelligence as Companion Species
    • The accessibility of powerful mobile devices points to the democratisation of the centaur pattern across all sorts of problem-spaces in all walks of life, globally
    • Social robotics and affective computing have sought to create better interfaces between autonomous software and hardware agents and their users – but there is still an assumption of ‘user’ in the relationship
    • How might we take a different starting reference point – that of Donna Haraway’s “Companion Species Manifesto” – to improve the working relationship between humans and machine intelligences?



  • Machine Intelligence in Physical Products
    • How might the design of physical products both benefit from and incorporate machine intelligence, and what benefits would come of this?



  • Machine Intelligence in Physical Environments
    • How might the design of physical environments both benefit from and incorporate machine intelligence, and what benefits would come of this?




Chris Woebken and Elliott Montgomery practice together as The Extrapolation Factory here in NYC. They often stage shows and workshops, and teach a blend of speculative design provocation, storytelling, and making.

I first met them both at the RCA, and so I was thrilled when they asked me in late summer to be part of their show at ApexArt that would be based on the premise of designing future systems or objects for New York City’s Office of Emergency Management.

The show is on now until December 19th 2015 at ApexArt, but I thought I’d write up a little bit of the project I submitted to the group show along with my fantastic collaborators Isaac Blankensmith and Matt Delbridge.

The premise

Chris and Elliott’s first recruit was writer Tim Maughan, who, based on the initial briefing with the OEM, created a scenario that we as designers and artists would respond to, creating props for a group of improvisational actors to use in a disaster training simulation. More on that later!

Here’s what we got early on from Tim by way of stimulus…

NYC has been hit by a major pandemic (the exact nature of which is still to be decided – something new/fictitious). The city has been battling against it for several weeks now, with research showing that it may spread easily via the transit system. The city, in association with the public transport authorities and the police, is enforcing a strict regime of control, monitoring and – where necessary – quarantining. By constant monitoring of infection data (using medical reports, air monitoring/sampling, social media data mining etc.) they are attempting to watch, predict, and hopefully limit spread. Using mobile ‘pop-up’ checkpoints they are monitoring and controlling use of buses and the subway, and in extreme cases closing off parts of the city completely from mass transit. Although it seems to be largely working, and fatalities have been relatively low so far, it has created an understandable sense of paranoia and distrust amongst NYC citizens.

The Canal Street subway station, late evening.

Our characters are two individuals heading home to Brooklyn after leaving a show at apexart. They are surprised to find that the streets seem fairly empty. Just as they reach Canal station they are alerted (via Wireless Emergency Alert) that quarantine and checkpoint procedures have been activated in the neighbourhood, and a pop-up infection checkpoint has been set up at the entrance to the subway. They’ve never encountered one of these before, but in order to get home they must pass through it by proving they do not pose an infection risk.

Our proposals

I submitted two pieces with Isaac and Matt for the show.

The first concept, “Citibikefrastructure”, was built out into a prop which features in the gallery; the second concept, “Bodyclocks”, featured in the catalogue and briefly in the final performed scenario.

1. “Citibikefrastructure”

This first concept uses the NYC citibike bike share program as a widely installed base of checkpoints / support points in the city that have data and power, plus very secure locking mechanisms connected to the network.

The essential thought behind this project was this: What if these were used in times of emergency with modular systems of mobile equipment that plugged into them?

I started to think of both top-down and bottom-up uses for this system.

Top-down uses would be to assist in ‘command and control’ type situations, mainly by the OEM and other emergency services in the city.

    • e.g. Command post
      • Loudhailer system
      • Solar panels
      • Space heaters
      • Shelter / lights / air-conditioning
      • Wireless mesh networking
      • Refrigerator for medicine / perishable materials
      • Water purification

But the ‘bottom-up’ uses seemed perhaps more promising to me:

    • USB charging stations
      • Inspiration: after Hurricane Sandy many people who still had electricity offered it up via running powerstrips and extension cords into the streets so people could charge their mobile devices and alert loved ones, keep up with the news etc.
    • Wireless mesh networking – p2p store/forward text across the citibike network (see the sketch after this list).
    • Information display / FAQomputer
      • e-ink low power signs connected to mesh
      • Bluetooth LE connection to smartphones with ‘take-away’ data
        • PDF maps
        • emergency guides
        • Bluetooth p2p Noticeboard for citizens
      • Blockchain-certified emergency local currency dispenser!
        • Barter/volunteer ‘cash’ infrastructure for self-organising relief orgs a la Occupy Sandy
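The ‘store/forward’ bullet above is the part that feels most prototypable in software, so here is a minimal sketch of the idea. To be clear, this is purely illustrative: the dock names, mesh links and message format are all invented for this post, and nothing here reflects real Citibike hardware or any actual API – it just shows messages being held at a dock and flooded on to neighbouring docks until they reach the dock nearest the recipient.

```python
# Illustrative only: imagined docks acting as store-and-forward relays.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str
    recipient_dock: str               # the dock the recipient is expected to check
    body: str
    hops: list = field(default_factory=list)  # docks that have already relayed it


class DockNode:
    """One hypothetical dock acting as a store-and-forward relay."""

    def __init__(self, name):
        self.name = name
        self.neighbours = []          # docks reachable over the (imagined) mesh
        self.stored = deque()         # messages held until they can move on

    def link(self, other):
        # Mesh links are symmetrical: each dock knows its neighbours.
        self.neighbours.append(other)
        other.neighbours.append(self)

    def accept(self, msg):
        msg.hops.append(self.name)
        self.stored.append(msg)

    def forward(self):
        """Pass stored messages to any neighbour that hasn't seen them yet
        (a naive flood; a real system would need routing and de-duplication)."""
        while self.stored:
            msg = self.stored.popleft()
            if msg.recipient_dock == self.name:
                print(f"[{self.name}] deliver to {msg.recipient_dock}: {msg.body} "
                      f"(route: {' -> '.join(msg.hops)})")
                continue
            for n in self.neighbours:
                if n.name not in msg.hops:
                    n.accept(msg)


if __name__ == "__main__":
    # Three imaginary docks along a route out of Manhattan.
    canal, dekalb, atlantic = DockNode("Canal St"), DockNode("DeKalb Ave"), DockNode("Atlantic Ave")
    canal.link(dekalb)
    dekalb.link(atlantic)

    canal.accept(Message("citizen-1", "Atlantic Ave", "Safe, heading home via Flatbush"))

    # Each 'tick', every dock pushes what it holds one hop further.
    for _ in range(3):
        for dock in (canal, dekalb, atlantic):
            dock.forward()
```

A real deployment would obviously need routing, de-duplication, authentication and some way of notifying the recipient – but even this toy version makes the ‘docks as post-boxes’ idea concrete.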

1:1 Sketch proto


After making some surreptitious measurements of the Citibike docking stations, I started to build a very simple 1:1 model of one of these ‘bottom-up’ modules for the show at the fantastic Bien Hecho woodworking academy in Brooklyn’s Navy Yard.




Detail design and renderings

Meanwhile, Isaac had taken my crappy sketches far beyond, into a wonderfully-realised modular system, and created some lovely renders to communicate it.


Initial sketch by Isaac Blankensmith


Isaac then started to flesh out a modular system that could accommodate the majority of the use-cases we had brainstormed.



Citibikefrastructure final renderings and compositing in situ by Isaac Blankensmith



Some final adjustments were made to the sketch model on the days of installation in the gallery – notably the inclusion of a flashing emergency light, and functioning cellphone charger cables which I hoped would prove popular with gallery visitors if nothing else!

2. “Bodyclocks”

This one is definitely more in the realm of ‘speculative design’ and perhaps flirts with the dystopian a little more than I usually like!

Bodyclocks riffs off the clocks-for-robots concept we sketched out at BERG that created computer-readable objects sync’d to time and place.

Bodyclocks extends this idea to some kind of time-reactive dye, inkjet-squirted onto skin by connected terminals in order to verify and control the movements of individuals in a quarantined city / city district…
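To make the speculative mechanism a little more concrete, here’s a tiny, entirely hypothetical sketch of the checking logic a terminal might run. The validity window, zone names and quarantine rules are all invented for illustration; they aren’t part of any real system (or even of the prop we built).

```python
# Hypothetical Bodyclocks check: is someone's last mark fresh enough, and from a
# permitted zone, to let them through a checkpoint? All values are invented.
from datetime import datetime, timedelta

VALIDITY_WINDOW = timedelta(hours=4)                 # how long a mark stays 'fresh' (invented)
QUARANTINED_ZONES = {"zone-canal", "zone-34th"}      # zones currently locked down (invented)


def mark_is_valid(marked_at: datetime, marked_zone: str, now: datetime) -> bool:
    """Return True if a bodyclock mark would let its wearer pass a checkpoint."""
    if marked_zone in QUARANTINED_ZONES:
        # A mark applied inside a quarantined zone means the wearer must be re-screened.
        return False
    return (now - marked_at) <= VALIDITY_WINDOW


if __name__ == "__main__":
    now = datetime(2015, 11, 24, 23, 30)
    # Marked three hours ago at a Brooklyn checkpoint: still fresh, free to travel.
    print(mark_is_valid(datetime(2015, 11, 24, 20, 30), "zone-atlantic", now))  # True
    # Marked this evening, but inside the Canal St quarantine zone: re-screen.
    print(mark_is_valid(datetime(2015, 11, 24, 22, 0), "zone-canal", now))      # False
```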


I’d deliberately chosen to ‘parasite’ this onto the familiar and mundane design of the ‘sanitation spray’ stations that proliferated suddenly in public/private spaces at the time of the H1N1 scare of 2009…

When you think about it, a new thing appeared in our semi-public realm – and a new ritual with it.

People would quickly become habituated to such objects and give themselves new temporary tracking ‘tattoos’ every time they crossed a threshold…

So, the dystopian angle is pretty obvious here. It doesn’t tend to reflect well on societies when they start to force people to wear identifying marks, after all…

We definitely all talked about that a lot, and about under what circumstances people would tolerate or even elect to have a bodyclock tattoo. Matt Delbridge started creating some fantastic visuals and material to support the scenario.


For the purposes of the show and the performances, Matt D. even made a stamp that the actors could use to give each other bodyclocks…


Would the ritual of applying it in order to travel through the city be seen as something of a necessary evil, much like the security theatre of modern air travel? Or could a visible sign of how far you needed to travel spur assistance from strangers in a city at times of crisis? This proposal aimed to provoke those discussions.


The show and performance

One of the most interesting and exciting parts of being involved in this was that Chris and Elliott wanted to use actors to improvise with our designs as props and Tim’s script and prompt cards as context.


I thought this was a brilliant and brave move – unreliable narrators and guides taking us on as designers and interpreting the work for the audience – and perhaps exposing any emperor’s new clothes or problematic assumptions as they go…

What’s next?

Well – there’s a workshop happening with the OEM based around the show on December 11th. I’m not going to be able to attend, but I actually hope that the citibike idea might get some serious discussion and perhaps folks from the bikesharing companies that run such systems might entertain a further prototype…


The session I staged at FooCamp this year was deliberately meant to be a fun, none-too-taxing diversion at the end of two brain-baking days.

It was based not only on a quote from BSG, but on something that Matt Biddulph had said to me a while back – possibly when we were doing some work together at BERG, but it might have been as far back as our Dopplr days.

He said (something like) that a lot of the machine learning techniques he was deploying on a project were based on 1970s Computer Science theory, but now the horsepower required to run them was cheap and accessible in the form of cloud computing services.

This stuck with me, so for the Foo session I hoped I could aggregate a list of people’s favourite theory work from the 20thC which it might now be possible to turn into practice.

It didn’t quite turn out that way, as Tom Coates pointed out in the session – about halfway through, it morphed into a list of the “prior art” in both fiction and academic theory that you could identify as precursors to current technological preoccupations or practice.

Nevertheless it was a very fun way to spend a sunny Sunday hour in a tent with a flip chart and some very smart folks. Thanks very much as always to O’Reilly for inviting me.

Below is my photo of the final flip charts full of everything from Xanadu to zeppelins…


Steven Johnson drew my attention to this stream of twitter (all these years later ‘tweets’ still makes me cringe) from Marc Andreessen.

Andreessen is now famous as a venture capitalist, a cheerleader of The Californian Ideology, and perhaps most of all for the quote/essay ‘Software is eating the world’.

I have a lot to be thankful to Marc Andreessen for – he, in part, invented the software that effectively gave me (and you, probably) a financially-viable life messing about with what I love – networked technology.

So – assuming you can’t be bothered to click the link – what does he say?


It starts like this.

[Screenshot of the opening tweets of the stream]

Reminds me of “Maximum Happy Imagination” from Robin Sloan’s excellent “Mr Penumbra’s 24-Hour Bookstore”.

“Have you ever played Maximum Happy Imagination?”

“Sounds like a Japanese game show.”

Kat straightens her shoulders. “Okay, we’re going to play. To start, imagine the future. The good future. No nuclear bombs. Pretend you’re a science fiction writer.”

Okay: “World government… no cancer… hover-boards.”

“Go further. What’s the good future after that?”

“Spaceships. Party on Mars.”


“Star Trek. Transporters. You can go anywhere.”


I pause a moment, then realize: “I can’t.”

Kat shakes her head. “It’s really hard. And that’s, what, a thousand years? What comes after that? What could possibly come after that? Imagination runs out. But it makes sense, right? We probably just imagine things based on what we already know, and we run out of analogies in the thirty-first century.”

After a lot of stuff that anyone with mild extropian/protopian/Roddenberrian exposure might nod along to, Andreessen’s stream of consciousness ends like this.

[Screenshot of the closing tweets of the stream]

His analogies run out in the 20th century when it comes to the political, social and economic implications of his maximum happy imagination.

Consumer-capitalism in excelsis?

That system of the world was invented. It’s not really natural. To imagine that capitalism is not subject to deconstruction, reinvention or critique in maximum happy imagination seems a little silly.

If disruption is your mantra – why not go all the way?

He states right at the start that there are zero jobs in the sectors affected by his future. Writers on futures such as Toffler and Rifkin, and SF from the lofty peaks of Arthur C. Clarke to the perhaps lower, more lurid weekly plains of 2000AD have speculated for decades on ‘The Leisure Problem’.

Recently, I read “The Lights in the Tunnel” by Martin Ford, which extrapolates a future similar to Andreessen’s, wherein the self-declared market-capitalist author ends up arguing for something like a welfare state…

Another world is possible, right?

I’d hope Marc might grudgingly nod at that, at least.

It’ll need brains like his to get there.

In the UK, the Conservative government is trying to remove art and design subjects from the core of its new curriculum, the ‘EBacc’, which the Tories want to focus around readin’, ritin’ and ’rithmetic.

This is, of course, pretty disastrous.

An age of STEAM – Science, Technology, Engineering, Art and Mathematics (rather than just STEM) – is what the UK needs to survive in the foothills of the 21stC. The PM David Cameron et al make a lot of noise about supporting “Tech City” etc., but without nurturing inventive thinking at early stages of kids’ educations, we won’t be able to compete against bigger and better-resourced countries.

A friend of mine, Joe McCloud, got a bunch of design firms to get behind a campaign against this – called “#includedesign” – which you can read about here:

I’m pleased to say our company, BERG, is signed up.

I was contacted by a journalist from Dezeen with a couple of questions about the campaign, the importance of design teaching in secondary education etc.

Sir Jony Ive signed up to the campaign in the meantime, which I imagine was a bit more newsworthy, so understandably my answers weren’t used in the piece!

FWIW, I thought I would post my responses here:

1. Why do you think it’s important that design is taught in schools?

Three reasons come to mind.

1) is brutal economics. Global competition for jobs, work, wealth means we as a small country need to out-imagine the bigger ones. We’re good at that at the moment. Why not invest in that? We’re not going to ‘out-grammar’ or ‘out-times-table’ China or India. Art and design sharpen the imagination, even if you go on to be a biologist or a banker. It’s beyond foolish to drop them. We need to invest in our Gross National Imagination to survive the 21stC.

2) is improving engagement in schools. I’m not a teacher, but I think there is a halo effect from good design teaching that makes other subjects shine for kids. When I was a kid, CDT (Craft, Design and Technology) was the great leveller. I had great teachers. The nerdy kids and the tough kids did as well as each other – and stereotypes of how well you were meant to do broke down. That led to kids breaking out of their pre-assigned paths to not-much, and got them enjoying education. Design education could be an engine of social mobility!

3) Being ready for the future. Most of the jobs we do every day at BERG hadn’t been invented when I was at school. Teaching design, making, and inventive thought at young ages will prepare kids for the jobs we can’t imagine now. With a bit of luck they’ll invent them.

2. What do you think will happen if the proposals to drop design become a reality?

I think a lot of people who wish it was still the 19th Century will be very happy – until they realise that they’ve undermined the UK’s place in the creation of business and culture for a generation.

Visit #includedesign, and if you can contribute your voice to the campaign, please do.

The problem with the ‘Internet of Things’… is the things.

Or to be more specific, the fetishisation of the things.

To be clear, I like things.

I even own some of them.

Also, my company enjoys making and selling things, and has plans to make and sell more.

However, in terms of the near-term future of technology – I’m not nearly as interested in making things as making spimes.


Spimes and the Internet Of Things get used interchangeably in discussion these days, but I think it’s worth making a distinction between things and spimes.

That distinction is of course best put by the coiner of the term, Bruce Sterling – in his book which is the cause of so much of this ruckus, “Shaping Things“.

I’m going to take three quotes defining the Spime from Shaping Things, as picked out by Tristan Ferne in, coincidentally, a post about Olinda.

“SPIMES are manufactured objects whose informational support is so overwhelmingly extensive and rich that they are regarded as material instantiations of an immaterial system. SPIMES begin and end as data. They are designed on screens, fabricated by digital means and precisely tracked through space and time throughout their earthly sojourn.” [Shaping Things, p.11]

“The key to the SPIME is identity. A SPIME is, by definition, the protagonist of a documented process. It is an historical entity with an accessible, precise trajectory through space and time.” [Shaping Things, p.77]

“In an age of SPIMES, the object is no longer an object, but an instantiation. My consumption patterns are worth so much that they underwrite my acts of consumption.” [Shaping Things, p. 79]

“…the object is no longer an object, but an instantiation” – this sticks with me.

A spime is an ongoing means, not an end the way a thing is.

As I say, I enjoy things, and working in a company where there are real product designers (I am not one).

A while ago, back when people, rather than just spambots, used to write comments on blogs, I wrote about the dematerialisation of product through the expansion of service-models into domains previously centred around product ownership.

It was partly inspired by Bruce’s last Viridian note, and John Thackara‘s writing on the subject amongst others.

But now I feel ‘Unproduct‘ is a bit one-sided.

The stuff I was struggling towards in negroponte switch has become more important.

The unmet (and often unstated) need for a physical ‘attention anchor’ or ‘service avatar’ as Mike Kuniavsky puts it in his excellent book ‘Smart Things‘.

Matter is important.


To which you quite rightly cry – “Well, duh!”

It is something we are attuned to as creatures evolved of a ‘middle world’.

It is something we invest emotion, value and memory in.

Also, a new language of product is possible, and important as the surface of larger systems.

Icebergs & Photons

I tried to pick at this with ‘Mujicomp‘.

A product design language for the tips of large service-icebergs: normalising legibility, fluency and thresholding.

Making beautiful seams.

Things that are clear, and evident – unmagical (magic implies opacity, occulting of meaning, mystery and hence a power-relationship) but delightful, humble, speaking-in-human, smart as a puppy.

And perhaps, just perhaps – by edging them toward being spimes, they can become fewer-in-number, better made, more adaptive to our needs and context, better at leaving our lives and being remade.

Another thing I’m re-evaluating is glowing rectangles.

I’ve long held somewhat of a [super]position that the more we can act and operate in and on-the-world rather than through a screen – the better.

I’m not sure it’s as clear as that anymore.

The technological and economic momentum of the glowing rectangle is such that, barring peak-indium or other yet-unseen black swans getting in the way, personally-owned screens full of software and sensors reacting to a ‘dumb’ physical world seem to be a safer bet for near-to-mid-term futures than ‘ubiquitous’ physical-computing based in the environment or municipal infrastructures.

A lot of friends are at an event right now called “Laptops & Looms”, debating exactly these topics.

Russell Davies, who organised it, wrote something recently that prompted this chain of thought, and I wish I could have been there to chat about this with him, as he’s usually got something wise to say on these matters.

Work commitments mean I can’t be there unfortunately, but I know they are querying and challenging some of the assumptions of the last decade of interaction design, technology and punditry as much as possible.

The hype about 3D printing, ubiquitous computing and augmented reality could really be grounded by the personal experiences of a lot of the people attending the event, who know the reality of working within those fields – they have practical experience of the opportunities they afford and the constraints they present. I really hope that there will be lots to read and digest from it.

Personally, returning to the source of some of these thoughts – Bruce’s Shaping Things – has been incredibly helpful. Just reminding oneself of the Wikipedia Cliff’s Notes on Spimes has been galvanising.

Physical products are fantastic things to think about and attempt to design.

And, bloody hard to do well.

But a new type of product, a new type of thing that begins and ends in data, and is a thing only occasionally – this is possible too, along with the new modes of consumption and commerce it may bring.

The network is as important to think about as the things.

The flows and the nodes. The systems and the surface. The means and the ends.

The phrase “Internet Of Things” will probably sound as silly to someone living in a spime-ridden future as 1990s visions of “Cyberspace” as a distinct realm we would ‘jack into’ seem to us now, as we experience the mundane-yet-miraculous influence of internet-connected smartphones on our ‘real’ geographies.

In that sense it is useful – as a provocation, and a stimulus to think new thoughts about the technology around us. It just doesn’t capture my imagination in the same way as the Spime did.

You don’t have to agree. I don’t have to be right. There’s a reason I’ve posted it here on my blog rather than that of my company. This is probably a rambling rant, useless to all but myself. It’s a bit of summing-up and setting-aside and starting again for me. This is going to be really hard, and it isn’t going to be done by blogging about it; it’s going to be done by doing.

This is just what I want to help do. Still.

Better shut up and get on with it.