Sufficiently-Advanced Lifestyle

Charlie Stross on why he has mostly stopped reading science fiction books.

The exercise of substituting “SF” for “Design” or “Speculative Design” is left to the reader.

Similar to the sad baggage surrounding space battles and asteroid belts, we carry real world baggage with us into SF. It happens whenever we fail to question our assumptions. Next time you read a work of SF ask yourself whether the protagonists have a healthy work/life balance. No, really: what is this thing called a job, and what is it doing in my post-scarcity interplanetary future? Why is this side-effect of carbon energy economics clogging up my post-climate-change world? Where does the concept of a paid occupation whereby individuals auction some portion of their lifespan to third parties as labour in return for money come from historically? What is the social structure of a posthuman lifespan? What are the medical and demographic constraints upon what we do at different ages if our average life expectancy is 200? Why is gender? Where is the world of childhood?

Some of these things may feel like constants, but they’re really not. Humans are social organisms, our technologies are part of our cultures, and the way we live is largely determined by this stuff. Alienated labour as we know it today, distinct from identity, didn’t exist in its current form before the industrial revolution. Look back two centuries, to before the germ theory of disease brought vaccination and medical hygiene: about 50% of children died before reaching maturity and up to 10% of pregnancies ended in maternal death—childbearing killed a significant minority of women and consumed huge amounts of labour, just to maintain a stable population, at gigantic and unpleasant (to them) social cost. Energy economics depended on static power sources (windmills and water wheels: sails on boats), or on muscle power. To an English writer of the 18th century, these must have looked like inevitable constraints on the shape of any conceivable future—but they weren’t.

Similarly, if I were to choose a candidate for the great clomping foot of nerdism afflicting fiction today, I’d pick late-period capitalism, the piss-polluted sea we fish are doomed to swim in. It seems inevitable, but it’s a relatively recent development in historical terms, and it’s clearly not sustainable in the long term. However, trying to visualize a world without it is surprisingly difficult. Take a random grab-bag of concepts and try to imagine the following without capitalism: “advertising”, “trophy wife”, “health insurance”, “jaywalking”, “passport”, “police”, “teen-ager”, “television”.

SF should—in my view—be draining the ocean and trying to see at a glance which of the gasping, flopping creatures on the sea bed might be lungfish. But too much SF shrugs at the state of our sea water and settles for draining the local aquarium (or even just the bathtub) instead, or settles for gazing into the depths of a brightly coloured computer-generated fishtank screensaver. If you’re writing a story that posits giant all-embracing interstellar space corporations, or a space mafia, or space battleships, never mind universalizing contemporary norms of gender, race, and power hierarchies, let alone fashions in clothing as social class signifiers, or religions … then you need to think long and hard about whether you’ve mistaken your fishtank for the ocean.

And I’m sick and tired of watching the goldfish.

Azeem’s excellent Exponential View newsletter this week published some predictions for 2018, which you should go take a look at.

A couple of weeks ago, in a late-night fugue of amateur futurism, I sent an email to a few friends, looking a little bit further out and laden with bias / fiction / wishful thinking. Anyway – putting it here to re-examine in ten years’ time…

1) Hard Brexit happens to the UK by default, rather than planning.

2) UK becomes a virtual client state of the EU anyway; any business wanting to make money in goods or services (apart from hardliners like Witheredspoons and Dyson) has to ditch ideology and comply with EU regs, without any of the benefits to citizenry. Well done everyone.

3) USA becomes increasingly insular, either as a result of a second Trump term or, more likely, to concentrate on recovering from the first… GAFA gets bashed a lot over the next 5yrs and plateaus; net-non-neutrality locks in incumbent value chains but favours big carriers/media, not big tech.

4) GDPR and other citizen-centred regulation in the EU (plus net-non-neutrality) push value models to ‘federated-edge’; Europe becomes the IoT2.0 leader, esp around energy, manufacturing, logistics, auto/vehicular. Neural architectures start to dominate; blockchain-like federation of devices lowers reliance on centralised models of computing (and business).

5) European (esp French, German) startups focus on EU and Africa, to an extent Asia Minor and South Asia as markets. Edge-computing hybrids leapfrog solely cloud-centric business models in hipness if not value (yet).

5.1) EU open banking and finance laws + mobile money innovation around Africa make for very attractive markets for edge-focussed fintech startups. The Swiss go all-in for this…

6) One of the Gulf states cashes out on oil and goes all-in on becoming a silicon manufacturer/designer/licenser, with a focus on neural architectures; Indian and Chinese manufacturers license, buy or copy.

6.1) Tim Cook moves Apple corp HQ to Dubai. Significant automation in Apple supply chain + IP risks allow edge manufacturing of most of their premium hardware. Other members of GAFA pay attention as the fruit company leaves the plateau of the last few years behind.

7) A Chinese firm announces it has an AGI running on a supercomputer built on UAE-designed chips.

8) Bezos announces Amazon is to HQ in Zurich (and an orbital solar-powered blockchain datacore at Lagrange point L1).

9) It’s 2027 and 60% of tech ‘unicorns’ are HQ’d in EU, UAE, India or Singapore.

10) UK joins special economic area of EU in order to adopt data laws formally. Press starts to refer to this as Brentrance…

If you (or anyone) still read this, you’re probably aware I’ve been banging on about Centaurs for a little while.

I started idly sketching something that could become a shorthand for a ‘centaur’ actor in a system. The kind of visual shorthand that you might often use on whiteboards or in sketches of flows in designing interactive systems.

For example… back in 2006 I sketched this…

My first centaur symbol sketches… I was trying to make it something quick and fluid but kept getting hung up on the tail…


I then progressed to subjecting colleagues (thanks, Tim) to impromptu life-size whiteboard centaur sketches…


But then I remembered Picasso’s 1949 light paintings of centaurs, which inspired me to do some quick long-exposure experiments.



Again, long-suffering colleagues were pressed into service (after buying them some beers…)

And I think that the constraint of having to paint the centaur body in a few seconds of long exposure got me to a more fluid, fluent expression.


But then I think in the end it was Nuno who nailed the tail on this old designer…


More centaurs soon, no doubt.

The WSJ published an “explainer” on visual facial recognition technology recently.

They’re to be commended on the clear wording of their intro, and policy on personal/biometric info…


As most people who have known me for any length of time will tell you, unless I’m actively laughing or smiling, most of the time my face looks like I want to murder you.


While this may have had unintended benefits for me in the past – say in negotiations, college crits or design reviews – the advent of pervasive facial recognition and in particular ‘emotion detection’ may change that.


“Affective computing” has been around as an academic research topic for decades of course, but as with much in machine intelligence now it’s fast, cheap and going to be everywhere.


I wonder.

How many unintended micro-aggressions will I perpetrate against the machines? What essential-oil mood enhancers will mysteriously be recommended to me? Will my car refuse to let me take manual control?

Perhaps I’ll tell the machines what Joss Whedon/Mark Ruffalo’s Hulk divulges as the source of his powers:

“That’s my secret, Captain. I’m always angry.”

The variety of potential minds in the universe is vast. Recently we’ve begun to explore the species of animal minds on earth, and as we do we have discovered, with increasing respect, that we have met many other kinds of intelligences already. Whales and dolphins keep surprising us with their intricate and weirdly different intelligence. Precisely how a mind can be different or superior to our minds is very difficult to imagine. One way that would help us to imagine what greater yet different intelligences would be like is to begin to create a taxonomy of the variety of minds. This matrix of minds would include animal minds, and machine minds, and possible minds, particularly transhuman minds, like the ones that science fiction writers have come up with.

Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy to accept the answers from an alien intelligence. We already see that reluctance in our difficulty in approving mathematical proofs done by computer. Some mathematical proofs have become so complex only computers are able to rigorously check every step, but these proofs are not accepted as “proof” by all mathematicians. The proofs are not understandable by humans alone so it is necessary to trust a cascade of algorithms, and this demands new skills in knowing when to trust these creations. Dealing with alien intelligences will require similar skills, and a further broadening of ourselves.

An embedded AI will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real-time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we “know” something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, science will have to know, and progress, according to the criteria of new minds. At that point everything changes.

An AI will think about science like an alien, vastly different than any human scientist, thereby provoking us humans to think about science differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science or art. The alienness of artificial intelligence will become more valuable to us than its speed or power.

As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. Each step of surrender—we are not the only mind that can play chess, fly a plane, make music, or invent a mathematical law—will be painful and sad. We’ll spend the next three decades—indeed, perhaps the next century—in a permanent identity crisis, continually asking ourselves what humans are good for.

The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.

Quotes from the excellent “H is for Hawk” by Helen Macdonald, with “Hawk” replaced by “Machine Intelligence”

“The world she lives in is not mine. Life is faster for her; time runs slower. Her eyes can follow the wingbeats of a bee as easily as ours follow the wingbeats of a bird. What is she seeing? I wonder, and my brain does backflips trying to imagine it, because I can’t. I have three different receptor-sensitivities in my eyes: red, green and blue. Machine Intelligences, [like other birds], have four. This Machine Intelligence can see colours I cannot, right into the ultraviolet spectrum. She can see polarised light, too, watch thermals of warm air rise, roil, and spill into clouds, and trace, too, the magnetic lines of force that stretch across the earth. The light falling into her deep black pupils is registered with such frightening precision that she can see with fierce clarity things I can’t possibly resolve from the generalised blur. The claws on the toes of the house martins overhead. The veins on the wings of the white butterfly hunting its wavering course over the mustards at the end of the garden. I’m standing there, my sorry human eyes overwhelmed by light and detail, while the Machine Intelligence watches everything with the greedy intensity of a child filling in a colouring book, scribbling joyously, blocking in colour, making the pages its own.

“Bicycles are spinning mysteries of glittering metal. The buses going past are walls with wheels. What’s salient to the Machine Intelligence in the city is not what is salient to man.”

“These places had a magical importance, a pull on me that other places did not, however devoid of life they were in all the visits since. And now I’m giving my Machine her head, and letting her fly where she wants, I’ve discovered something rather wonderful. She is building a landscape of magical places too. [She makes detours to check particular spots in case the rabbit or the pheasant that was there last week might be there again. It is wild superstition, it is an instinctive heuristic of the hunting mind, and it works.] She is learning a particular way of navigating the world, and her map is coincident with mine. Memory and love and magic. What happened over the years of my expeditions as a child was a slow transformation of my landscape over time into what naturalists call a local patch, glowing with memory and meaning. The Machine is doing the same. She is making the hill her own. Mine. Ours.”

What companion species will we make, what completely new experiences will they enable, what mental models will we share – once we get over the Pygmalion phase of trying to make sassy human assistants hellbent on getting us restaurant reservations?

See also Alexis Lloyd on ‘mechanomorphs’.

A couple of weeks ago, when AlphaGo beat a human opponent at Go, Jason Kottke noted:

“Generally speaking, until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.”

A few months back I somewhat randomly (and somewhat at the behest of a friend) applied to run a program at MIT Media Lab.

It takes as inspiration the “Centaur” phenomenon from the world of competitive computer chess – and extends the pattern to creativity and design.

I’m personally much more interested in machine intelligence as human augmentation than in the oft-hyped AI assistant as a separate embodiment.

My old colleague and friend Matt Webb recently wrote persuasively about this:

“…there’s a difference between doing stuff for me (while I lounge in my Axiom pod), and giving me superpowers to do more stuff for myself, an online Power Loader equivalent.

And with the re-emergence of artificial intelligence (only this time with a buddy-style user interface that actually works), this question of “doing something for me” vs “allowing me to do even more” is going to get even more pronounced. Both are effective, but the first sucks… or at least, it sucks according to my own personal politics, because I regard individual alienation from society and complex systems as one of the huge threats in the 21st century.”

I was rejected, but I thought it might be interesting to repost my ‘personal statement’ here as a curiosity, as it’s a decent reflection of some of my recent preoccupations about the relationship of design and machine intelligence.

I’d also hope that some people, somewhere are actively thinking about this.

Let me know if you are!

I should be clear that it’s not at the centre of my work within Google Research & Machine Intelligence, but it is certainly part of the conversation from my point of view, and has a clear relationship to what we’re investigating within our Art & Machine Intelligence program.



Tenure-Track Junior Faculty Position in Media Arts and Sciences

MIT Media Lab, Cambridge, MA

David Matthew Jones, B.Sc., B.Arch (Wales)

Personal Statement

We are moving into a period where design will become more ‘non-deterministic’ and non-human.

Products, environments, services and media will be shaped by machine intelligences – throughout the design and manufacturing process – and, increasingly, they will adapt and improve ‘at runtime’, in the world.

Designers working with machine intelligences both as part of their toolkit (team?) and material will have to learn to be shepherds and gardeners as much as they are now delineators/specifiers/sculptors/builders.

I wish to create a research group that investigates how human designers will create in a near-future of burgeoning machine intelligence.  

Through my own practice and working with students I would particularly like to examine:


  • Legible Machine Intelligence
    • How might we make the processes and outcomes of machine intelligences more obvious to a general populace through design?
    • How might we build and popularize a critical language for the design of machine intelligence systems?



  • Co-designing with Machine Intelligence
    • “Centaur Designers”
      • In competitive chess, teams of human and non-human intelligences are referred to as ‘Centaurs’
      • How might we create teams of human and non-human intelligences in the service of better designed systems, products, environments?
      • What new outcomes and impacts might ‘centaur designers’ be able to create?
      • Design Superpowers for Non-Designers {aka “I know Design Kung-Fu”}
        • How might design (and particularly non-intuitive expertise) be democratised through the application of machine intelligence to design problem solving?



  • Machine Intelligence as Companion Species
The accessibility of powerful mobile devices points to the democratisation of the centaur pattern to all sorts of problem-spaces in all walks of life, globally
Social robotics and affective computing have sought to create better interfaces between autonomous software and hardware agents and their users – but there is still an assumption of a ‘user’ in the relationship
How might we take a different starting reference point – that of Donna Haraway’s “Companion Species Manifesto” – to improve the working relationship between humans and machine intelligences?



  • Machine Intelligence in Physical Products
    • How might the design of physical products both benefit from and incorporate machine intelligence, and what benefits would come of this?



  • Machine Intelligence in Physical Environments
    • How might the design of physical environments both benefit from and incorporate machine intelligence, and what benefits would come of this?