Archive

Ubiquitous computing

Warning – this is a collection of half-formed thoughts, perhaps even more than usual.

I’d been wanting to write something for a while about Google Latitude, and the other location-sharing services that we (Dopplr) often get lumped in with. First of all, there was the PSFK Good Ideas Salon, where I was thinking about it (not very articulately); then shortly after that Google Latitude was announced, in a flurry of tweets.

At the time I myself blurted:

[screenshot of a tweet by Matt Jones: “i still maintain, perhaps…”]

My attitude to most Location-Based Services (or LBS in the ancient three-letter-acronymicon of the Mobile Industry) has been hardened by sitting through umpty-nine presentations by the white-men-in-chinos who maintain a fortune can be made by the first company to reliably send a passer-by a voucher for a cheap coffee as they drift past *bucks.

It’s also been greatly informed by working and talking with my esteemed erstwhile colleague Christopher Heathcote, who gave a great presentation at Etech (5 years ago!!! Argh!) called “35 ways to find your location“, and has, at both Orange and Nokia, sat through many of the same be-chino’d presentations.

Often, he’s pointed out, quite rightly, that location is a matter of routine. We’re in work, college, at home, at our corner shop, at our favourite pub. These patterns are worn into our personal maps of the city, and usually it’s the exceptions to them that we record, or share – a special excursion, or perhaps an unexpected diversion – pleasant or otherwise – that we want to broadcast for companionship, or assistance.

Also, most of the time, if I broadcast my location to trusted parties such as my friends, they may have limited opportunity to take advantage of that information – they, after all, are probably absorbed in their own routines, and by the time we rendezvous it would be too late.

Location-based services that have worked with this have had limited success – Dodgeball was perhaps situated software after all, thriving in a walkable bar-hopping subculture like that of Manhattan or Brooklyn, but probably not going to meet with the same results worldwide.

This attitude carried through to late 2006/early 2007 and the initial thinking for Dopplr – that by focussing on (a) nothing more granular than cities-as-place and days-as-time and (b) broadcasting future intention, we could find a valuable location-based service for a certain audience – surfacing coincidence for frequent travellers.

Point (a): taking cities and days as the grain of your service was, we thought, the sweet-spot. Once that ‘bit’ of information about the coincidence has been highlighted and injected into whichever networks you’re using, you can use those networks or other established communications methods to act on it: facebook, twitter, email, SMS or even voice…
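At that grain, surfacing coincidences is almost trivial – which was rather the point. A minimal sketch of the idea (the names, dates and data structures here are illustrative, not Dopplr’s actual implementation):

```python
from datetime import date, timedelta

def trip_days(city, start, end):
    """Expand one trip into its set of (city, day) 'bits' of information."""
    return {(city, start + timedelta(d)) for d in range((end - start).days + 1)}

# Illustrative shared trips: (traveller, city, first day, last day)
trips = [
    ("alice", "London", date(2009, 3, 2), date(2009, 3, 5)),
    ("bob",   "London", date(2009, 3, 4), date(2009, 3, 6)),
    ("carol", "Berlin", date(2009, 3, 4), date(2009, 3, 5)),
]

def coincidences(trips):
    """Return {(city, day): travellers} wherever two or more people overlap."""
    seen = {}
    for who, city, start, end in trips:
        for key in trip_days(city, start, end):
            seen.setdefault(key, set()).add(who)
    return {k: v for k, v in seen.items() if len(v) > 1}

overlaps = coincidences(trips)
# alice and bob coincide in London on the 4th and 5th of March
```

Everything after that – the nudge, the plan-shuffling, the pint – happens over the networks you already have.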

“Cities-and-days” also gave a fuzziness that allowed for flexibility and perhaps plausible deniability – ameliorating some of the awkwardness that social networks can unintentionally create (we bent over backwards to try and avoid that in our design decisions, with perhaps partial success).

In the latest issue of Wired, there’s a great example of the awkward situations broadcasting your current exact location could create:

“I explained that I wasn’t actually begging for company; I was just telling people where I was. But it’s an understandable misperception. This is new territory, and there’s no established etiquette or protocol.

This issue came up again while having dinner with a friend at Greens (37.806679 °N, 122.432131 °W), an upscale vegetarian restaurant. Of course, I thought nothing of broadcasting my location. But moments after we were seated, two other friends—Randy and Cameron—showed up, obviously expecting to join us. Randy squatted at the end of the table. Cameron stood. After a while, it became apparent that no more chairs would be coming, so they left awkwardly. I felt bad, but I hadn’t really invited them. Or had I?”

It also seemed like a layer in a stack of software enhancing the social use and construction of place and space – which we hoped would ‘handover’ to other more appropriate tools and agents at other scales of the stack. This hope was reinforced when we saw a few people taking to prefacing tweets about where they were about to go in the city with ‘microdopplr‘. We were also pleased to see the birth of more granular intention-broadcasting services such as Mixin and Zipiko, also from Finland.

This is also a reason that we were keen to connect with FireEagle (aside from the fact that Tom Coates is a good friend of both myself and Matt B.), in that it has the potential to act as a broker between elements in the stack, and in fact help weave the stack in the first place. At the moment, it’s a bit like being a hi-fi nerd connecting high-specification separates with expensive cabling (for instance, this example…), but hopefully an open and simple way to control the sharing of your whereabouts for useful purposes will emerge from the FE ecosystem or something similar.

Point (b) though, still has me thinking that sharing your precise whereabouts – where you are right now, has limited value.

This is a slide I’ve used a lot when giving presentations about Dopplr (for instance, this one last year at IxDA).

It’s a representation of an observer moving through space and time, with the future represented by the ‘lightcone’ at the top, and the past by the one at the bottom.

I’ve generally used it to emphasise that Dopplr is about two things – primarily optimising the future via the coincidences surfaced by people sharing their intended future location with people they trust, and secondly, increasingly – allowing you to reflect on your past travels with visualisations, tips, statistics and other tools, for instance the Personal Annual Reports we generated for everyone.

It also points out that the broadcasting of intention is something that necessarily involves human input – it can’t be automated (yet) – more on which later.

By concentrating on the future lightcone, sharing one’s intentions and surfacing the potential coincidences, you have enough information to make the most of them – perhaps changing plans slightly in order to maximise your overlap with a friend or colleague. It’s about wiggling that top lightcone around based on information you wouldn’t normally have in order to make the most of your time – at the grain of spacetime Dopplr operates at.

Google Latitude, Brightkite and, to an extent, FireEagle have made me think a lot about the grain of spacetime in such services, and how best to work with it in different contexts. Also, I’ve been thinking about cities a lot, in preparation for my talk at Webstock this week – inspired by Adam‘s new book, Dan’s ongoing mission to informationally refactor the city and the street, Anne Galloway and Rob Shields’ excellent “Space and Culture” blog, and the work of many others, including neogeographers par excellence Stamen.

I’m still convinced that hereish-and-soonish/thereish-and-thenish are the grain we need to be exploring rather than just connecting a network of the pulsing ‘blue-dot’.

Tom Taylor gave voice to this recently:

“The problem with these geolocative services is that they assume you’re a precise, rational human, behaving as economists expect. No latitude for the unexpected; they’re determined to replace every unnecessary human interaction with the helpful guide in your pocket.

Red dot fever enforces a precision into your design that the rest must meet to feel coherent. There’s no room for the hereish, nowish, thenish and soonish. The ‘good enough’.

I’m vaguely tempted to shutdown iamnear, to be reborn as iamnearish. The Blue Posts is north of you, about five minutes walk away. Have a wander around, or ask someone. You’ll find it.”

My antipathy to the here/now fixation in LBS led me to remix the lightcone diagram and post it to flickr, ahead of writing this ramble.

The results of doing so delighted and surprised me.

Making the most of hereish and nowish

In retrospect, it wasn’t the most nuanced representation of what I was trying to convey – but it got some great responses.

There was a lot of discussion around whether the cones themselves were the right way to visualise spacetime/informational futures-and-pasts, including my favourite from the ever-awesome Ben Cerveny:

“I think I’d render the past as a set of stalactites dripping off the entire hypersurface, recording the people and objects with state history leaving traces into the viewers knowledgestream, information getting progressively less rich as it is dropped from the ‘buffers of near-now”

Read the entire thread at Flickr – it gets crazier.

But, interwoven in the discussion of the Possibility Jellyfish, came comments about the relative value of place-based information over time.

Chris Heathcote pointed out that sometimes that pulsing blue dot is exactly what’s needed to collapse all the ifs-and-buts-and-wheres-and-whens of planning to meet up in the city.

Blaine pointed out that

“we haven’t had enough experience with the instantaneous forms of social communication to know if/how they’re useful.”

but also (I think?) supported my view about the grain of spacetime that feels valuable:

“Precise location data is past its best-by date about 5-10 minutes after publishing for moving subjects. City level location data is valuable until about two hours before you need to start the “exit city” procedures.”
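Blaine’s rule of thumb could almost be written down as a crude freshness check. A sketch – the thresholds are lifted straight from his quote, the function and its shape are entirely my own invention:

```python
def still_useful(grain, age_minutes, minutes_until_needed=0):
    """Crude freshness test for shared location data, by grain.

    Per the rule of thumb: precise fixes for moving subjects go stale
    roughly 10 minutes after publishing; city-level data stays useful
    as long as there are ~2 hours left before you need to start the
    'exit city' procedures.
    """
    if grain == "precise":
        return age_minutes <= 10
    if grain == "city":
        return minutes_until_needed >= 120
    raise ValueError(f"unknown grain: {grain}")

still_useful("precise", 5)                          # fresh blue dot
still_useful("precise", 30)                         # stale blue dot
still_useful("city", 30, minutes_until_needed=180)  # still plannable-against
```

Dopplr’s city/day grain sits comfortably on the long-lived side of that curve.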

Tom Coates, similarly:

“Using the now to plan for ten minutes / half an hour / a day in the future is useful, as is plotting and reflecting on where you’ve been a few moments ago. But on the other hand, being alerted when someone directly passes your house, or using geography to *trigger* things immediately around you (like for example actions in a gaming environment, or tool-tips in an augmented reality tool, or home automation stuff) requires that immediacy.”

He also pointed out my prejudice towards human-to-human sharing in this scenario:

“Essentially then, humans often don’t need to know where you are immediately, but hardware / software might benefit from it — if only because they don’t find the incoming pings distracting and can therefore give it their full and undivided attention.”

Some great little current examples of software acting on exact real-time location (other than the rather banal and mainstream satnav car navigation) are Locale for Android – a little app that changes the settings of your phone based on your location, or iNap, that attempts to wake you up at your rail or tube stop if you’ve fallen asleep on the commute home.
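Locale’s trick – letting software, not humans, act on the exact blue dot – is easy to sketch as a geofence rule: if the current fix falls inside a named circle, flip the phone’s profile. The coordinates, radii and profile names below are made up for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geofenced rules: (name, lat, lon, radius in metres, profile)
RULES = [
    ("office", 51.5155, -0.1340, 200, "silent"),
    ("home",   51.4613, -0.1156, 150, "loud"),
]

def profile_for(lat, lon, default="general"):
    """Return the phone profile for the first geofence containing the fix."""
    for _name, rlat, rlon, radius, profile in RULES:
        if haversine_m(lat, lon, rlat, rlon) <= radius:
            return profile
    return default
```

No human is ever asked to look at the dot; the immediacy is consumed entirely by the machine, which is exactly Tom’s point above.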

But to return to Mr. Coates.

Tom’s been thinking and building in this area for a long time – from UpMyStreet Conversations to FireEagle, and his talk at KiwiFoo on building products from the affordances of real-time data really made me think hard about here-and-now vs hereish-and-nowish.

Tom at Kiwifoo

Tom presented some of the thinking behind FireEagle, specifically about the nature of dealing with real-time data in products and services.

In the discussion, a few themes appeared for me – one was that of the relative value of different types of data waxing and waning over time, and that examining these patterns can give rise to product and service ideas.

Secondly, it occurred to me that we often find value in the second-order combination of real-time data, especially when visualised.

Need to think more about this certainly, but for example, a service such as Paul Mison’s “Above London” astronomical event alerts would become much more valuable if combined with live weather data for where I am.

Thirdly, bumping the visualisation up or down a scale. In the discussion at KiwiFoo I cited Citysense as an example of this – which Adam Greenfield turned me onto – where the aggregate real-time location of individuals within the city gives a live heatmap of which areas are hot or not, at least in the eyes of those who participate in the service.
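The Citysense move – bumping individual blue dots up a scale into a city-level picture – is, at its core, just spatial binning. A toy version, with an invented grid size and invented fixes:

```python
from collections import Counter

def heatmap(fixes, cell_deg=0.01):
    """Aggregate individual (lat, lon) fixes into grid-cell counts.

    A ~0.01 degree cell is roughly a few city blocks; each count is how
    many people are currently reporting a fix from inside that cell.
    """
    counts = Counter()
    for lat, lon in fixes:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        counts[cell] += 1
    return counts

# Two fixes in Soho, one up in Camden
fixes = [(51.5101, -0.1340), (51.5103, -0.1338), (51.5399, -0.1425)]
hot = heatmap(fixes)
hottest_cell, crowd = hot.most_common(1)[0]
```

The individual dots disappear into the aggregate – which is also where some of the privacy unease starts to soften.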

From the recent project I worked on at The Royal College of Art, Hiromi Ozaki’s Tribal Search Engine also plays in this area – but almost from the opposite perspective: creating a swarming simulation based on parameters you and your friends control to suggest a location to meet.

I really want to spend more time thinking about bumping things up-and-down the scale: it reminds me of one of my favourite quotes by the Finnish architect Eliel Saarinen:

“Always design a thing by considering it in its next larger context – a chair in a room, a room in a house, a house in an environment, an environment in a city plan.”

And one of my favourite diagrams:

[Stewart Brand’s ‘pace layering’ diagram]

It seems to me that a lot of the data being thrown off by personal location-based services are in the ‘fashion’ strata of Stewart Brand’s stack. What if we combined it with information from the lower levels, and represented it back to ourselves?

Let’s try putting jumper wires across the strata – circuit-bending spacetime to create new opportunities.

Finally, I said I’d come back to the claim that you can’t automate the future – yet.

In the Kiwifoo discussion, the group referenced the burgeoning ability of LBS systems to aggregate patterns of our movements.

One thing that LBS could do is serve to create predictive models of our past daily and weekly routines – as has been investigated by Nathan Eagle et al. in the MIT Reality Mining project.
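Because routine dominates our movements, a predictor of this kind can be surprisingly dumb and still work: just remember the most common place you’ve been seen at each (weekday, hour) slot. A toy sketch of that idea – not the Reality Mining team’s actual models, and with invented observations:

```python
from collections import Counter, defaultdict

class RoutineModel:
    """Predict location from past (weekday, hour) -> place observations."""

    def __init__(self):
        self.history = defaultdict(Counter)

    def observe(self, weekday, hour, place):
        self.history[(weekday, hour)][place] += 1

    def predict(self, weekday, hour, default="unknown"):
        slot = self.history.get((weekday, hour))
        return slot.most_common(1)[0][0] if slot else default

model = RoutineModel()
for week in range(2):                # two ordinary weeks of routine
    for weekday in range(5):         # Mon-Fri
        model.observe(weekday, 9, "office")
        model.observe(weekday, 22, "home")
model.observe(2, 9, "cafe")          # one Wednesday exception
```

The exceptions – the cafe on a Wednesday morning – are exactly the bits worth sharing; the model’s predictions are the boring baseline they stand out against.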

I’ve steered clear of the privacy implications of all of this, as it’s such a third-rail issue, but as I somewhat bluntly put it in my lightcone diagram, the aggregation of real-time location information is currently of great interest to spammers, scammers and spooks – but hopefully those developing in this space will follow the principles of privacy, agency and control of such information expounded by Coates in the development of FireEagle, and referenced in our joint talk “Polite, pertinent and pretty” last year.

The downsides are being discussed extensively, and they are there to be sure: imagined and unimagined, intended and unintended.

But, I can’t help but wonder – what could we do if we are given the ability to export our past into our future…?

Can I have a … ?, originally uploaded by straup.

This Saturday saw the first-ever PaperCamp successfully prototyped.

After an amount of last-minute panic, I think I stopped being stressed-out about 5 minutes into Aaron’s talk.

Instead I started to become delighted and fascinated by the strange, wonderful directions people are taking paper, printing and prototyping the lightweight, cheap connection of the digital and the physical.

Jeremy Keith did a wonderful job of liveblogging the event, and there is a growing pool of pictures in the papercamp group on flickr.

Highlights for me included the gusto that the group gave to making things with paper in a frenetic 10min session hosted by Alex of Tinker.it, Karsten‘s bioinformatic-origami-unicorn proposal, and the delightful work of Sawa Tanaka.

Also, the fact that we’ve made Craft Bioinformatic Origami Unicorns a tag on flickr has to be seen as a ‘win’ in my view.

Lots of people didn’t hear about this one, as I was deliberately trying to keep it a small ‘prototype’. We were also luckily operating as a ‘fringe’ event to the Bookcamp event that had been set up by Russell, Jeremy and James, and didn’t want to take the mickey too much (thanks guys) – so apologies to those who didn’t make it.

But, the enthusiastic response means we’ll definitely be doing this again, as a bigger, open, stand-alone event, maybe in the summer, with more space, more attendees and hopefully more heavy-duty printing and papermaking activities.

The next PaperCamp is going to be in NYC in early Feb, and I hear noises that there may be one gestating in San Francisco also…

Stay tuned, paperfans…

The conference cliché strikes again.

The highlights of my time at the Sarasota Design Summit were found in the spaces outside the formal sessions. One theme pervading the interstices, inspired by Dave Gray and Josh DiMauro, was the renaissance of paper as a medium in a mixed digital/physical world – as prototype spime.

Following Josh’s Paperbits work, Aaron’s Papernet thinking and Dave’s investigations of the changing form of books, we came up with a nascent plan for a PaperCamp – a weekend of hacking paper and its new possibilities.

I scrawled some ideas.

  • Way-new printing
  • Protospimes
  • Ingestion/Digestion/Representation
  • Bionic sketching
  • Folding/structure
  • Paper’s children

As per usual, I don’t really know what any of these mean exactly. It was kind of automatic writing.

But.

It does feel like there’s something here, and I’m really intrigued at what might happen at a papercamp(s).

Who’s with me?

Irving Street

Didn’t manage to get to designengaged this year in Montreal, but it seems they continued the tradition of an afternoon walk, semi-guided, to immerse oneself in the city you’re visiting and do some deep noticing.

There’s been a flurry of writing on the skill, innate or learned, of noticing. I like to think I have a little bit of the innate, but I’ve been *ahem* noticing that my increasingly mobile personal-informatics tool-cloud seems to be training me to notice more.

Location tracker and sports-tracker on my N95, FireEagle, Dopplr (+ Paul Mison‘s excellent mashup ‘Snaptrip‘) and of course Flickr are the main things helping me build up my own personal palimpsest of places.

I recently renewed my Flickr account. I have 19,404 pictures at the time of writing, from 4 or so years – and, though slow starting, 1,507 are now geotagged. This, to me, represents a deep pool of personal noticing.

Adam Greenfield has recently been presenting a fascinating flip-around of the original Eno conceit of the Big Here and the Long Now.

Adam talks of the ‘long here, big now’ where information overlaid on place creates a ‘long here‘ record of interactions with the place, and a ‘big now’ where we are never separated from our full-time intimate communities.

The long here that Flickr represents back to me is becoming only more fascinating and precious as geolocation starts to help me understand how I identify and relate to place.

The fact that Flickr’s mapping is now starting to relate location to me as best it can in human place terms is fascinating – they do a great job, and where it falls down, it falls down gracefully, inviting corrections and perhaps starting conversations.

Incidentally, I’m typing this with tea and toast in a little cafe on Irving Street called La Chandelle; across the street is a cafe called Little Italy.

Next door is “The Italian Restaurant” – is this London’s Little Italy? Why such a concentration of Italian restaurants here? How did it start? That statue at the end of the street is of Henry Irving, the actor. So, what was the street called before being rededicated, perhaps, to him?

What is the Long-here of Irving Street?

Robert Elms would have a field day. I used to love listening to his phone-in show, which was really all about ‘noticing’ between the music. Maxwell Hutchinson‘s roving reports, taxi drivers, lovers of mother London and its tapestry of history and trivia all contributed to a wonderful shaggy-dog-style story that would assemble about a place or a custom or a thing every morning. Perhaps the BBC and its new controller of archives will start investing in geolocated bionic noticing and storytelling?

But why the Little Italy on Irving Street? Why the clustering? I can’t ask Robert Elms’ future-bionic noticing community yet. I wish I could – the playful aggregation of the story of a place that tumbled through his shows would be just the sort of thing I would love to read or listen to right now, right here.

Apart from the tools of bionic noticing, this play of noticing is amplified by the web beautifully – flickr, outside.in, placeblogging, things like Iamnear.net – and increasingly ARGs and ‘BUGs’ – Big Urban Games making use of the increasing locative abilities of our devices and, perhaps more importantly, the increasing ownership of those devices.

For instance, I’m on Irving Street, noticing all this stuff, because my friend Alfie has staged a wonderful, casual locative game to raise awareness for XDRTB, where people follow clues embedded in blog posts like this one to places where they can find the game rewards. Alfie’s hoping the time is right for a whole lot more people to participate in these types of games, with the advent of mass adoption of location-aware mobiles like the iPhone.

I’ve written before about the dearth of casual BUGs. ‘Til now, they have often necessarily required an awful lot of staging and concentrated participation from a dedicated few.

Area/code’s Plundr was an early inflection point away from that. Alfie’s game isn’t quite at the Slow Urban Game stage I hoped for a few years ago, but it – and things like “And I saw” by Jaggeree – points the way towards a slower, more inclusive play with the city, based around the rich rewards of noticing, rather than competitive and basic game mechanics.

All of this, though, leaves me again reminded of Steven Johnson in Emergence, building on the thinking of the late, great Jane Jacobs, on the way that cities iterate on themselves, encouraging the clustering and gathering of businesses and communities – and hopefully, through Alfie’s efforts for XDRTB.org, a community made aware of and inspired to take up its cause.

As Johnson, Jacobs and Greenfield point out, our cities themselves are slow computers, but quickly our personal computers are becoming mobile and embedded within them, and as we play so our noticing superpowers grow…

I missed quite a lot of Picnic, mainly due to getting together with the Dopplr team for a rare physical pow-wow – but I did manage to spend a good chunk of the Friday in the Internet of Things special session.

Speakers included Rafi Haladjian of Violet/Nabaztag fame and David Orban of Widetag/OpenSpime, and there were demos from Tikitag, and Pachube (Usman Haque‘s excellent new venture).

Sat in the audience was the God-Emperor of Spime, Bruce Sterling, which lent it an extra something. I managed to snag a Tikitag starter kit, which I hope to have a play with this week – I’ll post some unboxing pics when I have the chance.

It was one of those sessions where the palpable sense of the scenius is the thing, rather than the content so much (although there was a lot of good stuff in there too) – I came away with renewed enthusiasm for ‘practical ubicomp’ and all things spime-y.

I wasn’t sure whether the talks were being videoed, so I recorded two of the speakers on my N95 – which means the quality of the audio isn’t particularly great.

So, with that disclaimer, here are the presentations by Matt Cottam of Tellart and Mike Kuniavsky of ThingM.

Jessica Helfand of Design Observer on Iron Man’s user-interfaces as part of the dramatis personae:

“…in Iron Man, vision becomes reality through the subtlest of physical gestures: interfaces swirl and lights flash, keyboards are projected into the air, and two-dimensional ideas are instantaneously rendered as three and even four-dimensional realities. Such brilliant optical trickery is made all the more fantastic because it all moves so quickly and effortlessly across the screen. As robotic renderings gravitate from points of light in space into a tangible, physical presence, the overall effect merges screen-based, visual language with a deftly woven kind of theatrical illusion.”

Made me think back to a post I wrote here about three years ago, on “invisible computing” in Joss Whedon’s “Firefly”.

Firefly touch table

“…one notices that the UI doesn’t get in the way of the action, the flow of the interactions between the bad guy and the captain. Also, there is a general improvement in the quality of the space, it seems – when there are no obtrusive vertical screens in line-of-sight to sap the attention of those within it.”

Firefly touch table

Instead of the Iron Man/Minority Report approach of making the gestural UI the star of the sequence, this is more interesting – a natural computing interface supports the storytelling – perhaps reminding the audience where the action is…

As Jessica points out in her post, it took some years for email to move from 3D-rendered winged envelopes to things that audiences had common experience of, and expectations about.

Three years on from Firefly, most of the audience watching sci-fi and action movies or TV will have seen or experienced a Nintendo Wii or an iPhone, and so some of the work of moving technology from star to set-dressing is done – no more outlandish or exotic as a setting for exposition than a whiteboard or map-table.

Having said that – we’re still in the midst of tangible UIs’ transition to the mainstream.

A fleeting shot from the new Bond trailer seems to indicate there’s still work for the conceptual UI artist, but surely this is now the sort of thing that really has a procurement number in the MI6 office-supplies intranet…

Bond Touch table

And – it’s good to see Whedon still pursuing tangible, gestural interfaces in his work…