Archive

Interface innovations

Umeå

I got invited to northern Sweden by the lovely folks at Umeå Institute of Design and Tellart.

Umeå Design School

It was a fantastic couple of days, where ideas were swapped, things were made and fine fun was had late into the sub-arctic evening…

Umeå

It was their first (and hopefully not the last) Spring Summit at the Umeå Institute of Design, entitled “Sensing and sensuality”.

Umeå Institute of Design Spring Summit, "Sensing and Sensuality"

I tried to come up with something on that theme – mainly half-formed thoughts that I hope I can explore some more here and elsewhere in the coming months.

It’s called “Data as seductive material” and the presentation with notes is on Slideshare, although I’ve been told that there will be video available of the entire day here with great talks from friends old and new.

Thank you so much to the faculty and students of Umeå Institute of Design, and mighty Matt Cottam of Tellart for the invitation to a wonderful event.

Apple’s iPhone 3.0 announcements caused a kerfuffle today, but it seems to me insane that the thing that’s being talked about most is… Cut and Paste?

At the time the event was running I summed my feelings up in <140 chars thusly:

[Embedded tweet from Matt Jones: “of course, while I’m shaki…”]

I mean – they’d announced that you could create custom UIs that worked with physical peripherals – they’d had someone from Johnson & Johnson on stage to show a diabetes sensor companion to the iPhone – the nearest thing to AP’s Charmr you could imagine!

Then my friend Josh said:

“Am now wondering whether a bluetooth/serial module and arduino will be able to talk with iPhone. And, Pachube…”

A rapid prototyping platform for physical/digital interactions? A mobile sensor platform for personal and urban informatics that’s going mainstream?
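That loop – a sensor chattering over a serial line, readings pushed up to a shared feed – is simple enough to sketch. Here’s a rough Python illustration; the “key:value” line format and the feed JSON shape are my own assumptions for the sake of the sketch, not Pachube’s actual API.

```python
# Sketch: parse the sort of "key:value" lines an Arduino might print
# over serial, and wrap them as a datapoint for a Pachube-style feed.
# Line format and feed structure are illustrative assumptions.
import json
import time

def parse_sensor_line(line):
    """Parse a line like 'temp:23.5,light:512' into a dict of floats."""
    readings = {}
    for pair in line.strip().split(","):
        key, _, value = pair.partition(":")
        if key and value:
            readings[key.strip()] = float(value)
    return readings

def as_feed_datapoint(readings, timestamp=None):
    """Wrap readings as JSON for a hypothetical feed API."""
    return json.dumps({
        "at": timestamp or time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "datastreams": [
            {"id": key, "current_value": value}
            for key, value in sorted(readings.items())
        ],
    })

print(parse_sensor_line("temp:23.5,light:512\n"))  # {'temp': 23.5, 'light': 512.0}
```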

Imagine – Apple Stores with shelves of niche, stylish sensor products for sale in a year’s time – pollution sensors, particulates analysis, spectroscopy, soil analysis, cholesterol? All for the price of a Nike+ or so?

Come on, that’s got to be more exciting than cut and paste?

—–
UPDATE

Tom Igoe points out in his comment correctly that I have been remiss in not mentioning Tellart’s NadaMobile project from late last year – which allows you to easily prototype physical/digital/sensor apps on the iPhone through a cable that cleverly connects to the audio jack. It’s also totally open-source.


Warning – this is a collection of half-formed thoughts, perhaps even more than usual.

I’d been wanting for a while to write something about Google Latitude, and the other location-sharing services that we (Dopplr) often get lumped in with. First of all, there was the PSFK Good Ideas Salon, where I was thinking about it (not very articulately); then shortly after that, Google Latitude was announced in a flurry of tweets.

At the time I myself blurted:

[Embedded tweet from Matt Jones: “I still maintain, perhaps…”]

My attitude to most Location-Based Services (or LBS in the ancient three-letter-acronymicon of the Mobile Industry) has been hardened by sitting through umpty-nine presentations by the white-men-in-chinos who maintain a fortune can be made by the first company to reliably send a passer-by a voucher for a cheap coffee as they drift past *bucks.

It’s also been greatly informed by working and talking with my esteemed erstwhile colleague Christopher Heathcote who gave a great presentation at Etech (5 years ago!!! Argh!) called “35 ways to find your location“, and has both at Orange and Nokia been in many of the same be-chino’d presentations.

Often, he’s pointed out, quite rightly, that location is a matter of routine. We’re in work, college, at home, at our corner shop, at our favourite pub. These patterns are worn into our personal maps of the city, and usually it’s the exceptions to them that we record, or share – a special excursion, or perhaps an unexpected diversion – pleasant or otherwise – that we want to broadcast for companionship, or assistance.

Also, most of the time – if I broadcast my location to trusted parties such as my friends, they may have limited opportunity to take advantage of that information – they, after all, are probably absorbed in their own routines, and by the time we rendezvous, it would be too late.

Location-based services that have worked with this have had limited success – Dodgeball was perhaps situated software after all, thriving in a walkable bar-hopping subculture like that of Manhattan or Brooklyn, but probably not going to meet with the same results worldwide.

This attitude carried through to late 2006/early 2007 and the initial thinking for Dopplr – that by focussing on (a) nothing more granular than cities-as-place and days-as-time and (b) broadcasting future intention, we could find a valuable location-based service for a certain audience – surfacing coincidence for frequent travellers.

Point (a): taking cities and days as the grain of your service was, we thought, the sweet spot. Once that ‘bit’ of information about the coincidence has been highlighted and injected into whichever networks you’re using, you can use those networks or other established communications methods to act on it: facebook, twitter, email, SMS or even, voice…

“Cities-and-days” also gave a fuzziness that allowed for flexibility and, perhaps, plausible deniability – ameliorating some of the awkwardness that social networks can unintentionally create (we bent over backwards to try and avoid that in our design decisions, with perhaps partial success).

In the latest issue of Wired, there’s a great example of the awkward situations broadcasting your current exact location could create:

“I explained that I wasn’t actually begging for company; I was just telling people where I was. But it’s an understandable misperception. This is new territory, and there’s no established etiquette or protocol.

This issue came up again while having dinner with a friend at Greens (37.806679 °N, 122.432131 °W), an upscale vegetarian restaurant. Of course, I thought nothing of broadcasting my location. But moments after we were seated, two other friends—Randy and Cameron—showed up, obviously expecting to join us. Randy squatted at the end of the table. Cameron stood. After a while, it became apparent that no more chairs would be coming, so they left awkwardly. I felt bad, but I hadn’t really invited them. Or had I?”

It also seemed like a layer in a stack of software enhancing the social use and construction of place and space – which we hoped would ‘handover’ to other more appropriate tools and agents in other scales of the stack. This hope became reinforced when we saw a few people taking to prefacing twitters broadcasting where they were about to go in the city as ‘microdopplr‘. We were also pleased to see the birth of more granular intention-broadcasting services such as Mixin and Zipiko, also from Finland.

This is also a reason that we were keen to connect with FireEagle (aside from the fact that Tom Coates is a good friend of both myself and Matt B.) in that it has the potential to act as a broker between elements in the stack, and in fact help weave the stack in the first place. At the moment, it’s a bit like being a hi-fi nerd connecting hi-specification separates with expensive cabling (for instance, this example…), but hopefully an open and simple way to control the sharing of your whereabouts for useful purposes will emerge from the FE ecosystem or something similar.

Point (b) though, still has me thinking that sharing your precise whereabouts – where you are right now, has limited value.

This is a slide I’ve used a lot when giving presentations about Dopplr (for instance, this one last year at IxDA).

It’s a representation of an observer moving through space and time, with the future represented by the ‘lightcone’ at the top, and the past by the one at the bottom.

I’ve generally used it to emphasise that Dopplr is about two things – primarily optimising the future via the coincidences surfaced by people sharing their intended future location with people they trust, and secondly, increasingly – allowing you to reflect on your past travels with visualisations, tips, statistics and other tools, for instance the Personal Annual Reports we generated for everyone.

It also points out that the broadcasting of intention is something that necessarily involves human input – it can’t be automated (yet) – more on which later.

By concentrating on the future lightcone, sharing one’s intentions and surfacing the potential coincidences, you have enough information to make the most of them – perhaps changing plans slightly in order to maximise your overlap with a friend or colleague. It’s about wiggling that top lightcone around based on information you wouldn’t normally have in order to make the most of your time – at the grain of spacetime Dopplr operates at.

Google Latitude, Brightkite and to an extent FireEagle have made me think a lot about the grain of spacetime in such services, and how best to work with it in different contexts. Also, I’ve been thinking about cities a lot, in preparation for my talk at Webstock this week – and inspired by Adam‘s new book, Dan’s ongoing mission to informationally refactor the city and the street, Anne Galloway and Rob Shields’ excellent “Space and Culture” blog and the work of many others, including neogeographers-par-excellence Stamen.

I’m still convinced that hereish-and-soonish/thereish-and-thenish are the grain we need to be exploring rather than just connecting a network of the pulsing ‘blue-dot’.

Tom Taylor gave voice to this recently:

“The problem with these geolocative services is that they assume you’re a precise, rational human, behaving as economists expect. No latitude for the unexpected; they’re determined to replace every unnecessary human interaction with the helpful guide in your pocket.

Red dot fever enforces a precision into your design that the rest must meet to feel coherent. There’s no room for the hereish, nowish, thenish and soonish. The ‘good enough’.

I’m vaguely tempted to shutdown iamnear, to be reborn as iamnearish. The Blue Posts is north of you, about five minutes walk away. Have a wander around, or ask someone. You’ll find it.”

My antipathy to the here/now fixation in LBS lead me to remix the lightcone diagram and post it to flickr, ahead of writing this ramble.

The results of doing so delighted and surprised me.

Making the most of hereish and nowish

In retrospect, it wasn’t the most nuanced representation of what I was trying to convey – but it got some great responses.

There was a lot of discussion around whether the cones themselves were the right way to visualise spacetime/informational futures-and-pasts, including my favourite from the ever-awesome Ben Cerveny:

“I think I’d render the past as a set of stalactites dripping off the entire hypersurface, recording the people and objects with state history leaving traces into the viewers knowledgestream, information getting progressively less rich as it is dropped from the ‘buffers of near-now’.”

Read the entire thread at Flickr - it gets crazier.

But, interwoven in the discussion of the Possibility Jellyfish, came comments about the relative value of place-based information over time.

Chris Heathcote pointed out that sometimes that pulsing blue dot is exactly what’s needed to collapse all the ifs-and-buts-and-wheres-and-whens of planning to meet up in the city.

Blaine pointed out that

“we haven’t had enough experience with the instantaneous forms of social communication to know if/how they’re useful.”

but also (I think?) supported my view about the grain of spacetime that feels valuable:

“Precise location data is past its best-by date about 5-10 minutes after publishing for moving subjects. City level location data is valuable until about two hours before you need to start the “exit city” procedures.”
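Blaine’s cutoffs suggest a toy shelf-life model for shared location, which can be sketched in a few lines of Python. The “precise” and “city” numbers come straight from his comment above; the “intention” figure, for a Dopplr-style future trip, is my own illustrative guess.

```python
# Toy model: how long a shared location stays worth acting on depends
# on its grain. Cutoffs for "precise" and "city" are from Blaine's
# comment; "intention" is an invented figure for future-trip sharing.
def location_still_useful(grain, age_minutes):
    """Is a shared location of the given grain still worth acting on?"""
    shelf_life = {
        "precise": 10,             # a moving subject leaves a point fix behind fast
        "city": 120,               # holds until you must start "exit city" procedures
        "intention": 7 * 24 * 60,  # a declared future trip stays useful for days
    }
    return age_minutes <= shelf_life[grain]

print(location_still_useful("precise", 30))  # False: the blue dot has moved on
print(location_still_useful("city", 30))     # True: still time to act on it
```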

Tom Coates, similarly:

“Using the now to plan for ten minutes / half an hour / a day in the future is useful, as is plotting and reflecting on where you’ve been a few moments ago. But on the other hand, being alerts when someone directly passes your house, or using geography to *trigger* things immediately around you (like for example actions in a gaming environment, or tool-tips in an augmented reality tool, or home automation stuff) requires that immediacy.”

He also pointed out my prejudice towards human-to-human sharing in this scenario:

“Essentially then, humans often don’t need to know where you are immediately, but hardware / software might benefit from it — if only because they don’t find the incoming pings distracting and can therefore give it their full and undivided attention..”

Some great little current examples of software acting on exact real-time location (other than the rather banal and mainstream satnav car navigation) are Locale for Android – a little app that changes the settings of your phone based on your location – and iNap, which attempts to wake you up at your rail or tube stop if you’ve fallen asleep on the commute home.
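Both apps boil down to the same pattern: software, not humans, consuming the precise fix, with a rule that fires when the incoming position lands within some radius of a named place. A minimal sketch – the coordinates, radii and actions below are invented for illustration:

```python
# Sketch of the Locale/iNap pattern: fire a rule when the current fix
# falls inside a geofence. Places, radii and actions are made up.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def triggered_rules(position, rules):
    """Return the actions of every rule whose geofence contains position."""
    lat, lon = position
    return [
        rule["action"]
        for rule in rules
        if haversine_m(lat, lon, rule["lat"], rule["lon"]) <= rule["radius_m"]
    ]

rules = [
    {"lat": 51.5226, "lon": -0.0790, "radius_m": 200,
     "action": "silence ringer"},            # arriving at the office
    {"lat": 51.5465, "lon": -0.1058, "radius_m": 300,
     "action": "sound alarm: your stop"},    # approaching home station
]
print(triggered_rules((51.5227, -0.0791), rules))  # ['silence ringer']
```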

But to return to Mr. Coates.

Tom’s been thinking and building in this area for a long time – from UpMyStreet Conversations to FireEagle, and his talk at KiwiFoo on building products from the affordances of real-time data really made me think hard about here-and-now vs hereish-and-nowish.

Tom at Kiwifoo

Tom presented some of the thinking behind FireEagle, specifically about the nature of dealing with real-time data in products and services.

In the discussion, a few themes appeared for me – one was the relative value of different types of data waxing and waning over time, and how examining these patterns can give rise to product and service ideas.

Secondly, it occurred to me that we often find value in the second-order combination of real-time data, especially when visualised.

Need to think more about this certainly, but for example, a service such as Paul Mison’s “Above London” astronomical event alerts would become much more valuable if combined with live weather data for where I am.

Thirdly, bumping the visualisation up-or-down a scale. In the discussion at KiwiFoo I cited Citysense as an example of this – which Adam Greenfield turned me onto – where the aggregate real-time location of individuals within the city gives a live heatmap of which areas are hot-or-not, at least in the eyes of those who participate in the service.
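The Citysense-style aggregation is easy to caricature in code: snap each individual fix to a coarse grid cell and count heads per cell – the blue dot bumped up a scale into a city heatmap. The grid resolution and sample fixes below are illustrative:

```python
# Sketch: aggregate individual fixes into a coarse grid, Citysense-style.
# Grid resolution and the sample points are invented for illustration.
from collections import Counter

def heatmap(points, cells_per_degree=100):
    """Snap each (lat, lon) fix to a grid cell and count heads per cell."""
    return Counter(
        (round(lat * cells_per_degree), round(lon * cells_per_degree))
        for lat, lon in points
    )

fixes = [(51.501, -0.101), (51.502, -0.102), (51.600, -0.100)]
print(heatmap(fixes).most_common(1))  # [((5150, -10), 2)] – the busiest cell
```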

From the recent project I worked on at The Royal College of Art, Hiromi Ozaki’s Tribal Search Engine also plays in this area – but almost from the opposite perspective: creating a swarming simulation based on parameters you and your friends control to suggest a location to meet.

I really want to spend more time thinking about bumping things up-and-down the scale: it reminds me of one of my favourite quotes by the Finnish architect Eliel Saarinen:

[Image: Eliel Saarinen quote]

And one of my favourite diagrams:

[Image: Stewart Brand’s pace-layering diagram]

It seems to me that a lot of the data being thrown off by personal location-based services sits in the ‘fashion’ stratum of Stewart Brand’s stack. What if we combined it with information from the lower levels, and represented it back to ourselves?

Let’s try putting jumper wires across the strata – circuit-bending spacetime to create new opportunities.

Finally, I said I’d come back to the claim that you can’t automate the future – yet.

In the Kiwifoo discussion, the group referenced the burgeoning ability of LBS systems to aggregate patterns of our movements.

One thing that LBS could do is serve to create predictive models of our past daily and weekly routines – as has been investigated by Nathan Eagle et al in the MIT Reality Mining project.

I’ve steered clear of the privacy implications of all of this, as it’s such a third-rail issue. But, as I somewhat bluntly put it in my lightcone diagram, the aggregation of real-time location information is currently of great interest to spammers, scammers and spooks – hopefully those developing in this space will follow the principles of privacy, agency and control of such information expounded by Coates in the development of FireEagle, and referenced in our joint talk “Polite, pertinent and pretty” last year.

The downsides are being discussed extensively, and they are there to be sure: imagined and unimagined, intended and unintended.

But, I can’t help but wonder – what could we do if we are given the ability to export our past into our future…?


Wikidashboard

Wikidashboard’s a fantastic project from PARC, which I found today via Waxy.

It displays and visualises the change history of an article in-situ. As Waxy says, it would be great to have a Greasemonkey script which placed this information on the pages of Wikipedia proper.

Except – it’s too much for me.

I want it to be glanceable-not-pore-over-able at this level, just giving me a swift indication of the volatility of the entry.

In early 2005, I proposed tiny sparklines based on the HistoryFlow visualisation project that could be seen in-situ to give a very quick feel of the change history of an article.

Proposal for 'History Flow' sparkline graphics for Wikipedia

Since then, sparklines have become almost part of the furniture in services like Flickr, Google Analytics and *ahem* Dopplr. There are libraries and refinements galore for such things. Putting something like this together using Wikidashboard, Greasemonkey and one of those libraries would be well within the reach of an enterprising geek, I would have thought…
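For what it’s worth, the core of the sparkline itself is tiny – here’s a sketch that reduces a bucketed edit history to a row of Unicode block characters, glanceable next to an article title. The per-week edit counts are invented sample data:

```python
# Sketch: a text sparkline showing an article's edits-per-week at a
# glance. The bucketed counts below are invented sample data.
BLOCKS = " ▁▂▃▄▅▆▇█"

def sparkline(counts):
    """Map a list of counts onto Unicode block characters."""
    peak = max(counts) or 1  # avoid dividing by zero on an all-quiet history
    return "".join(BLOCKS[round(c / peak * (len(BLOCKS) - 1))] for c in counts)

# A volatile entry versus a settled one:
print(sparkline([1, 2, 40, 35, 3, 2, 50, 4]))
print(sparkline([3, 2, 3, 2, 2, 3, 2, 2]))
```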

So, can I have it?



WALKING CITY, originally uploaded by blackbeltjones.

Jonathan Feinberg emailed me and said “Inspired by your typographically sophisticated ‘hand-tooled’ cloud, I came up with a novel way of cramming a bunch of words together” – which is undeserved praise for me, and dramatically undersells what he’s achieved with Wordle.
It does the simple and much-abused thing of creating a tag-cloud, and executes it playfully and beautifully. There are loads of options for type and layout, and it’s enormous fun to fiddle with.
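Underneath the playful layout sits a very plain mechanism: count word frequencies and map each count to a font size. A minimal sketch – the stop-word list and the scaling are my own illustrative assumptions, not how Wordle actually works:

```python
# Sketch of the tag-cloud mechanism: word frequency mapped to font size.
# Stop words and the size range are illustrative assumptions.
from collections import Counter

STOP_WORDS = {"the", "a", "and", "of", "to", "in"}

def tag_weights(text, min_size=10, max_size=48):
    """Map each word to a font size proportional to its frequency."""
    words = [w for w in text.lower().split() if w not in STOP_WORDS]
    counts = Counter(words)
    peak = max(counts.values())
    return {
        word: min_size + (max_size - min_size) * count // peak
        for word, count in counts.items()
    }

weights = tag_weights("walking city walking city walking archigram")
print(weights["walking"])  # 48: the most frequent word gets the biggest type
```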
As I said back when Kevan Davies did his delicious phrenology visualiser, there is some apophenic pleasure in scrying your tag cloud and seeing the patterns there – so I was very pleased when my playing with Wordle returned me an Archigram-esque walking city of things I’ve found interesting.
Congrats to Jonathan on building and finally releasing Wordle!

A week or so ago, Ryan of Adaptive Path conducted a long, looping interview with me over IM where we covered the above and beyond.

Of course, this was meant to be something punchy, level-headed and action-packed as a promotion for their upcoming MX event, where people want to hear about the business-like practicalities and opportunities of ‘design thinking’ etc.

Instead they got something that Peter accurately described as ‘DVD-extras’, and I’m pretty comfortable with that.

For me, at least, and YMMV of course – crispy, crunchy blue-shirt and chinos bullet-points don’t do it. Design, invention and making comes out of play, punning and rambling on – generative, diverging and looping and splicing.

I’m very glad that Ryan decided to do the interview in IM, rather than emailing me questions that I could respond to as if in an exam. It’s a fun mess, that I’m glad to say Peter returned to and found a seed of something to advance further himself: the influence that our new ability of visualising shared behaviours has on our old ability as a social species to flock.

I’m hoping that my talk at MX will have a little more discipline to it, but still have enough DVD extras there for people to pick out and run with. If you register for MX and use the discount code AP have given me – “MXMJ” – you’ll get 15% off the registration price…

Lamenting lost futures is not that productive, but it doesn’t stop me enjoying it. Whether it’s the pleasure of reading Ellis’s “Ministry of Space” and thinking “what if?” or looking through popculture futures past as in this Guardian article – it’s generally a sentimental, but thought-provoking activity.

Recently, though, I’ve been thinking about a temporarily lost future that’s closer to home in the realm of mobile UI design. That’s the future that’s been perhaps temporarily lost in the wake of the iPhone’s arrival.

A couple of caveats.

Up until June this year, I worked at Nokia in a team that created prototype UIs for the Nseries devices, so this could be interpreted as sour grapes, I suppose… but I own an iPod Touch, which uses more-or-less the same UI/OS, and love it.

I spoke at SkillSwap Bristol in September (thanks to Laura for the invite) and up until the day I was travelling to Bristol, I didn’t know what I was going to say, but I’d been banging on at people in the pub (esp. Mr. Coates) about the iPhone’s possible impact on interface culture, so I thought I’d put together some of those half-formed thoughts for the evening’s debate.

The slides are on Slideshare (no notes, yet) but the basic riff was that the iPhone is a beautiful, seductive but jealous mistress that craves your attention, and enslaves you to its jaw-dropping gorgeousness at the expense of the world around you.

[Image: slide from the SkillSwap Bristol talk]

This, of course, is not entirely true – but it makes for a good starting point for an argument! Of course, nearly all our mobile electronic gewgaws serve in some small way or other to take us away from the here and now.

But the flowing experience just beyond Jony Ive’s proscenium chrome does have a hold more powerful than perhaps we’ve seen before. Not only over users, but over those deciding product roadmaps. We’re going to see a lot of attempts to vault the bar that Apple have undoubtedly raised.

Which, personally, I think is kind-of-a-shame.

First – a (slightly-bitter) side-note on the Touch UI peanut gallery.

In recent months we’ve seen Nokia and Sony Ericsson show demos of their touch UIs. To which the response on many tech blogs has been “It’s a copy of the iPhone”. In fact, even a Nokia executive responded that they had ‘copied with pride’.

That last remark made me spit with anger – and I almost posted something very intemperate as a result. The work that all the teams within Nokia had put into developing touch UI got discounted, just like that, with a half-thought-through response in a press conference. I wish that huge software engineering outfits like S60 could move fast enough to ‘copy with pride’.

Sheesh.

Fact of the matter is, if you have roughly the same component pipeline, and you’re designing an interface used on-the-go by (human) fingers, you’re going to end up with a lot of the same UI principles.

But Apple executed first, and beautifully, and they win. They own it, culturally.

Thus ends the (slightly-bitter) side-note – back to the lost future.

Back in 2005, Chris and I gave a talk at O’Reilly Etech based on the work we were doing on RFID and tangible, embodied interactions with Janne Jalkanen, heavily influenced by the thinking of Paul Dourish in his book “Where the Action Is”, where he advances his argument for ‘embodied interaction’:

“By embodiment, I don’t mean simply physical reality, but rather, the way that physical and social phenomena unfold in real time and real space as a part of the world in which we are situated, right alongside and around us.”

I was strongly convinced that this was a direction that could take us down a new path – away from recreating desktop computer UIs on smaller and smaller surfaces, and towards an alternative future for mobile interaction design that would be more about ‘being in the world’ than being in the screen.

That seems very far away from here – and although development in sensors and other enablers continues, and efforts such as the interactive gestures wiki are inspiring – it’s likely that we’re locked into pursuing very conscious, very gorgeous, deliberate touch interfaces – touch-as-manipulate-objects-on-screen rather than touch-as-manipulate-objects-in-the-world for now.

But, to close, back to Nokia’s S60 touch plans.

Tom spotted it first. In their (fairly-cheesy) video demo, there’s a flash of something wonderful.

Away from the standard finger and stylus touch stuff there’s a moment where a girl is talking to a guy – and doesn’t break eye contact, doesn’t lose the thread of conversation; just flips her phone over to silence and reject a call. Without a thought.
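That gesture reduces to a very small piece of state logic: watch the accelerometer’s z-axis and treat a swing from roughly +1g (face-up) to roughly -1g (face-down) as a flip, then silence the ringer. The samples and threshold below are invented for illustration:

```python
# Sketch of flip-to-silence: detect a face-up to face-down flip from
# accelerometer z-axis samples. Readings and threshold are invented.
def detect_flip(z_samples, threshold_g=0.7):
    """Return True if the samples show a face-up to face-down flip."""
    was_face_up = False
    for z in z_samples:
        if z > threshold_g:
            was_face_up = True
        elif z < -threshold_g and was_face_up:
            return True
    return False

# A phone lying face-up, then turned over mid-ring:
print(detect_flip([0.98, 0.97, 0.40, -0.30, -0.95]))  # True
print(detect_flip([0.98, 0.97, 0.96]))                # False
```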

Being in the world: s60 edition from blackbeltjones on Vimeo.

As Dourish would have it:

“interacting in the world, participating in it and acting through it, in the absorbed and unreflective manner of normal experience.”

I hope there’s a future in that.


After reading Jane’s post about using time people spend fiddling with Facebook for solving problems with other (gaming) networks, I wondered whether there were other things you could do with all those idle hands.

What about Folding@home or Mechanical Turk tasks, as shown rather sketchily above.

Back in May, referring to Sony’s announcement that the folding@home client would be installed on the PS3, Alice wrote about “Games that do good”:

“Are there games or game mechanics that could be used to fund-raise or awareness-raise?”

My quick mock-up is not all that enticing or interesting, though touches like sparklines, league tables and scoring could rapidly turn such things into a more playful and engaging activity, turning all those idle hands to good causes.

Know of anything like this going on?

Nokia Design: Explore Concept 2012 on Vimeo

Concept work here by the lovely people in our Calabasas studio illustrating what Nokia Nseries could do in 2012.

Just the device to have around for the end of the Mayan Calendar and the arrival of TimeWave-Zero/Barbelith/VALIS/The Solar Maximum/Whatever.

Our team was peripherally involved in brainstorming it with them, but they have put together a rather lovely thing here. There had to be a Welshman involved…
