Contextualising online diaries

In the process of archiving some old Media Lab class work, I ran across notes I had made, and forgotten about, on “moblogging, temporal and spatial rhythms and visualisation”.

While my final class paper for Judith Donath’s ‘Designing Sociable Media’ is an interesting enough read, looking back it’s these notes that, I find, provide the clearer narrative of the subjects I studied in that class.

Using PRoPs for presence

Published in 2000, The Robot in the Garden is a collection of essays edited by Ken Goldberg on telerobotics and telepistemology – ‘the study of knowledge acquired at a distance’. While many of the texts feel a little taken with the novelty of the internet, it remains a succinct review of the foundations of the field.

In their chapter, John Canny and Eric Paulos describe PRoPs – Personal Roving Presence devices which allow them to explore to what extent manifestations of computer mediated presence can be effective in placing distant participants into the social and physical context of a space.

PRoPs need not be realistic portraits of humans because our motor-intentional behaviors are flexible. Our PRoPs are cubist statues, with rearrangements of face and arms, and separation of eyes from gaze … dictated by function and engineering constraint.

Their devices are relatively simple – a conferencing system mounted at eye level on a Roomba-like wheeled base, or a helium-filled blimp that can navigate a space – but they allowed for social experiments into the psychology of interactions mediated through a mechanically extended body. Canny and Paulos have long since moved on to many other projects, but their research approach remains a pertinent and valid one.

Simplicity class notes

Tangible interfaces for a bar
Tangible interfaces - simplicity studio class

For some reason I never got round to making a contents page for my web responses to the class tasks set by John Maeda, Hiroshi Ishii and Chris Csikszentmihalyi in the Simplicity design studio I took while I was a student at the MIT Media Lab. The studio was John’s Simplicity consortium in the making – exploring ideas of what simplicity can mean. From the simplicity class page:

Intellectual Goal:
To develop a method for making concrete the process of designing for simplicity across interaction, aesthetic, engineering, and cultural concerns.

Core methods tested, debugged, and invented together with exercises from Design Fellows and Instructors. Skills culminate in a final competition of small teams.

So here they are:

  • P1 more to less to more to less – creating visual scales of More to Less (set by John)
  • P2 haiku to concept – write haikus and create conceptual pieces based on them (Chris)
    • frozen chicken bird feeder was the highlight for me
  • P3 two parts rum – sketch a tangible interface (Hiroshi)
    • I proposed a cocktail mixing bar projection that would augment the bar top with instructions and advice
  • P4 weather reports – after a presentation from Alexander Gelman and a look at the IDEO design methodology, we were asked to design interfaces for weather information
  • P5 Tablepaper™ – after a session with Charlie Lazor we were asked to re-design a product that doesn’t ‘work right’
    • I decided to redesign placemats as a disposable magazine format for reading, decoration and note taking while eating
  • P6 A onedo flutter – I forget exactly what the brief was for this one, something about process I think
    • I’ve always wanted to make an animation using bank notes. One frame on each note, spending the artwork after scanning it in. I chose ubiquitous materials (spray paint, money, porn, halftone print) and made each frame unrecognisable. It is only in motion that the result is clear. I loved Hiroshi’s feedback in this class – he said the low resolution animation on $1 bills suggested higher resolution on higher denominations.
    • Check out the money animation making-of images
  • P7 hello … hi … hi … er … hello … – create an algorithmic system for generating sound, images or motion
    • I chose sound. My piece involved standing in the MIT infinite corridor with a microphone and recording the first utterance from each person who passed (mp3). This recording was later published in the Ephemera issue of Thresholds magazine.
  • Exhibition
    • Our final exhibition, in the foyer of E15, used simple packaging (brown paper labels and boxes) as its theme. I designed the poster – you can see it on p11 of this pdf of some of my visual work at the lab.
    • My first piece in the exhibition was a brass map using the Buckminster Fuller projection that could be carried in the pocket. The idea was that over years, like a favourite sculpture, the map would be polished smooth in the regions the user pointed to most often. I’ll see if I can find a picture of this.
    • My second piece was a video of faces to accompany my audio recording, installed inside one of the boxes in the exhibition.

Robotic mounts need roll-cages

When I was based in Jim Henson’s old Creature Shop in Camden (currently in the process of becoming a swanky block of flats), working with Dave Housman, I learnt why it would be useful for motors to feel pain.

The Big One was a puppet control system developed by the Creature Shop to control complex motorised animatronics in a simpler way. Before the new control system, puppets that could not be controlled directly by the hand-up-arse method had many servo motors in them, each wired to a dial or lever. It would take a team of people, each controlling one or two motors, to coordinate a simple smile. What the Big One did was provide a controller with many degrees of freedom – basically a sensor-filled Kermit the Frog – that let a single performer map the movement of their hand in the control glove to the motors in the puppet (this later evolved into full computer mapping from glove and joystick movement to the expressions of puppets both real and digital in the Henson Performance Control System).
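The heart of that trick – one performer’s hand driving what used to take a team of dial operators – is a one-to-many calibration mapping. Here is a minimal sketch of the idea in Python; the sensor ranges, servo names and travel limits are invented for illustration, not taken from any Henson system.

```python
# Sketch: one glove sensor reading drives several servos at once,
# each within its own calibrated safe range of travel.
# All names and calibration values here are invented for illustration.

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a raw sensor reading into a servo's travel range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))  # clamp, so a wild reading can't command
    return out_lo + t * (out_hi - out_lo)  # an impossible position

# One glove axis (jaw flex, raw 0-1023) drives two servos together.
JAW_MAPPING = [
    # (servo_name, min_degrees, max_degrees)
    ("jaw_left",  10.0, 80.0),
    ("jaw_right", 10.0, 80.0),
]

def jaw_commands(raw_flex):
    """Turn one raw glove reading into per-servo angle commands."""
    return {name: map_range(raw_flex, 0, 1023, lo, hi)
            for name, lo, hi in JAW_MAPPING}
```

The clamp in `map_range` matters: without it, a sensor glitch or a hand yanked out of the glove would be faithfully replicated by the motors – which is exactly the failure mode described below.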

The problem was, if you ever pulled your hand out of the controller at the end of a good take on a film set, for example, and let the jaw of the controller drop away suddenly, the motors in the giant dragon face in front of you would faithfully replicate the movement. Puppets could rip themselves apart – the motors were generally strong enough to do the damage. Similarly, when a control system under test was powered down and back on, it was possible – depending on how carefully the electronics and code had been thought out – for a motor to reset to an extreme position. Again, that position could be beyond the physical range of the puppet, leading to broken jaws or limbs impaling themselves.

I am reminded, nervously, of these reset conditions when watching this video of an extreme thrill ride test using an industrial robotic arm. I know the developers probably ran the arm many times before climbing in, and that the accuracy of these industrial systems must be extremely high. But unless you put sensors in, like the doors on a lift, machines just don’t stop.

One answer is to give robotic limbs pain: embed sensors into the joints that detect strain and limit the power of the motors, just as our own bodies prevent us from breaking ourselves most of the time. A second solution is to borrow from another machine that can have dramatic failure modes – the sports car. A roll-cage is a structural defence – something strong enough to withstand your system going wrong. If I were testing an industrial robotic mount, I would want a roll-bar.
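The “pain” idea above amounts to a reflex in the control loop: as a joint’s strain reading approaches its limit, commanded motor power fades to zero before anything tears. A minimal sketch, with the threshold and scaling values invented for illustration:

```python
# Sketch of "pain" for a motor: as joint strain approaches a limit,
# fade the commanded power to zero, like a reflex pulling back.
# Threshold and units here are invented for illustration.

STRAIN_LIMIT = 0.8  # normalised strain above which we start to flinch

def limited_power(requested_power, strain):
    """Scale requested motor power down as strain nears breaking point.

    requested_power: -1.0..1.0; strain: 0.0 (relaxed) .. 1.0 (breaking).
    """
    if strain <= STRAIN_LIMIT:
        return requested_power
    # Fade linearly from full power at the limit to zero at full strain.
    headroom = max(0.0, (1.0 - strain) / (1.0 - STRAIN_LIMIT))
    return requested_power * headroom
```

Note the fade is gradual rather than a hard cut-off – an abrupt stop at the threshold would itself jerk the mechanism, which is the very thing we are guarding against.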

Robot Lab – industrial robots in public spaces

We’re pretty familiar as a society now with the mechanised factory arms and pick-and-place circuit-board robots of everyday assembly. Most of our knowledge comes from car ads of the last few years, which picture the modern car production line as a futuristic disco environment where every car is perfect.

However, Matthias Gommel, Martina Haitz and Jan Zappe of Robot Lab argue that until recently the public have had little opportunity to observe and understand industrial robotics – to develop a feel for their potential and limitations. The Robot Lab projects address this, placing a number of standard robotic arms into gallery and new media exhibition settings.

Up until now, people haven’t had the chance to meet robots neither in public nor in private spaces. robots are mostly situated in special industrial spaces, therefore humans do not have contact with them, do not experience how they behave and do not know how to behave correctly with them. Today social patterns between man and machines do not exist. Instead there are only fictional images from science fiction literature and films. [about page]

My favourite piece, The Bible Scribe (2007), saw an arm carefully replicating each calligraphic pen stroke of the Gutenberg Bible while a gallery audience watched on.

Wodiczko’s approach to filming the face

Tijuana, Mexico

There is a rich background and considered approach to Krzysztof Wodiczko‘s work of the last decade. It is a long process of interaction with individuals in a community before projecting the narratives they have recorded, or recount live, onto the sides of city architecture. However, what struck me most when I first saw a video of Wodiczko’s work was the sense of performance and intimacy that his technical set-up creates. The video I first saw was of an intervention in Tijuana, Mexico, in which a live feed of a woman’s face is projected in stark detail onto the curved surface of the El Centro Cultural building behind her while she recounts experiences from her life.

Krzysztof Wodiczko

The camera, microphone and lighting are all mounted onto the participant’s head. This is cumbersome and unnatural, and yet it frees them to move as they wish through the crowd and the space. It allows them to step away from the centre of attention in the social space, although of course their face remains fixed, fully illuminated, large on the building above. In an interview in “Art in the Twenty-First Century”, from which the Tijuana excerpt is taken, Wodiczko describes this as “background and foreground at the same time … shifting focus”.

Dog-mounted social tele-presence, Auger–Loizeau

Jimmy Loizeau and James Auger, whose work I first encountered in their isophone project at Media Lab Europe, play with some really fun concepts in their Social Tele-presence project. Telepresence is the use of technology to enable a sense of ‘being there’ for someone in a remote location. It’s an idea the corporate world has toyed with for a decade or so with little success – the face-to-face meeting still dominates trust and relationship building in that domain.

The telepresence scenarios that Loizeau and Auger imagine are social systems – explored through a combination of working prototypes and designed futures, an approach that has become one of the trademarks of the Design Interactions course at the RCA in recent years. In the project, actors can be sent to explore socially awkward situations on a customer’s behalf (their example is a politician taking a Strange Days-esque telepresence trip to a red light district). They also show tests of a dog-mounted system that carries a camera and a binaural microphone with two axes of rotation. Setting aside for now the practicalities of motion sickness from a dog-mounted VR headset, the use of animals as a means of adding complex mobility in place of robotic mounts is a concept that has interested me for some time. More on this at a later date.

robotic extensions

Tails for all! is one of my all time favourite half-bakery suggestions. The benefits are immediately recognisable – an extra prehensile limb, another facet of emotional expression or a decorative tail of choice – and yet the idea seems so completely unattainable. However, biomechanics and robotic body augmentation have begun to be explored – first for medical and military uses, and then by artists.

robotic ears

Quite a few artists have played with the addition of animal traits through mechanical devices. I particularly like the work of Paul Granjon. I saw Paul talk about some of his projects at last year’s (re)actor2 festival, hosted at Leeds Met. His talk, ‘Performing with machines and performing machines’, showed projects like Sexed Robots and Robotic Ears. Sexed Robots raises interesting ideas about robots bound to the nature of their human creators, but that’s a whole other post. Robotic Ears is an intriguing example of a robotic body extension. Like Tails for All, the addition feels like it makes sense, and yet the ears have no functional utility, as such, beyond their performability.

Paul performed using a number of his machines later in the conference – what he calls ‘human-machine live performance’ – as part of a cabaret evening in the Gatecrasher club. The ears featured alongside live lo-fi digital music, self-destroying devices and percussive mechanical sculptures. Bringing this train of thought full circle, the ears were worn with a robotic tail while Paul performed the Animal song (mp3).

robots I’d like – underwearbot

Drawing - Tissue Box by pigpogm on Flickr by-nc-sa

Probably one to file under robots that would need to be AI-complete to work as expected – but I’d love to have a robot that could fold underwear into a container the same way tissues are folded in a tissue box. That way each pair of pants would pull straight out of the box, leaving tomorrow’s pair ready to grab.

Does a desire not to waste time choosing what to wear make me Seth Brundle out of Cronenberg’s The Fly? Probably heading that way. And let’s not get into the ethical question of whether it’s morally right to ask a sentient robot to spend its time folding my pants.