8
Agents in Social Space

I am often asked: how is this robot different from a scientific experiment? Or: what makes it art? I can answer that question in several ways:
1. I was heartened recently to hear Allan Kaprow's definition of Experimental Art: "that action or object whose identity as art remains forever in doubt". "What makes it art?" is certainly a question which has been asked of many before me, though no one has yet told me a child of six could do it!
2. The only function of Petit Mal is as an 'actor in social space.' Although it is clearly a machine, when people interact with Petit Mal no one responds to it intellectually; they always respond emotionally, affectionately! They see it as an animal, a child or a disabled person, and they have sympathy for it.
As we move into the world of digital data, it is important to recall that Art has traditionally been about sensuous engagement with matter, or at least with complex sensory experience, about learning through kinesthetic and proprioceptive ways of knowing. It is important to me that people interact bodily and sensually with my robot: there is no text to read, no data to analyse.
I have learned several things from watching people interact with the robot:
1. The machine trains its users; no user manual is necessary. People readily adopt a certain gait, a certain pace, in order to elicit responses from the robot.
2. As a computer-based machine, and as a computer-based artwork, Petit Mal is unusual in that it induces socialization amongst people.
3. People immediately ascribe vastly complex motivations and understandings to the robot. The robot does not possess these characteristics or capabilities; they are projected upon it by viewers, who interpret its behavior in terms of their own life experience. In order to understand it, they bring to it their experience of dogs, cats, babies and other mobile, interacting entities, and so ascribe to the machine complexities which it does not possess.
Such observations have, I believe, deep ramifications for the building of agents: just how complex does the agent have to be, or conversely, what is the most expedient way to trigger such interpretations in the user? The construction of agents (at least of the interface to the agent) becomes not so much the development of automated intelligences as the development of highly efficient triggers for certain desired human responses.
The vast majority of the information which shapes the understanding of an agent is not inherent in the agent but resides in the cultural education of the viewer. The agent has meaning to the extent that the cultural background of the maker is shared by the user. Software, like any cultural product, is culturally specific. The so-called 'intuitive interface' of the Mac or Windows operating systems is only intuitive or universal if you happen to share a cultural understanding of files, folders and desktops, of office furniture and office stationery. Likewise, a computer painting program is only 'intuitive' to those who already recognise a paintbrush and understand its uses. An interface is a model, and a model is meaningful to the extent that it employs material from a common cultural background to 'explain' the functioning of a novel phenomenon. The general point here is that what we see is constituted by our cultural experience. Any hope for 'universal' signification in software is a naive, doomed hope.