Evolving for the Audience

Jon McCormack

Centre for Electronic Media Art
School of Computer Science and Software Engineering
Monash University, Wellington Road
Clayton, Victoria, 3800 Australia

jonmc@csse.monash.edu.au


 

Abstract

This paper presents a novel type of artistic virtual environment, populated by evolving sonic agents. Particular attention is paid to the methodology employed as part of the agent's evolution in relation to the audience. Through a link between the real and virtual spaces, agents evolve implicitly to try to maintain the interest of the audience.

 

1.         Introduction

Ideas from Cybernetics and Artificial Life (AL) can offer a fruitful basis to the design of virtual worlds (Ashby, 1952; Bonabeau and Theraulaz, 1994; Cariani, 1993; Langton, 1995; Morris, 1999; Reichardt, 1971; Sommerer and Mignonneau, 1998). In recent years, a number of different methodologies have been used to try to create virtual spaces that are open-ended in terms of their behaviour and user interaction.

This paper looks at a particular methodology that was devised to address two important questions in relation to designing artificial worlds. Firstly, how a designer can create a virtual AL world that evolves towards some subjective criteria of its users, without them needing to explicitly perform fitness selection. Secondly, how the relationship between real and virtual spaces is acknowledged in a way that integrates those spaces phenomenologically.

The methodology referred to will largely be illustrated by describing an electronic artwork developed by the author, titled Eden. The artwork is exhibited as an installation, and experienced by multiple users simultaneously. It consists of multiple screens, video projectors, audio speakers, infrared distance sensors, computers, and custom designed electronic systems. More details on the configuration can be found in section 4 of this paper. Figure 1 shows some still images of the work in operation.

Figure 1: Images of Eden in operation.

1.1.     Evolutionary Environments, Generative Art

Evolutionary techniques have been applied to a wide variety of creative applications (Bentley and Corne, 2002; Bentley, 1999). In the system described in this paper, evolutionary techniques are applied to a generative artwork. It is worth spending a moment to define and place generative art contextually (for a more comprehensive overview of generative art see (Dorin and McCormack, 2001)).

An artwork is referred to as ‘generative’ if it involves, as part of its means of expression, some process which is authored or established by the artist, and which primarily operates independently of the author. The work as exhibited may be the process, or it may be an artefact produced by it. In either case, we consider a metaphor that relates the biological term of genotype to that which is created by the artist (a set of instructions, a set of axioms, a set of rules about interactions etc.), and phenotype to the work as it is experienced by a viewer.

In describing the motivations and results of the system described in this paper, it is important to think of art as experience in context (McCormack and Dorin, 2001). The context for the audience (or ‘user’ in the common parlance of computing) is that of a visual and sonic experience. A person experiencing the work may have certain expectations and assumptions about the work if they know it is art, and experience it in the context of, for example, an art gallery. In pragmatic terms, the motivation in this context is to design a work that is more reactive than interactive.

1.2.     Interaction

Interaction, if it is to have any meaning, needs a language (Manovich, 2001). In the case of many ‘new media’ and ‘interactive art’ works, such languages are either built on those adopted by the computer industry (Raskin, 2000; Norman, 1990, 1993), or must be learnt via instructional, guided, or unguided (trial and error) experience. Particularly in the case of VR works, language elements may be borrowed from ‘real’ experience, using visual metaphors or icons that ‘stand for’ their function in the real world. However, the virtual functionality rarely matches in scope that of the real, leading to user confusion and frustration.

As each artwork is unique, so too is its interactive language, requiring the ‘user’ to learn or relearn a language specific to individual works. In the context of an interactive media work, the usual time available to both learn and experience the work is limited (as opposed to, say, a designer learning a CAD system). Such limitations place restrictions on how complex or sophisticated any interactive language can be, possibly affecting the quality and depth of the experience for the user.

Moreover, the aesthetic qualities typically associated with human-computer interaction may not situate themselves harmoniously with those desired in artistic works. The majority of research and production in human-computer interaction has been aimed at what has been called the ‘ideological legitimation of technology’ (Dunne, 1999), assumptions that many artworks reject as a fundamental tenet of their structure.

One possible solution (and the approach used here) to this language-learning problem is to remove from the person’s consciousness the notion of having to learn an interface at all. In this mode, the space becomes reactive, rather than interactive. Obviously, such a solution is not useful in every context or application. However, as will be illustrated, minimizing the interaction does not necessarily make the experience of the work inert – via a series of feedback relationships between the real and virtual spaces, a language can develop rather than being explicitly mandated.

Before looking at how this might be achieved in detail, some related issues need to be explored. Section 2 looks at a related technique, aesthetic evolution, where the user evolves things of interest by explicit selection. Section 3 describes the Eden system and its technical operation in detail. Section 4 illustrates the physical configuration of the installation, including information regarding the way the work makes use of motion detection sensors in the environment. Section 5 discusses how this sensory information is used to indicate the interest of the audience and thus evolve the system to particular audience patterns of behaviour. Finally, conclusions are given in section 6.

2.         The Problem of Aesthetic Evolution

Richard Dawkins’ Blind Watchmaker software was the first system to make use of aesthetic evolution via selection (Dawkins, 1986). Aesthetic selection has been used to successfully evolve images (Rooke, 2002; Sims, 1991a), dynamic systems (Sims, 1991b), morphogenic forms (Todd and Latham, 1992; McCormack, 1993), even musical patterns and structures (Bulhak, 1999). In a typical aesthetic evolutionary system, a small number (usually 16-24) of phenotypes are displayed and the user of the system selects one or more of them according to some subjective criteria. Crossover or mutation operations are performed on the genotypes that produce the selected phenotypes. This results in a new set of descendants, whose phenotypes are displayed for the user to again select from. The process is repeated until the user is satisfied with the result, or does not wish to continue.
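To make the process concrete, the selection loop described above can be sketched in Python. The genotype representation, operator rates, and population size here are illustrative assumptions; in a real aesthetic-selection system the user's subjective choice replaces the random stand-in used below.

```python
import random

# A minimal sketch of one aesthetic-selection generation. Genotypes are
# bit strings; render() and the user's judgement are abstracted away.
GENOME_LEN = 16

def mutate(genome, rate=0.1):
    """Flip each bit with probability `rate`."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a, b):
    """Single-point crossover between two parent genotypes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def next_generation(selected, population_size=16):
    """Breed a new population from the user-selected genotypes."""
    children = []
    while len(children) < population_size:
        pa, pb = random.choice(selected), random.choice(selected)
        children.append(mutate(crossover(pa, pb)))
    return children

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(16)]
# In the real system the user views the 16 phenotypes and picks favourites;
# here two random picks stand in for that subjective choice.
selected = random.sample(population, 2)
population = next_generation(selected)
```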

Genetic algorithms usually incorporate some form of fitness function (Mitchell, 1996), used to rank each successive generation. In the case of aesthetic selection, this function is implicit, and is determined by the user of the system. What users typically select is based on highly subjective criteria, often described as ‘interesting’, ‘strange’, ‘weird’, and so on. Alternatively, or perhaps even co-requisitely, the user doing the selection may have in mind some specific goal that they wish to ‘evolve towards’. Used in either mode, the genetic algorithm is effectively a search algorithm, trying to locate points of subjective interest in the phase space of possibilities presented by the data structures and algorithms particular to the system. Searching is slow because there is a human evaluation and selection process in the loop, creating a bottleneck in terms of the speed of evaluation and the size of the population that can be selected from.

In contrast, genetic search techniques that have some form of explicit, machine representable fitness function are able to search far more efficiently, because the selection process is automated. In addition, population size is not limited by factors such as screen resolution or the limitations of differentiation abilities of humans.

For an interactive artwork that uses explicit aesthetic selection, the selection process necessarily becomes part of the aesthetic experience itself, in addition to the bottleneck problems mandated by the selection process. This can be restrictive, not only in terms of the processes described, but also in terms of the mental experience of the user who must constantly focus on comparing and selecting phenotypes (rather than other issues which may be the primary intent of the work) as the major experience of the work.

Ideally, we would like a process that offers the subjective search potential of aesthetic selection without necessarily being restricted by the particular aesthetics or bottleneck difficulties that result from the explicit selection of phenotypes for each generation.

3.         The Eden System

Eden is a reactive, evolving ecosystem that is defined over a finite cellular lattice. This lattice forms the Eden world, which develops using a discrete, step-based model. The world has ‘seasonal’ variation of radiant energy, resulting in ‘yearly’[1] cycles of growth and decay. Three basic types of entity populate the world: rocks, biomass and sonic agents. Rocks provide refuge[2] and shelter, and are essentially inert. Biomass is a food source that grows or decays according to environmental conditions[3]. Biomass can grow in a cell up to a set limit, at which time it spawns new biomass into neighbouring cells.

3.1.     Sonic Agents

Sonic agents (who are both tasty and carnivorous) are the principal evolutionary entity in the Eden world. They consist of a fixed set of sensors, an evolvable performance system and a fixed set of actuators. The basic configuration is shown below in Figure 2. A limited description of the system will be provided here in order to understand the interactive elements described later in this paper. A more detailed technical description of the system can be found in (McCormack, 2001).

Figure 2: The basic configuration of an agent in Eden. An agent consists of a number of environmental sensors, a rule-based performance system, and a set of actuators that attempt to perform actions in the world.

Agents maintain an internal state that is updated at each time step in the world’s development. The internal state includes parameters such as health, energy, mass, and age. If energy or health measures fall to zero, the agent dies. Following death, its body converts to biomass with nutritional energy proportional to the mass of the creature at the moment of death.

3.1.1.               Sensors

Agents are equipped with a fixed set of sensors that do not evolve, although there is no inducement to make use of a sensor unless it aids survival or mating chances. Unused sensors incur no cost penalty. Sensors provide information about the local environment and the agent itself. Possible sensors include:

§       Tongue[4] – a sensor specific to the local cell that detects the nutritional value of whatever else occupies the cell the agent is in.

§       Eyes – agents can ‘see’ the ‘colour’ of objects in neighbouring cells. Rocks, biomass, and other agents all have different colours. An agent’s vision system is sensitive to colour only and has limited resolution and range.

§       Ears – a considerable bandwidth in the sensory apparatus of each agent is devoted to listening to sound. Agents can orthogonally discriminate both volume and frequency of sound. Hearing is forward facing in a conical pattern up to a preset range, typically a much greater range than the agent can see.

§       Health – introspection into the agent’s health status, essentially how much energy the agent has. Agents gain energy by eating biomass or other creatures (whom they must kill first).

§       Pain – agents register pain in situations that are detrimental to their health, for example bumping into a rock, or being hit by another agent. Pain is a convenient indicator of a negative health differential.

3.1.2.               Actions

An agent’s internal performance system may direct it to perform an action via its actuators. As is the case with the agent’s sensors, the set of possible actions is fixed and does not evolve. There is no penalty for not making use of a particular action, so actions will only be used if they increase the agent’s fitness[5]. An agent’s intent to perform an action does not necessarily mean it can be carried out – a number of physical limitations on entities are imposed by the world; for example, it is not possible for an agent to move into a cell that contains a rock. Attempting to perform such an action results in the agent’s pain sensor being activated (to anthropomorphize, one can imagine the agent running into the rock, hitting its head, and finding it impassable). In addition, attempting to perform an action, regardless of whether it succeeds, costs energy, which decreases the agent’s health. An agent may also choose to ‘do nothing’, which also costs energy, though less than any other action.

The set of actions an agent can perform is listed below:

§       Move forward – move forward one cell.

§       Turn – turn left or right.

§       Eat – attempt to eat whatever is in the cell the agent is occupying. Agents can eat biomass or other animals (if they are dead).

§       Hit – attempt to hit whatever is occupying the current cell. If an agent is hit, it will feel pain and suffer a decrease in its health proportional to the health of the agent who performed the hit action. Thus, a healthy agent inflicts more damage than an unhealthy one. Attempting to hit rocks or biomass results in pain and decreases health.

§       Sing – as with hearing, considerable bandwidth is devoted to the ability to make sound, with agents able to control the overall spectral content of the sounds they make (volume and frequency as orthogonal parameters).
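A minimal sketch of how an attempted action might be resolved against the world is given below. All names, energy costs, and the dictionary-based world representation are hypothetical illustrations, not Eden's actual implementation; the sketch shows only the logic described above: every attempt costs energy, blocked moves trigger pain, and ‘do nothing’ is cheapest.

```python
# Hypothetical action costs: resting is cheaper than any other action.
ACTION_COST = {"move": 2.0, "turn": 1.0, "eat": 1.5, "hit": 3.0,
               "sing": 2.5, "rest": 0.5}

def attempt_move(agent, world):
    """Try to move the agent forward one cell; rocks block movement.

    Energy is spent whether or not the attempt succeeds."""
    agent["energy"] -= ACTION_COST["move"]
    target = (agent["x"] + agent["dx"], agent["y"] + agent["dy"])
    if world.get(target) == "rock":
        agent["pain"] = True   # running into the rock activates the pain sensor
        return False
    agent["x"], agent["y"] = target
    return True

agent = {"x": 0, "y": 0, "dx": 1, "dy": 0, "energy": 10.0, "pain": False}
world = {(1, 0): "rock"}
attempt_move(agent, world)   # blocked: pain registered, energy spent anyway
```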

3.1.3.               Performance System

The performance system links an agent’s sensors and actions – hence it determines how the agent interacts with the environment. The performance system is essentially a multiple rule-based system, similar to that proposed by Holland (Holland, 1995). The performance system for each agent maintains an active condition table – an ordered list of bit strings. Bit strings enter the table either as digitised representations of the agent’s sensory data at a given time step, or through the production of rules.

Rules consist primarily of a condition predicate, a production, and a strength. Rules that successfully match a given condition from the active condition table earn the right to bid to be used. Bidding is based on a rule’s strength and specificity. Strength is a scalar value proportional to how useful the rule has been in the past. Specificity is also a scalar value, determined as an implicit quantity of a particular rule. Some rules are very general, hence will be used more often as they match a wider range of conditions. Other rules may be more specific, only being called upon in unlikely circumstances. Yet, each may be of importance to an agent’s survival. Thus more specific rules are able to ‘bid higher’ than more general rules with the same strength, since they are likely to receive payoff (detailed below) less often.

Following the bidding process, only one rule will be successful for any given condition. This rule’s production is then added to the active conditions table, which may trigger the use of further rules (in the next time step) or an attempted action. Strings in the active condition table are encoded in such a way as to differentiate between these two classes.
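The bidding process might be sketched as follows. The pattern syntax with ‘#’ as a wildcard, the bid formula, and the rule representation are assumptions in the general style of Holland's classifier systems, not Eden's actual code.

```python
# Hypothetical sketch of rule bidding: matching rules bid in proportion
# to strength and specificity; the single highest bidder wins.
def specificity(pattern):
    """Fraction of non-wildcard positions ('#' is a wildcard)."""
    return sum(1 for c in pattern if c != "#") / len(pattern)

def matches(pattern, condition):
    """A pattern matches if every position is a wildcard or equal."""
    return all(p in ("#", c) for p, c in zip(pattern, condition))

def select_rule(rules, condition, k=0.5):
    """Return the matching rule with the highest bid, or None."""
    bidders = [r for r in rules if matches(r["cond"], condition)]
    if not bidders:
        return None
    return max(bidders,
               key=lambda r: k * r["strength"] * (1 + specificity(r["cond"])))

rules = [
    {"cond": "1###", "strength": 10.0, "prod": "move"},   # general rule
    {"cond": "1101", "strength": 10.0, "prod": "eat"},    # specific rule
]
# At equal strength, the specific rule outbids the general one.
winner = select_rule(rules, "1101")
```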

3.1.3.1.           Payoffs

An agent’s health is monitored at each time step. At regular intervals, a payoff is made to those rules that have been active since the previous payoff. Active rules will have bid successfully, and will be responsible (ultimately) for actions the agent has performed. An increase in health since the previous payoff results in a positive payoff, a decrease in a negative one.

Receiving a positive payoff enables the rule to offer a higher bid next time it matches a suitable condition. There may be many time-steps between payoffs, which means rules that cause a temporary decrease in health will not be punished if there is an overall gain eventually. For example, walking around costs energy, but may give the agent a better chance of finding food, so a rule like ‘if no food in the current cell, move forward’ will be rewarded, even though the rule by itself does not increase fitness. Due to this payoff system, rules that result in increased health will be used more often than those that result in decreased health.
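The payoff step might be sketched as follows; the payoff rate and rule representation are hypothetical, and only the mechanism described above is shown.

```python
# Hypothetical payoff step: rules active since the last payoff are
# rewarded (or punished) in proportion to the agent's health differential.
def apply_payoff(active_rules, health_now, health_at_last_payoff, rate=0.1):
    """Adjust strengths by the health change; drop rules that can no longer bid."""
    delta = health_now - health_at_last_payoff
    for rule in active_rules:
        rule["strength"] += rate * delta
    # A rule whose strength falls to (or below) zero can no longer bid.
    return [r for r in active_rules if r["strength"] > 0]

rules = [{"cond": "1###", "strength": 0.3, "prod": "move"},
         {"cond": "01##", "strength": 5.0, "prod": "eat"}]
# Health fell by 5 since the last payoff: the weak rule is driven out.
survivors = apply_payoff(rules, health_now=40.0, health_at_last_payoff=45.0)
```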

Rules whose strength falls to zero will be removed, since they can no longer bid. In this way, unsuccessful rules are removed from the performance system. What is yet to be described is how new rules may be ‘discovered’ – this is covered next.

3.1.3.2.           Mating

Two agents over a certain age may choose to mate, that is, exchange ‘genetic’ material and produce offspring. In the Eden system, this involves the exchange of rules. If two agents decide to mate, the rules with the highest strength from each agent are selected. Mutation and crossover operations are performed on these successful rules, introducing variation into the offspring of the parents (Holland, 1992).
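A sketch of this mating step is shown below, assuming rule conditions are strings over {0, 1, #}; the pool size, mutation rate, and the pairing of parents' rules are illustrative assumptions.

```python
import random

# Hypothetical mating: each parent contributes its strongest rules, which
# are crossed over and mutated to seed the offspring's rule table.
def strongest(rules, n):
    """The n highest-strength rules of a parent."""
    return sorted(rules, key=lambda r: r["strength"], reverse=True)[:n]

def crossover(cond_a, cond_b):
    """Single-point crossover on two rule condition strings."""
    point = random.randrange(1, len(cond_a))
    return cond_a[:point] + cond_b[point:]

def mutate(cond, rate=0.05):
    """Replace each symbol with a random one with probability `rate`."""
    alphabet = "01#"
    return "".join(random.choice(alphabet) if random.random() < rate else c
                   for c in cond)

def mate(parent_a, parent_b, n=4):
    """Build an offspring rule set from the parents' best rules."""
    child = []
    for ra, rb in zip(strongest(parent_a, n), strongest(parent_b, n)):
        cond = mutate(crossover(ra["cond"], rb["cond"]))
        child.append({"cond": cond, "strength": 1.0, "prod": ra["prod"]})
    return child

parent_a = [{"cond": "1100", "strength": s, "prod": "move"}
            for s in (5.0, 3.0, 1.0, 0.5, 0.2)]
parent_b = [{"cond": "0011", "strength": s, "prod": "eat"}
            for s in (4.0, 2.0, 1.5, 1.0)]
offspring_rules = mate(parent_a, parent_b)
```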

Over time, agents adapt to their environment, discovering rules that best enable them to survive in the given environmental conditions. As we will see in section 5, an important factor governing this adaptation is that the environment has causal relationships with the real environment, hence the agents are able to adapt indirectly to events taking place outside the simulation itself.

3.2.     Sound

Sound plays an important role in Eden. As detailed in sections 3.1.1 and 3.1.2, agents have the ability to make and listen to sounds over a range of frequency bands. Sounds made by agents are propagated across the environment using a restricted physical model that mimics the way sound travels in air (see (McCormack, 2001) for details). In addition, sounds made by an agent in the virtual world can be heard in the real world via a sonification process. A library of pre-recorded sounds is mapped to all the possible sounds an agent can make. In the current implementation, there are three frequency bands, each with four possible levels, resulting in a total gamut of 64 distinct sounds. This configuration is arbitrary and chosen as a reasonable trade-off between complexity and space of representation. There is no reason why it could not be increased if required.
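The mapping from an agent's sound parameters to the pre-recorded library can be sketched as a simple mixed-radix index: three bands with four levels each give 4^3 = 64 combinations. The file names and encoding order below are hypothetical.

```python
# Hypothetical sonification lookup: per-band levels index a library of
# 4**3 = 64 pre-recorded sounds.
N_BANDS, N_LEVELS = 3, 4

def sound_index(levels):
    """Map per-band levels (each 0-3) to a single library index (0-63)."""
    assert len(levels) == N_BANDS and all(0 <= v < N_LEVELS for v in levels)
    index = 0
    for level in levels:
        index = index * N_LEVELS + level   # mixed-radix (base-4) encoding
    return index

sound_library = [f"sound_{i:02d}.wav" for i in range(N_LEVELS ** N_BANDS)]
sound_index((0, 0, 0))   # quietest combination: first sound in the library
sound_index((3, 3, 3))   # loudest combination: last sound, index 63
```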

Agents whose current action is ‘make a sound’ trigger the playback of the appropriate real sound, which is then spatialized using an elementary algorithm to the approximate location of the agent in space. Multiple agents who are singing result in multiple sound sources.

4.         Exhibition

This section details the physical exhibition of the work, including the important use of environmental monitoring to create a feedback loop between people experiencing the work and the agents’ virtual environment.

4.1.     Layout

In its current exhibition form, Eden as an artwork consists of: two large translucent screens arranged to form an ‘X’ shape; a multi-channel audio system; and a number of environmental sensor devices[6]. The layout is shown below as a plan in Figure 3a, and visualized in Figure 3b.

The typical space occupied by the work is approximately 8m x 8m, though the exact requirements are somewhat flexible. The space typically has little ambient illumination, the majority of light in the space coming from the video projections.

The use of translucent screens integrates the projected virtual space into the physical space. People can stand on either side of the screen and see both the virtual world and the physical world through the screen. In an aesthetic sense, the boundaries between the real and virtual space become blurred.

(a)

(b)

Figure 3: the floor plan layout of the artwork (a), and a 3D visualization of the work in operation from the perspective of a viewer (b)

The images projected onto the screen are an abstract representation of the data-structures and processes going on in the computer that is running the simulation. This visualization is loosely based on Islamic ornamental tiling patterns (Grünbaum and Shephard, 1993). The position and orientation of entities in the world create minimalist geometric textures that suggest abstract landscapes rather than the iconic or literal individual representations that are common in many Artificial Life simulations. This design decision forms an integral aesthetic component of the work.

The multi-channel audio system outputs the results of sonification of those agents making sound in the virtual environment. Sound is spatialized so that the relative position of the sound in physical space roughly corresponds to its source in virtual space.

4.2.     Motion Detection Sensors

The physical environment contains a number of motion detection sensors located around the immediate area of the screens. These sensors are designed to detect the presence and distance of solid objects in a single dimension. The detection pattern is a narrow cone with a diameter ranging from 2cm at the sensor itself to approximately 4cm at the maximum detection distance, which is approximately 100cm. A variety of sensor types have been tested, based on different technologies (infrared, ultrasonic, laser) – the sensors finally used (infrared) were chosen for their accuracy, reliability, size, and most importantly that they have no perceptible visual or sonic emissions, making them essentially imperceptible to people experiencing the work. This ‘invisibility’ of the physical sensors is a key component in helping to ‘remove the interface’ from what is still fundamentally an interactive computer system.

Sensors output a scalar analogue signal, so information from all the sensors is collected and digitised in real-time, at a sample rate of approximately 15Hz. Digitisation rates are dependent on computer speed, as the digitisation rate is locked to the refresh rate of the simulation. In a multi-computer environment, such as is described here, one machine does the digitisation and broadcasts the data to all the machines via a TCP/IP network. The bandwidth required to broadcast the sensor data is relatively small (around 180 bytes/sec) and so does not place much load on a standard 100Mbps network.
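The quoted figure of around 180 bytes/sec is consistent with, for example, a dozen one-byte distance readings broadcast at the 15Hz digitisation rate. The sensor count and bytes-per-sample below are assumptions used only to check the arithmetic, not figures given in the text.

```python
# Back-of-envelope check of the quoted sensor-broadcast bandwidth.
SENSORS = 12          # assumed sensor count (not specified in the text)
SAMPLE_RATE_HZ = 15   # digitisation rate quoted in the text
BYTES_PER_SAMPLE = 1  # assumed: one distance byte per sensor per sample

bandwidth = SENSORS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE  # bytes/sec
```

At 180 bytes/sec the sensor stream is negligible next to the capacity of a 100Mbps network, which is the point the text makes.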

A person’s proximity within the range of any given sensor will result in that sensor returning distance information. Sensors are placed so that no static physical objects interfere with the active sensing area. Sensors are distributed to cover the area in the immediate vicinity of the screens. The visual design of the work encourages people to move relatively close to the screens rather than standing back to take in the whole work at a distance.

5.         Real/Virtual Relationships

Having described both the technical and physical configuration of the work, the relationship between the motion sensors and the virtual world will be detailed. These relationships are important as they dictate the reactive nature of the work and indicate how it might be possible to implicitly select ‘interesting’ behaviours without using the explicit methodology of aesthetic selection as discussed in section 2.

The basic relationship between the raw physical distance sensors and the parameters within the virtual world they control is shown below in Figure 4. As shown in the figure, data from a number of sensors is combined and manipulated to produce ‘higher-level’ information. Such information includes the presence of an object; its approximate location in relation to the virtual space; and movement rates (positional differentials). This higher-level data is then mapped to parameters in the virtual world simulation.
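A minimal sketch of how such higher-level information might be derived from raw distance samples is given below. The function names and frame layout are hypothetical; the 100cm range threshold is taken from section 4.2.

```python
# Hypothetical derivation of 'higher-level' information from raw sensor
# frames: presence, approximate location, and movement rate.
def presence(distances, max_range=100.0):
    """An object is present if any sensor reads inside its range."""
    return any(d < max_range for d in distances)

def location(distances, max_range=100.0):
    """Index of the sensor with the nearest reading, or None if nothing seen."""
    if not presence(distances, max_range):
        return None
    return min(range(len(distances)), key=lambda i: distances[i])

def movement(prev, curr):
    """Mean absolute positional differential between consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

frame_1 = [100.0, 42.0, 100.0]   # an object in front of sensor 1
frame_2 = [100.0, 38.0, 100.0]   # the object has moved slightly closer
presence(frame_2)                # object present
location(frame_2)                # nearest to sensor index 1
movement(frame_1, frame_2)       # small positional differential
```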

Figure 4: Information flow. Raw sensor data from a number of sensors is converted to ‘higher level’ information which is then mapped to parameters that drive local aspects of the virtual world simulation.

5.1.     Parameter Mapping

A number of different mappings have been experimented with, and the mapping configuration can have substantial effects on the ultimate perceptual qualities of the work. Here are the currently used mappings:

§       Local presence maps to local energy absorption rates for biomass. Energy absorption rates could roughly be considered as being equivalent to having more nutrients in the soil for plants, thus making the plant healthier. The net effect is that for biomass to grow well it requires the presence of people in the real space. If the installation operates with nobody around, all the biomass will eventually die out, leaving a barren landscape. Since an agent’s survival is highly dependent on its ability to locate food, the agents too will die.

§       Local movement maps to local mutation rates and crossover points. As discussed in section 3.1.3.2, crossover and mutation operations are performed on the rules that form part of an agent’s performance system. Thus, if people are moving around, presumably trying to locate something interesting, the mutation rates of the agents will increase. If they are standing still, presumably because what is going on is interesting them, mutation rates will decrease.
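These two mappings might be sketched as follows. All baseline rates, gains, and caps are invented for illustration; the actual parameter values used in Eden are not given in the text.

```python
# Hypothetical parameter mappings: local presence raises biomass energy
# absorption; local movement raises agent mutation rates.
BASE_ABSORPTION = 0.2   # assumed baseline growth parameter
BASE_MUTATION = 0.01    # assumed baseline mutation rate

def absorption_rate(person_present, boost=0.8):
    """People nearby make the local 'soil' more fertile for biomass."""
    return BASE_ABSORPTION + (boost if person_present else 0.0)

def mutation_rate(local_movement, gain=0.004, cap=0.1):
    """More audience movement injects more noise into nearby genomes."""
    return min(cap, BASE_MUTATION + gain * local_movement)

absorption_rate(True)    # fertile: a person is present
mutation_rate(0.0)       # audience standing still: mutation stays low
mutation_rate(10.0)      # audience roaming: mutation rises (up to the cap)
```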

5.2.     Discussion

Now let us consider the implications of these mappings in relation to the audience of the work and the evolution that occurs in the system. In an evolutionary system, agents that can adapt to their environment survive. In the system described here, the environment is not only the simulated environment, but the real environment as well. The movement and presence of people in the real space affects environmental conditions, hence the agents need to adapt to those conditions in total in order to survive.

In the context of the experience of a kinetic artwork, people will generally only stay around for any period if the work maintains their interest. If they become bored, they will probably leave. Of course, there may be other reasons for leaving, but in general, there is some correlation between the duration of stay and the interest the work maintains[7]. Since the work has a strong focus on sound, one possible way people’s interest might be extended would be in hearing ‘interesting’ sounds. Repeated sounds that are all the same will not maintain interest for long, so the progression of sound is also important. Agents that produce sound could, in theory, attract the attention of the audience (since people can hear the sound the agent produces). A person’s presence corresponds to a fertile environment, therefore, making sounds, and making sounds with variety could indirectly increase the amount of food locally available, hence the agent’s survival prospects. This might be considered an example of an extended phenotype effect (Dawkins, 1982).

The mapping of movement to mutation rates and crossover points is used under the following rationale: people will move around the installation if there is nothing to fixate their attention, looking for something that catches their awareness. If an agent, or a group of agents, is doing something interesting, people are more likely to stand still, watch, and listen. Increasing mutation rates injects more noise into the genome of the agent, resulting in new and more varied rules[8]. The rationale here is to try to reward ‘interesting’ behaviour by the agents, based on the movement of people in the space – standing still indicates that a nearby agent could be doing something interesting, because the person is watching or listening near it.

6.         Results and Conclusions

Setting up the installation involves considerable effort, so it has not yet been possible to do exhaustive testing. In addition, in an exhibition environment, the focus is on exhibiting the work, rather than experimenting with different configurations and studying the results. However, it is possible to confirm that the evolution is influenced by visitor patterns over a typical exhibition period of the work (several days to weeks).

When the exhibition begins, agents placed in the world are given a set of preset rules that guide basic ‘instinctual’ behaviour. While this is not strictly necessary, it saves the time required by many generations of agents to discover basic survival rules such as ‘if on top of food then eat’ and ‘don’t bump into rocks’. The number of instinctual rules can be controlled on a per-agent basis.

Evidence from a number of exhibitions suggests that the behaviour of the audience does have a positive effect on the evolution of agents’ genomes. Without the reactive system, agents generally only use sound in specific circumstances such as an aid to mating – what one would expect in an evolutionary communication system (Werner and Dyer, 1991). Once the environmental pressures from audience behaviour are incorporated into the system, the generation of sound shows a marked increase, and analysis of the rules discovered shows that making sound is not used only for mating purposes.

One important issue is the rate of rule discovery and the ability of the agent to adapt to changing environmental conditions in real time. Agents may live for many Eden years, so if the only way to introduce new rules is by mating, rule discovery may not be capable of developing adaptations to the real-time nuances of the environment as much as would be liked. Thus, a kind of Lamarckian evolution is used (Bowler, 1992), whereby the rules within an agent can undergo internal crossover and mutation within the agent’s rule table. In addition, rules that have been selected for during the agent’s lifetime are passed on to offspring, based on their performance over the parent’s life. Thus, the rule system serves as a learning system during the agent’s lifetime, and the best of what the agent has learned is passed (along with the best rules of the other parent) to its offspring.

The video shown in Figure 5 illustrates the work in operation. All the sounds heard in the video were recorded from a live run of the system after several days of audience interaction. Notice the wide variety of sounds produced. Two recordings from the multi-channel sound system are also provided.

 

edenSounds1.mp3 [3.5 Mb]

edenSounds2.mp3 [4.7 Mb]

Eden Video [Quicktime format]

Eden Video [Real video format]

Figure 5: Video and sounds of the Eden Installation.


6.1.     Designing Reactive Environments

Aesthetic selection provides a novel way to search based on subjective and emotive criteria that have thus far proved difficult to quantify as explicit, machine-representable fitness functions. However, aesthetic selection imposes restrictions on both the interaction aesthetics and performance of any system that makes use of it. This paper has shown that it is possible to overcome the difficulties of explicit selection by designing reactive environments that exploit modes of human behaviour relevant to the circumstance of the audience or user of the system. They allow evolutionary systems to discover subjectively interesting behaviours without the need for individual users to explicitly select what they find interesting over each generation, as is the case with aesthetic selection. Moreover, reactive interfaces scale well to multiple simultaneous users.

Reactive environments eliminate the need for users to learn new interactive languages in situations where they may have neither the time nor the inclination to do so. Such methodologies are not suited to all situations and applications, nor are they a replacement for all traditional modes of human-computer interaction. However, reactive evolutionary environments show potential when designers want to offer an open-ended, highly sensorial and experiential mode of engagement with the audience, particularly when that audience has limited time or little experience with cumbersome user interface technologies and languages. By ‘removing the interface’ users are free to explore the aesthetics or functionality of the system, without the hindrance of learning an interactive language, wearing or manipulating a tangible interface, responding to errors, or discovering the limitations of more traditional interactive metaphors.
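The implicit-selection principle behind such reactive environments can be sketched briefly. Assume, purely for illustration, that each infrared sensor reports a normalised presence reading for one band of the virtual world's resource grid; audience presence then drives resource growth in the corresponding region, so agents whose behaviour holds people nearby are rewarded without anyone explicitly selecting anything. The function name, grid layout, and growth/decay constants are all hypothetical, not Eden's actual calibration.

```python
def audience_to_resources(sensor_readings, grid, growth=0.2, decay=0.05):
    """Map real-world presence sensors onto virtual resource levels.

    sensor_readings: normalised [0, 1] activations, one per infrared
    sensor; each sensor governs a vertical band of grid columns.
    grid: 2D list of resource (e.g. food) levels in the virtual world.
    Cells near active sensors grow resources; unattended cells decay,
    so audience attention becomes an implicit fitness signal.
    """
    cols = len(grid[0])
    zones = len(sensor_readings)
    width = max(1, cols // zones)  # columns covered by each sensor
    for row in grid:
        for x in range(cols):
            presence = sensor_readings[min(x // width, zones - 1)]
            row[x] = max(0.0, row[x] * (1 - decay) + growth * presence)
    return grid
```

Called once per simulation step, this keeps the coupling between real and virtual spaces entirely implicit: the audience never issues a command, yet their movement continuously reshapes the selective landscape the agents evolve within.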

6.3.     Summary

In summary, a system has been produced that attempts to integrate the open-ended nature of synthetic evolutionary systems into an interactive virtual space. The approach taken in this paper has been to measure components of the real environment and incorporate them into the virtual one, thus enabling an evolutionary relationship between virtual agents and the artwork’s audience, without the audience needing to explicitly learn the system’s language of interactivity.


7.         References

Ashby, WR: 1952, Design for a Brain, Chapman & Hall, London.

Bentley, PJ: 1999, Evolutionary design by computers, Morgan Kaufmann Publishers, San Francisco, Calif.

Bentley, PJ and Corne, DW (eds.): 2002, Creative Evolutionary Systems, Academic Press, London.

Bonabeau, EW and Theraulaz, G: 1994, Why Do We Need Artificial Life?, Artificial Life 1(3), 303-325.

Bowler, PJ: 1992, Lamarckism, in Keller, EF and Lloyd, EA (eds), Keywords in Evolutionary Biology, Harvard University Press, Cambridge, MA, pp. 188-193.

Bulhak, A: 1999, Evolving Automata for Dynamic Rearrangement of Sampled Rhythm Loops, in Dorin, A and McCormack, J (eds), First Iteration: a conference on generative systems in the electronic arts, CEMA, Melbourne, Australia, pp. 46-54.

Cariani, P: 1993, To evolve an ear: epistemological implications of Gordon Pask's electrochemical devices, Systems Research 10(3), 19-33.

Dawkins, R: 1982, The Extended Phenotype: The Gene as the Unit of Selection, Freeman, Oxford.

Dawkins, R: 1986, The Blind Watchmaker, Longman Scientific & Technical, Essex, UK.

Dorin, A and McCormack, J: 2001, First Iteration / Generative Systems (Guest Editor's Introduction), Leonardo 34(3), 335.

Dunne, A: 1999, Hertzian Tales: Electronic Products, Aesthetic Experience and Critical Design, Royal College of Art, London.

Grünbaum, B and Shephard, GC: 1993, Interlace Patterns in Islamic and Moorish Art, in Emmer, M (ed) The Visual Mind: Art and Mathematics, MIT Press, Cambridge, MA.

Holland, JH: 1992, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press, Cambridge, MA.

Holland, JH: 1995, Hidden Order: How Adaptation Builds Complexity, Addison-Wesley, Reading, MA.

Langton, CG: 1995, Artificial Life: An Overview, MIT Press, Cambridge, MA.

Manovich, L: 2001, The Language of New Media, MIT Press, Cambridge, MA.

McCormack, J: 1993, Interactive Evolution of L-System Grammars for Computer Graphics Modelling, in Green, D and Bossomaier, T (eds), Complex Systems: from Biology to Computation, IOS Press, Amsterdam, pp. 118-130.

McCormack, J: 2001, Eden: An Evolutionary Sonic Ecosystem, in Kelemen, J and Sosík, P (eds), Advances in Artificial Life, 6th European Conference, ECAL 2001, Springer, Prague, Czech Republic, pp. 133-142.

McCormack, J and Dorin, A: 2001, Art, Emergence and the Computational Sublime, in Dorin, A (ed) Second Iteration: a conference on generative systems in the electronic arts, CEMA, Melbourne, Australia, pp. 67-81.

Mitchell, M: 1996, Introduction to Genetic Algorithms, MIT Press, Cambridge, MA.

Morris, R: 1999, Artificial Worlds: Computers, Complexity, and the Riddle of Life, Plenum Trade, New York.

Norman, DA: 1990, The design of everyday things, Doubleday, New York.

Norman, DA: 1993, Things That Make Us Smart: Defending Human Attributes in the Age of the Machine, Addison-Wesley, Reading, MA.

Raskin, J: 2000, The Humane Interface: New Directions for Designing Interactive Systems, Addison-Wesley, Reading, MA.

Reichardt, J: 1971, Cybernetics, art and ideas, New York Graphic Society, Greenwich, Conn.

Rooke, S: 2002, Eons of Genetically Evolved Algorithmic Images, in Bentley, PJ and Corne, DW (eds), Creative Evolutionary Systems, Academic Press, London, pp. 339-365.

Sims, K: 1991a, Artificial Evolution for Computer Graphics, Computer Graphics 25(4), 319-328.

Sims, K: 1991b, Interactive evolution of dynamical systems, in First European Conference on Artificial Life, MIT Press, Paris, pp. 171-178.

Sommerer, C and Mignonneau, L: 1998, Art as a Living System, in Sommerer, C and Mignonneau, L (eds), Art@Science, Springer, Wien, pp. 148-161.

Todd, S and Latham, W: 1992, Evolutionary Art and Computers, Academic Press, London.

Werner, GM and Dyer, MG: 1991, Evolution of Communication in Artificial Systems, in Langton, CG (ed) Artificial Life II, Addison-Wesley, Redwood City, CA, pp. 659-682.



Footnotes

[1] An Eden year lasts 600 Eden days, but passes by in around ten minutes of real time on a standard PC.

[2] The reader will appreciate that such anthropomorphic references, scattered throughout this paper, are common in Artificial Life literature (and elsewhere). They are used as conveniences, representing the (limited) linguistic metaphor on which the feature is conceptualised and based within the symbol-processing mode of computation.

[3] Both virtual and real environmental conditions – this is detailed in section 5.

[4] The anthropomorphic names given to each sensor serve illustrative purposes only. Ultimately all sensors are just bit strings. With the possible exception of the ears, no attempt is made to provide any realistic physical modelling of sensor behaviour that might be alluded to by the names – i.e., all sensors are finite, precise, and infallible.

[5] In basic terms, fitness equates to the ability to survive and mate in the current environment.

[6] This use of the term sensor should not be confused with the sensors that form part of each agent described in section 3.1.1. While the basic function of each is the same – to sense the environment, it is important to differentiate between physical (electronic) sensors and virtual (algorithmic) sensors, and the context of their use.

[7] This assumption is largely based on anecdotal evidence, but simple intuition suggests that you will not spend a lot of your ‘leisure’ time with things that don’t engage your interest.

[8] It is important to note, however, that the majority of rules mutated by noise will not be ‘better’ ones (i.e. increase fitness), but through the rule credit and payoff system, unsuccessful rules are quickly weeded out. The role of noise is to increase the possibility of more radical rule discovery than could be achieved by crossover alone.