Emergence and interactivity: A-Life Art as a paradigm for the creation of experiences in interactive communication

Author: Joan Soler-Adillon (Universitat Pompeu Fabra)

Citation: Soler-Adillon, Joan (2010). "Emergence and interactivity: A-Life Art as a paradigm for the creation of experiences in interactive communication". Hipertext.net, 8, http://www.upf.edu/hipertextnet/en/numero-8/a-life_art.html


Abstract: The idea behind the concept of emergence is that there are some complex phenomena which cannot be explained by the mere analysis of their parts or constituent elements. In this paper we analyze the concept of emergence in the context of artificial life art, in order to draw conclusions about its applications in interactive communication in general. To do so, we focus on the example of the use of genetic algorithms in interactive installations.

Keywords: Emergence, Interactivity, Interaction, Art, Artificial life, A-life art, Interactive installation, Museum

Table of contents

1. Introduction
2. The concept of emergence
    2.1. Notes on the history of the concept
    2.2. Definitions
    2.3. Discussion
3. Emergence in a-life art
    3.1. Craig Reynolds' "boids"
    3.2. Genetic algorithms
    3.3. Evolved Virtual Creatures
    3.4. A-Volve
    3.5. Digital Babylon: cumulative interaction
4. Simulation of evolutionary processes as a general interactive communication strategy
    4.1. Future work: application in interactive installations in museums
5. Conclusions
6. References



1. Introduction

The idea behind the concept of emergence is that there are some complex phenomena which cannot be explained by the mere (reductionist) analysis of their parts or constituent elements. Between the level of local interactions among agents and their behavior at the group level (or at least at a higher level in terms of behavior) there is something more, which cannot be explained as a mere sum of the individual agents' behaviors.

Typical examples used to describe emergence include ant colonies, the human mind understood as a product of the interconnectivity of neurons in the brain, or even the complexity of a game such as chess, despite its certainly limited number of pieces and their rather simple predefined rules.

We begin here with a review of the history, definitions and discussion of the concept of emergence. We then analyze examples of its use in artificial life art (a-life art). It is in this discipline, and especially in its genetic art subgenre, where we find a narrow enough use of emergence so that it can be used in the last part of our discussion. In this last part we attempt to describe how this idea could be a useful tool for the creation of interactive experiences in a more general context.

2. The concept of emergence

There is no doubt that emergence is one of those concepts that is as often used as it is difficult to define. It is central in disciplines such as generative art or a-life art (art creation based on the use of artificial life systems), and monographic books have been written on the topic. But even so, it is still difficult to define. This difficulty, which for some is precisely one of the things that makes emergence interesting [Whitelaw, 2006, p. 208], also generates a considerable amount of scientific discussion.

We will here briefly review some of the history of the concept's use, proposed definitions and the most relevant lines of discussion, always from the point of view of emergence in the context of a-life art. This is indeed the paradigm we will use to propose possible applications of emergence for interactive communication in general.

2.1. Notes on the history of the concept

The term emergence as used in our context was coined by the philosopher George Henry Lewes in 1875, in a context previously suggested by John Stuart Mill: the discussion between different types of causation. Mill examined causation in cases such as the vectorial composition of forces in physics (where a vectorial force can be reduced to a set of simpler compounding force vectors) and compared it to the combination of the elements in chemical compounds. In the latter, the reduction is not such a simple task. Water, for instance, is not a simple aggregation of hydrogen and oxygen; there is something more.

In this context, Lewes proposes a distinction between resultant and emergent compounds. Thus, in some cases we can apply the classical reductionist method of isolating the parts and studying them in order to understand how the whole is formed and behaves. These would be the resultant compounds. In other cases, this whole can't be reduced to the mere sum of its parts. These would be the emergent compounds.

After its debut, the concept received very different degrees of attention over different periods of time. In the 1920s it had a certain amount of success among authors who wrote about emergent evolution in the context of Darwinism, in search of alternatives to the dominant vitalistic and mechanistic views. The basic idea for these authors is that there is a series of hierarchical levels in the organization of life (e.g., inert matter, living matter, the mind) and that higher levels emerge from the interactions at the lower ones.

Although this movement's activity diminished in the 1930s, this idea of emergence went on being used, mainly in the context of the philosophy of science, and often as a defensive argument against strongly reductionist views [Whitelaw, 2006].

At the beginning of the 1960s, much more skeptical views of the idea began to appear, such as that of Ernst Nagel. He argued that the attribution of emergence to certain phenomena is simply a product of using a theory that is not sufficiently able to describe them, thereby introducing the notion of emergence as an epistemological phenomenon [Whitelaw, 2006, p. 211].

Finally, after several years of receiving little attention, in the 1980s and 1990s the concept of emergence found one of its natural habitats in the artificial life discipline, and specifically in a-life art, where it plays a central role, as we will see in section 3.

2.2. Definitions

As has been said, emergence is an especially difficult concept to define, and as such its definitions implicitly carry certain connotations of how exactly it is interpreted and understood in each case.

One of the best and most complete treatments of emergence is the monograph from 1998 by John Holland, which already in its subtitle suggests the approach taken: Emergence. From Chaos to Order.

The first definition we find in the book is very close to the most generally accepted one, which states that the whole is more than the mere sum of its parts: emergence is much coming from little [Holland, 1998, p. 1]. This definition works as the opposite of the idea of reductionism, which states that the whole can be strictly and entirely understood through the analysis of its parts.

Holland proposes a framework for understanding emergent phenomena that is based on the creation of multi-agent systems and the study of the interactions among the agents, the state of the model at different times (and the transitions between states), and the rules followed by the agents.

In this context, he proposes a different kind of reductionism. One which is not based on the analysis of the isolated agents but of the interactions among them and with the environment: "Emergence is above all a product of coupled, context-dependent interactions. (...) Under these conditions the whole is indeed more than the sum of its parts. However, we can reduce the behavior of the whole to the lawful behavior of its parts, if we take the nonlinear interactions into account" (emphasis in the original text) [Holland, 1998, p. 122].

Thus, although Holland considers that truly emergent phenomena exist, and that they can be understood according to the parameters just described, he also admits the difficulty in determining exactly which phenomena are emergent. To assist with the distinction, he proposes, in opposition to emergence, the concept of serendipitous novelty. This idea refers to phenomena which produce unexpected results during the execution of the model, but which cease to be unexpected once the novelty wears off and we can fully understand their causes.

Other approaches to the definition of the concept take advantage of the potential of the idea of emergence, even if only as a metaphor, to stimulate an intuitive understanding of complex systems [Miller and Page, 2007]. The problem arises when we attempt to analyze its validity as a scientific term.

Finally, there is a series of definitions that much more clearly take into account the role of the observer, such as that of Mitchell Whitelaw, who defines emergence as "the moment the system exceeds itself, breaches its initial boundaries, surprises us" [Whitelaw, 1998].

In fact, the idea of surprise is the main point of the last definition we will consider here, which, based on the Turing test, proposes its own test for emergence [Ronald et al., 1999].

If the Turing test was, in a nutshell, an attempt to see if a computer was capable of passing as a human in a text-based "conversation" [Turing, 1950], the emergence test attempts to surprise someone familiar with the design of a particular complex system (in the context of artificial life) by running this system.

The emergence test consists of three steps: first, the designer of a system (or someone who is totally familiar with its design) describes the local interactions among the system's components. Next, the same person describes the global behaviors of the system. Finally, if the causal link between what was described in the first step and what is observed in the second is not obvious to the observer, what the authors call cognitive dissonance occurs.

This would be the moment of surprise, which, under this construct, would be synonymous to emergence. Thus, the concept of emergence presented here is entirely related to the observer, in what some have defined as subjective emergence [Monro, 2009], as opposed to the objective emergence sought by Holland.

Elaborating on this idea, and expanding the idea of surprise into a more complex concept that combines it with wonder, mystery and autonomy, the generative artist Gordon Monro proposes a definition of emergence from the perspective of generative art that seems very appropriate for more general application. This definition consists of two conditions, both of which assume an observer with complete knowledge of the design of the observed system:

  • (1) The work produces a result or behavior which is "unobvious or difficult to predict".
  • (2) This result or behavior "evokes feelings of surprise-wonder-mystery-autonomy" in the observer.

The terms used in the preceding definitions raise issues that bring us to review some of the most relevant topics in the discussion around the concept of emergence.

2.3. Discussion

Some of the main lines of discussion around emergence have already been suggested in the previous section. It has been shown that one of the main issues is the importance of the role of the observer.

At the heart of the issue, the question is whether emergent phenomena actually exist (i.e., if they have ontological validity in the philosophical sense of the term) or belong solely to the realm of epistemology. In the latter case, they would only exist as such, strictly speaking, because of human incapacity to understand the complexity of the system within which they occur.

In this sense, the quality of emergence would be merely transitory, temporary. The phenomenon would lose its emergent quality at the moment in which we became capable of establishing the causal relationships between what we know at the micro-level and observe at the macro-level. This could occur with advances in knowledge or, if we stay with the notion of surprise cited above, when it is no longer surprising.

Perhaps the most properly levied critique against the concept of emergence (the fact that it is the most often cited in the literature should be sufficient proof) is that offered by Peter Cariani at the beginning of the 1990s [Cariani, 1992]. His argument takes up Nagel's criticism from the 1960s mentioned in section 2.1.

Cariani's approach is known as emergence relative to a model. It argues that emergence, in the cases where it can exist, is always relative to the observer and, in fact, contingent, in that it depends on the model from which it is observed: the observational frame.

Writing within the context of artificial life, Cariani questions whether or not computational artifacts can generate truly emergent phenomena. In this sense, he speaks of artifacts that should, first of all, be open to their environment: they must be able to change the rules of their internal computations. This is what Cariani calls syntactic adaptation, something that could be achieved, for instance, through genetic algorithms.

His second requirement is much more difficult, if not impossible, to meet. It is semantic adaptation, and it would consist of the capacity to change the way the environment is perceived. That is, of generating (evolving) not only new mappings between what the artifact perceives from the environment and its computations, but also the ways in which it perceives the environment, through evolving new sensors.

Taken to the extreme, Cariani's critique renders it practically impossible for a true emergent phenomenon, in the double level he describes, to occur in a-life art (at least under the current possibilities offered by technology). Even so, his criticism, as Whitelaw argues, is a very useful instrument to understand the limitations and possibilities of this artistic practice [Whitelaw, 2006, p. 220].

Indeed, an analysis of his critique (if accepted) means that no system that is solely computational can be truly emergent. This leaves the discipline of generative art, which is very closely linked to emergence, in a very difficult position. On the other hand, for a-life art it leaves a door open to the introduction of non-computational elements in the work and, therefore, to emergence: interactivity.

For the framework we are attempting to define here, the use of programming with genetic algorithms (the basis of a-life's subgenre of genetic art) and the notion of emergence relative to a model are sufficient, whether or not emergence has ontological validity.

3. Emergence in a-life art

As we pointed out in section 2.1, the concept of emergence regained importance during the 1980s, at a time of advancements in the theories of complex systems and artificial life, when it acquired a central role in the latter discipline. As described by Whitelaw in his recent book dedicated to a-life art, the concept of emergence, understood as a process, served as an umbrella for understanding the outcomes of systems consisting of a multitude of complex micro-interactions [Whitelaw, 2006, p. 212].

Therefore, the discipline called artificial life adopted this formulation of emergence, devoid of the quasi-mysticism of some earlier formulations, in accordance with the need to understand a way of working that consists of designing lower levels to test their impact at higher levels: "A-life's continued pursuit of a bottom-up approach reflects its faith in this form of emergence, one with none of the mystical or ineffable overtones of emergent evolution but seen as the most appropriate way to effect the synthesis of life" [Whitelaw, 2006, p. 212]. It is precisely this way of understanding emergence that is seriously called into question by critics such as Cariani (section 2.3), and is the reason this critique must always be taken into consideration, whether to accept it (even partially) or refute it.

In any case, a-life art's emergence can be described, according to Whitelaw, using two levels [2006, pp. 212-3]:

  • A local or computational level, where complex interactions are defined by a series of formal rules. This would be the technological substrate, that of hardware and software.

  • A global level, where behaviors appear as patterns. Such behaviors are the result of the interactions in the previous level. They are the result of the technological substrate. The emergent quality at this level would be the something more (the excess, in Whitelaw's terms) which would appear, in contrast to what would be expected from the mere analysis of the interactions in the lower level.

Having established this definition of emergence in the context of a-life art, and still with the alternative proposals (section 2.2) and critiques (section 2.3) in mind, we proceed to the following sections. In these sections we will first briefly review a classic case of emergence in a computationally generated system. Secondly, we will explain and review examples of a specific technique, that of genetic algorithms, that will provide the basis for the proposal contained in section 4.

3.1. Craig Reynolds' "boids"

The word boids was popularized (mostly within the a-life community) by Craig Reynolds in 1987. It is a contraction of bird-oid, referring to something that resembles a bird or a group of birds.

In fact, the boids were not only soon taken as a clear case of emergence, but can even be considered an icon of the a-life art discipline [Penny, 2009].

They are in any case an excellent example of how, in a system with multiple agents controlled by simple rules, behaviors can appear at the group (global) level that are not obvious from the mere analysis of such local rules.

The basic example of boids is precisely the one that simulates the movements of a flock of birds. The agents, which can be graphically represented as simple triangles (which is useful in order to see their direction), move according to three very simple rules with respect to their group [Reynolds, 1995]:

  • Separation. When it moves, each agent always tries to maintain a certain distance with respect to each of the other group members.

  • Alignment. In its movement, each agent will tend to orient itself toward the average orientation of the agents in the group.

  • Cohesion. At the same time, its movement will also tend to approach the average position of its local group mates.

Each agent does all of this solely with respect to the group mates that are closest to it, within a predefined distance. It is, therefore, an action based only on local information.

The outcome is astonishing. In any of the simulations, the global behavior is amazingly coordinated if we take into account the agents' simple (and strictly local) behavioral rules.
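The three rules above can be sketched in a few lines of code. What follows is a minimal illustration of our own, not Reynolds' implementation: the class, the neighborhood radius and the rule weights are all illustrative assumptions.

```python
import random

NEIGHBOR_RADIUS = 50.0  # each boid only "sees" flockmates within this distance
MIN_SEPARATION = 10.0   # desired minimum distance between flockmates
MAX_SPEED = 4.0

def dist2(a, b):
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2

class Boid:
    def __init__(self):
        self.x = random.uniform(0, 200)
        self.y = random.uniform(0, 200)
        self.vx = random.uniform(-1, 1)
        self.vy = random.uniform(-1, 1)

    def step(self, flock):
        # Only local information: flockmates within the neighborhood radius.
        near = [b for b in flock
                if b is not self and dist2(self, b) < NEIGHBOR_RADIUS ** 2]
        if near:
            n = len(near)
            # Alignment: steer toward the average heading of local flockmates.
            self.vx += 0.05 * (sum(b.vx for b in near) / n - self.vx)
            self.vy += 0.05 * (sum(b.vy for b in near) / n - self.vy)
            # Cohesion: steer toward the average position of local flockmates.
            self.vx += 0.01 * (sum(b.x for b in near) / n - self.x)
            self.vy += 0.01 * (sum(b.y for b in near) / n - self.y)
            # Separation: steer away from flockmates that are too close.
            for b in near:
                if dist2(self, b) < MIN_SEPARATION ** 2:
                    self.vx += 0.1 * (self.x - b.x)
                    self.vy += 0.1 * (self.y - b.y)
        speed = (self.vx ** 2 + self.vy ** 2) ** 0.5
        if speed > MAX_SPEED:  # cap the speed to keep the movement plausible
            self.vx *= MAX_SPEED / speed
            self.vy *= MAX_SPEED / speed
        self.x += self.vx
        self.y += self.vy

flock = [Boid() for _ in range(20)]
for _ in range(100):
    # Agents are updated in place, one after another; a sketch, not a
    # physically careful simultaneous update.
    for b in flock:
        b.step(flock)
```

Nothing in the code mentions a flock; only the three local rules are programmed, yet the coordinated group movement appears when the loop runs.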



Figure 1. Craig Reynolds' Boids in the short digital animation film Stanley and Stella in: Breaking the Ice¸ 1987. (c) Craig Reynolds. Used with permission.

This paradigmatic example shows very clearly what is usually understood as emergence, at least in the context of a-life art. Even so, it does not have (nor did it intend to have) what we think is one of the most interesting characteristics of many of these systems: the ability to adapt. Such ability is often sought through the use of a kind of programming that simulates evolutionary processes using what is known as genetic algorithms.

3.2. Genetic algorithms

Genetic algorithms were first described by John Holland in 1975 as a type of computational modeling inspired by the theory of evolution. Most often, genetic algorithms are used as a strategy for solving complex problems [Whitley, 1993], but as we shall see, they can also be useful in art projects and, obviously, especially in a-life art.

Briefly described, genetic algorithms usually start with a population of elements (agents, programming subroutines, etc.) created randomly according to certain predefined parameters (the genotype). Each of these elements (phenotypes) is then evaluated according to some predefined criteria (the fitness function). The most successful elements are then selected to create a new generation of the population, which will in turn be evaluated, and so on for a certain number of generations or until the fitness criteria are met.

In the process of creating new individuals, the characteristics (essentially a series of programming variables) of each of the successful ones are recombined. In this manner, the newly created individuals, although different from their progenitors, will inherit the characteristics that made them successful. Each generation, therefore, should be closer to the optimal solution than the previous one.

In addition to all these combinations, there is also the possibility of mutation: a random change in some of the variables that define each of the individuals. This allows the introduction into the system of new possibilities unanticipated by the system's designer which, if they turn out to be successful (if they help create a phenotype closer to the fitness criteria), may enter the evolutionary process.
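The whole loop (genotype, fitness, selection, recombination, mutation) can be illustrated on a toy problem. The sketch below is our own, not code from any of the works discussed: the genotype is a bit string and the fitness function simply counts its 1-bits, the classic "OneMax" exercise; population size, mutation rate and selection scheme are arbitrary illustrative choices.

```python
import random

GENES, POP, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.01

def fitness(genome):
    return sum(genome)  # the predefined criterion: count the 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENES)  # single-point recombination
    return a[:cut] + b[cut:]

def mutate(genome):
    # Each gene may flip with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Initial population: random genotypes.
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survives and parents the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)  # typically at or near the maximum of 20
```

Each generation the children inherit recombined characteristics of successful parents, so the population drifts toward the optimum without any individual solution having been designed in advance.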

As with other aspects of scientific artificial life, a-life art has taken the technique of genetic algorithms into its own field of artistic interest. As has been said, this subgenre of a-life art is known as genetic art.

3.3. Evolved Virtual Creatures

One of the first authors to experiment with genetic algorithms for artistic purposes was Karl Sims. He used this technique to create procedural animation in 3D, a case in which we can easily see once more the idea of emergent phenomena.

In Evolved Virtual Creatures, Sims defined some basic rules for the application of genetic algorithms to make his virtual creatures evolve within predetermined environments. The goal of these evolutionary processes (the fitness by which they would be evaluated) was the ability to successfully accomplish a series of behaviors within these environments: swim, walk, jump, and follow a light source [Sims, 1994a].

If the results of Craig Reynolds' boids were spectacular, Sims accomplished nothing less. The system he used was able to produce an animation with a feel that we immediately and intuitively relate to the idea of life. A feel that would have been, if not impossible, very difficult to accomplish with the usual animation methods, and even more so if we are reminded that this was made in 1994.


Figure 2. Karl Sims: Evolved Virtual Creatures. (c) Karl Sims. Used with permission

As with many emergent phenomena, it is truly difficult to verbally explain the results Sims obtained; it is worth watching the video documentation of the piece [Sims, 1994b: http://www.karlsims.com/evolved-virtual-creatures.html].

3.4. A-Volve

Despite their undeniable interest, neither the Reynolds boids nor the Sims virtual creatures provide for interaction; the process occurs without any user participation whatsoever.

This is not the case of A-Volve, undoubtedly the artwork most commonly identified with the genetic art field, with an iconic status similar to that of Reynolds' boids with respect to emergence.

In this interactive installation, a virtual ecosystem, inhabited by virtual creatures programmed with genetic algorithms, is projected into a water-filled glass pool. The creatures' goal is, simply put, to survive and reproduce in the environment. None of the creatures is predesigned. They are either created by the users (with a drawing tool) or born as a result of the pairing of two creatures already in the system.

Besides creating new individuals, visitors can also interact with the existing creatures by placing their hands on the water surface of the pool. By doing so, they can protect a virtual creature from its predator, or try to stimulate reproduction by driving two creatures close to each other.


Figure 3: A group of visitors interacting with the A-Volve installation. (c) Christa Sommerer and Laurent Mignonneau. Used with permission

In addition to this interaction between visitors and creatures, Sommerer and Mignonneau distinguish between two other levels of interaction: among creatures and among visitors. The first is where the artists' designed behaviors are found (which, in turn, are what can be altered by the visitors): predator-prey, mating and reproduction, and the parent's protection of a new creature. The second level, that of the interactions among users, is generated through the interactions with the creatures and the identification of the visitors with those they have created [Sommerer and Mignonneau, 1996].

3.5. Digital Babylon: cumulative interaction

To complete this section, and with no claim of equivalence with the works described above, we include our own 2005 interactive installation because in its intention (we will not judge here whether also in its realization) it constitutes the missing step that brings our argument to the next point.

Digital Babylon is an interactive installation that was strongly influenced and inspired by the previously described artworks. It presents the visitor with a virtual ecosystem containing two species (from now on, the main species and the predator) in constant evolution through genetic algorithms, and a third, grass-like element that works as food. Initially, the system works without the intervention of any user.


Figure 4: The Digital Babylon virtual ecosystem (c) Joan Soler-Adillon

Through a series of rules and interactions among the virtual individuals, a system of considerable complexity and with its own equilibrium is generated [Soler-Adillon, 2005]. Without a doubt, we could find in it at least traces of various emergent phenomena.

If the visitor so desires, he or she can intervene to change the scene. Each individual of the main species will respond in a different manner to this presence, according to the coding of its virtual DNA, which determines its propensity to approach or avoid the visitor. Therefore, the visitor may choose to take actions that are helpful to the individuals that come closer to him or her (keeping them away from the predators and close to the food) or, on the contrary, take actions that are likely to cause them harm.

Depending on whether the visitor does one or the other, either the virtual individuals that come closer to him or her or those that stay away will be more likely to survive and, thus, reproduce. Hence, the visitor will affect the interactions of future visitors, making it more or less easy to interact with the individuals of the main species. The main species, therefore, will be more or less likely to come closer to (to interact with) the visitors according to the sum of all the previous interactions.


Figure 5: A visitor in front of the Digital Babylon installation (c) Joan Soler-Adillon

Through this process there exists, in addition to the immediate interaction with the user's environment, what we could call cumulative interaction: each visitor affects the future of the piece by introducing changes that are imperceptible to him or herself, but which accumulate in the evolutionary process.

With all this, there is constant change through the evolution of the two species. A change that affects how individuals move and interact with their own species and the other species, with the environment and also with the visitor.

If the installation is active over a long period of time (several days or weeks), the changes could be significant enough for a visitor who repeats the experience to perceive the differences.

As has been said, this constant evolution is generated through the use of genetic algorithms. But here (as in A-Volve) there is no predefined fitness criterion with which to evaluate the individuals. Instead, it is how they adjust to the virtual environment, and their interactions with the rest of its elements and with the visitors, that will determine whether or not an individual is going to survive and reproduce.
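This idea of selection without an explicit fitness function can be sketched as follows. The code is our own illustration, not Digital Babylon's actual implementation: we reduce the genotype to a single hypothetical "approach" gene, and survival odds, rather than a fitness score, depend on how the visitor's behavior interacts with each individual's propensity to approach.

```python
import random

POP = 100

def new_individual(approach=None):
    # The genotype is a single propensity-to-approach value in [0, 1].
    return {"approach": random.random() if approach is None else approach}

def survives(ind, visitor_helps_approachers):
    # No explicit fitness function: a helpful visitor raises the survival
    # odds of approachers; a harmful one lowers them.
    base = 0.5
    bonus = 0.3 * ind["approach"]
    p = base + bonus if visitor_helps_approachers else base - bonus
    return random.random() < p

def generation(population, visitor_helps_approachers):
    survivors = [i for i in population if survives(i, visitor_helps_approachers)]
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        gene = random.choice([a["approach"], b["approach"]])  # inheritance
        if random.random() < 0.05:                            # mutation
            gene = min(1.0, max(0.0, gene + random.gauss(0, 0.1)))
        children.append(new_individual(gene))
    return survivors + children

pop = [new_individual() for _ in range(POP)]
for _ in range(30):  # thirty helpful visits, accumulated over time
    pop = generation(pop, visitor_helps_approachers=True)

mean_approach = sum(i["approach"] for i in pop) / POP
# After many helpful visits, the gene pool drifts toward approaching.
```

No single visit changes the population noticeably, but the accumulated visits shift the gene distribution, which is precisely what makes the piece behave differently for future visitors.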

The basic idea is that, through the use of genetic algorithms, interactive experiences that change over time can be created. Experiences that offer visitors the immediate interactions we are already quite used to, while at the same time adding these interactions up into the much more subtle cumulative interaction: a process that is perceivable over significant periods of time. We dedicate the following section to this idea.

4. Simulation of evolutionary processes as a general interactive communication strategy

We have analyzed Whitelaw's definition of emergence (section 3), along with the reservations arising from Cariani's critique and the idea that interactivity is precisely what can help overcome these reservations (section 2.3). All of this gives us a good framework for understanding emergent processes based on genetic algorithms.

This approach is what we want to propose as a means of creating a particular kind of interactive communication: one capable of offering experiences that change over time through the use of such algorithms, with the idea of emergence being one of the main concepts to be taken into account.

Although the basic idea is derived from a-life art, its application need not be strictly confined to that discipline. In fact, it is a type of strategy that can be generalized to any type of interactive system, although of course it will be easier, and more appropriate, to apply to certain types of experiences than to others.

In this respect, the last point to be made before we come to our conclusions is an attempt to provide an example of possible future applications of the idea.

4.1. Future work: application in interactive installations in museums

One kind of interactive communication experience in which the application of the described framework would be especially appropriate is interactive installations in museums.

In the first place, an installation based on genetic algorithms, in the context of a science museum or a similar environment, would obviously be ideal to explain concepts such as evolution, ecosystems, equilibrium, etc.

But, in fact, the proposed system allows for something more complex and of more general application. It could indeed be applied to any interactive installation that is exposed to the public for a long enough period of time so that it can allow for the evolutionary process to have a noticeable impact, whether the installation is artistic or informational.

The idea is that a person who repeatedly visits a museum could find that a familiar piece had been modified, not by a redesign but rather by an evolutionary process involving each interaction with a museum visitor.

Obviously, this approach imposes specific requirements on the interactive design, which must take into account both the immediate interactivity and the cumulative process.

The main difficulties, in fact, would be found precisely in the emergent properties to which we are led by the genetic algorithms. Actually, we are somehow faced here with a paradox: if the designer of the interaction is able to anticipate what the piece will be in the future, then this means that no emergent phenomenon occurs. In that case, the potential interest of emergence is lost.

On the other hand, if emergent phenomena are to happen, then, by definition, it is impossible that the interaction designer fully anticipates the possibilities of the piece.

We should, then, create models in which the best possible conditions are met for achieving interesting emergent phenomena. Obviously, in order to do so, our knowledge of emergent phenomena in the context of interactivity should be as exhaustive as possible.

5. Conclusions

The concept of emergence is certainly problematic, both in terms of the amount of scientific discussion around it and the many loose uses of the term. But even so, if carefully defined, it can be a very interesting idea as a strategy in interactive communication in general.

In this sense, a-life art, and specifically genetic art, offer an appropriate context in which to approach the idea of emergence with a method that allows us to seek possible new levels of interest, without losing any of the richness of the interactive experience.

Regarding the design of these interactive experiences, we find the important paradox described above, since the proposed framework leads us to attempt to design for emergence, which, by definition, cannot be designed.

We are therefore left with the future task of understanding in detail how the idea of emergence fits with that of interactivity, in order to offer a framework for the creation of emergent interactive experiences (or, at least, interactive experiences with an important component of emergent phenomena). With this, the designer of these interactions, even if always needing to run the model to ascertain its outcome, could have the appropriate tools to undertake the task with the maximum possibility of success.

6. References

Baljko, Melanie; Tenhaaf, Nell (2008). "The aesthetics of emergence: Co-constructed interactions". ACM Transactions on Computer-Human Interaction, v. 15, n. 3, a. 11.

Cariani, Peter (1992). "Emergence and Artificial Life". In Langton, C.; Taylor, C.; Farmer, J. D.; Rasmussen, S. (Eds.) Artificial Life II. Redwood City, CA: Addison-Wesley, pp. 775-789.

Crutchfield, J. (1993). "The Calculi of Emergence: Computation, Dynamics, and Induction". Physica D, v. 75 n. 1-3, pp. 11-54.

Goldstein, Jeffrey (1999). "Emergence as a Construct: History and Issues". Emergence, v. 1, n. 1, pp. 49-72.

Holland, John (1998). Emergence. From Chaos to Order. New York: Basic Books.

Kubik, Ales (2003). "Toward a Formalization of Emergence". Artificial Life, v. 9, pp. 41-65.

McCormack, Jon; Dorin, Alan (2001). "Art, Emergence, and the Computational Sublime". In Dorin, A. (ed.) Proceedings of Second Iteration: A Conference on Generative Systems in the Electronic Arts. Melbourne: CEMA, pp. 67-81.

Miller, John H.; Page, Scott E. (2007). Complex Adaptive Systems. An Introduction to Computational Models of Social Life. Princeton, New Jersey: Princeton University Press.

Monro, Gordon (2009). "Emergence and Generative Art". Leonardo, v. 42, n. 5, pp. 446-447.

Ronald, Edmund M.A.; Sipper, Moshe; Capcarrère, Mathieu S. (1999). "Design, Observation, Surprise! A Test of Emergence". Artificial Life, v. 5, pp. 225-239.

Penny, Simon (2009). "Art and Artificial Life - a Primer". UC Irvine: Digital Arts and Culture 2009. [Retrieved from: http://escholarship.org/uc/item/1z07j77x]

Reynolds, Craig (1995). Boids. Background and Update, http://www.red3d.com/cwr/boids/index.html [Retrieved: 03/02/10].

Sims, Karl (1994a). "Evolving Virtual Creatures". Siggraph '94 Proceedings, pp. 15-22.

Sims, Karl (1994b). Evolved Virtual Creatures, http://www.karlsims.com/evolved-virtual-creatures.html [Retrieved: 03/02/10].

Soler-Adillon, Joan (2005). Digital Babylon, http://joan.cat/project.php?id=1 [Retrieved: 03/02/10].

Sommerer, C.; Mignonneau, L. (1996). A-Volve, http://www.fundacion.telefonica.com/at/avolve.html [Retrieved: 03/02/10].

Turing, A. M. (1950). "Computing machinery and intelligence". Mind, v. 59, pp. 433-460.

Whitelaw, Mitchell (1998). "Tom Ray's Hammer. Emergence and Excess in A-Life Art". Leonardo, v. 31, n. 5, pp. 377-381.

Whitelaw, Mitchell (2006). Metacreation. Art and Artificial Life. Cambridge, Massachusetts: MIT Press.

Whitley, D. (1993). "A Genetic Algorithm Tutorial". Technical Report CS-93-103, http://www.citeseernj.nec.com, 10-11-1993.


Creative Commons License

Last updated 05-06-2012
© Universitat Pompeu Fabra, Barcelona