Topic: Someone please explain to me "Emergent properties"
BillRoh
Guest
Someone please explain to me "Emergent properties"
« on: 2003-02-05 18:59:23 »
In the old days, when I was reading Michael Talbot, Davies and many others, the concept of emergent properties frequently arose.
The problem I have in all the examples I have seen is that the concept of "emergent properties" seems to me to simply be an added step to understanding. I was just reading the post by RHINO and the question came up again:
Quote:
Another explanation can be found in the properties of self-organization and emergence. Water is an emergent property of a particular arrangement of hydrogen and oxygen molecules, just as consciousness is a self-organized emergent property of billions of neurons.
I don't think of water as anything different than H and O. I know that water is an arrangement of H and O. H2O is water, and its arrangement is based on the variables of the physical location of the H2O. If I say "Water" I mean H2O in liquid form.
Where does emergent come in? I understand we use the word water, but only because the molecule was common and observable before the chemical understanding arrived. But when we make a new molecule and then name it, how does the name become an emergent property? How does "table salt" become an emergent property of sodium chloride? Table salt IS sodium chloride under the proper conditions. Water is H2O under the proper conditions.
What about consciousness? If you look at my posts through the years, I have frequently described mind, or thought, or consciousness, simply as a functioning brain. I think of mind as a physical process. I have never thought of it as anything else. The notion of saying that consciousness is a self-organized property of millions of neurons sounds incorrect to me. The neurons did not arrange themselves at all; this was dictated by the genetic instructions and the environment. Why call it more? Why add the extra step and call consciousness emergent instead of simply functioning? Why call consciousness anything but a physical process we happen to possess?
If emergent is not a description of function, then I simply see no use for it except as a placeholder for things lacking information, like a skyhook or a black box in a schematic.
It is only because so many intelligent people seem to think that there is meat to the concept of emergent properties that I don't simply laugh at the phrase. Obviously I am missing something and I would like to know what it is.
In the study of complex systems, the idea of emergence is used to indicate the arising of patterns, structures, or properties that do not seem adequately explained by referring only to the system's pre-existing components and their interaction. Emergence becomes of increasing importance as an explanatory construct when the system is characterized by the following features:
- when the organization of the system, i.e., its global order, appears to be more salient and of a different kind than the components alone;
- when the components can be replaced without an accompanying decommissioning of the whole system;
- when the new global patterns or properties are radically novel with respect to the pre-existing components; thus, the emergent patterns seem to be unpredictable and nondeducible from the components, as well as irreducible to those components.

The applicability of emergence as an explanatory construct forms a continuum. At one end, the system can be sufficiently understood by an appeal to the components and their interaction alone; at the other end, an appeal to the components and their interactions is simply not very useful in understanding the dynamics of the system as a whole. Because most systems fall somewhere between these two extremes, it is usually not the case that turning to emergence entirely supplants the need to also take into consideration the components and their interactions. Issues involved in using emergence as an explanatory construct include: how causality is to be understood in such systems; the question of whether emergence is ever more than a provisional heuristic device, to be replaced when there is more knowledge of the components and their interactions; and what general laws or principles can be discerned in the emergent patterns, structures, and properties.
Re:Someone please explain to me "Emergent properties"
« Reply #2 on: 2003-02-05 20:42:05 »
[BillRoh] The problem I have in all the examples I have seen is that the concept of "emergent properties" seems to me to simply be an added step to understanding. I was just reading the post by RHINO and the question came up again.
<quote from Shermer's article> "Another explanation can be found in the properties of self-organization and emergence. Water is an emergent property of a particular arrangement of hydrogen and oxygen molecules, just as consciousness is a self-organized emergent property of billions of neurons." <end quote>
I'll quote some more lines from the article, just because emergence is also mentioned here:
<quote from Shermer's article> "The evolution of complex life is an emergent property of simple life: prokaryote cells self-organized into eukaryote cells, which self-organized into multicellular organisms, which self-organized into ... and here we are. Self-organization and emergence arise out of complex adaptive systems that grow and learn as they change. As a complex adaptive system, the cosmos may be one giant autocatalytic (self-driving) feedback loop that generates such emergent properties as life. We can think of self-organization as an emergent property and emergence as a form of self-organization. Complexity is so simple it can be put on a bumper sticker: life happens. <end quote>
[BillRoh] I don't think of water as anything different than H and O. I know that water is an arrangement of H and O. H2O is water, and its arrangement is based on the variables of the physical location of the H2O. If I say "Water" I mean H2O in liquid form.
Where does emergent come in? I understand we use the word water, but only because the molecule was common and observable before the chemical understanding arrived. But when we make a new molecule and then name it, how does the name become an emergent property? How does "table salt" become an emergent property of sodium chloride? Table salt IS sodium chloride under the proper conditions. Water is H2O under the proper conditions.
[rhinoceros] In the case of water, it is no big deal. Water has some physical, chemical and biological properties which neither Oxygen nor Hydrogen had. These properties emerge only when Oxygen and Hydrogen are combined to become water, hence they are emergent properties.
The name of salt is not an emergent property either. Its physical, chemical and biological properties are. The chemical elements that it was made of did not have these properties, so the properties are emergent.
[BillRoh] What about consciousness? If you look at my posts through the years, I have frequently described mind, or thought, or consciousness, simply as a functioning brain. I think of mind as a physical process. I have never thought of it as anything else.
[rhinoceros] Maybe someone else can add some thoughts here. I have gone over this one before, and my personal preference is to define mind as the ability to think. That way, I can have a meaningful conversation with someone who does not believe that mind is a brain function, and I can be fully aware that I'll have to present evidence that it is, rather than making it an issue of definition.
[BillRoh] The notion of saying that consciousness is a self-organized property of millions of neurons sounds incorrect to me. The neurons did not arrange themselves at all; this was dictated by the genetic instructions and the environment. Why call it more? Why add the extra step and call consciousness emergent instead of simply functioning? Why call consciousness anything but a physical process we happen to possess?
[rhinoceros] The example of emergent consciousness is a little bit more speculative. As a start for the discussion, I'll just say that a single neuron or a few neurons do not have consciousness -- none has ever been found to have it. The brain, which is a complex network of billions of neurons, does have consciousness, so this property emerged somewhere along the way up the path of complexity.
So, we can assume that a certain level of complexity is necessary for the emergence of consciousness. Some researchers go further and speculate that a certain level of complexity is also sufficient for the emergence of consciousness.
The latter is popular with some researchers of Artificial Intelligence who hope that they will manage to create consciousness without having to understand its specifics.
The issue of the self-organization of the neurons -- that is, how the genetic instructions were carried out initially and how the neurons react and change with the input they get -- is not an easy one; we can't know without experimental data. I have heard of some experiments where the neurons change significantly and undertake new duties after a trauma or disability has been inflicted on the brain.
Now, "Why call it more [than a functioning brain]?" Because when the "engineer's" approach to a functioning brain does not help us much to understand how consciousness works, we have to try more specific hypothesis or even alternative approaches to the problem. This is even more important for those who want to build "conscious" machines without using biological stuff. Those people have to try to understand the "nature" of thought and consciousness in more abstract terms.
[BillRoh] If emergent is not a description of function, then I simply see no use for it except as a placeholder for things lacking information, like a skyhook or a black box in a schematic.
It is only because so many intelligent people seem to think that there is meat to the concept of emergent properties that I don't simply laugh at the phrase. Obviously I am missing something and I would like to know what it is.
[rhinoceros] No, emergent is just something -- anything -- that was not there before and appears under some conditions.
Thinking about thought: Self-organization and the Science of Emergence, by Piero Scaruffi (Koestler, Salthe, Von Bertalanffy, Laszlo, Haken, Eigen, Prigogine, Cohen, Turing, Von Neumann, Conway, Holland, Goldberg, Langton, Kauffman, Thom, Gell-Mann, Varela, Fuller)

The Origin of Order

When Darwin discovered evolution, he also indirectly created the premises for a momentous shift in the scientific paradigm. Over the centuries, Science had always held that order can be built only rationally, by application of a set of fundamental laws of Physics. Scientists like Newton and Einstein simply refined that model by using more and more sophisticated mathematics. Throughout the theoretical developments of Physics, the fundamental idea remained that in nature order needs to be somehow created by external forces. Darwin showed that order can build itself spontaneously, without any help from the outside. Evolution is such a process: it is capable of building higher and higher degrees of order starting from almost nothing. As far as Darwin was concerned, this paradigm only applied to Biology, but the idea has been so powerful that recently more and more natural phenomena have been reduced to some kind of spontaneous "emergence" of order.

Indirectly, Darwin is causing a dramatic change in the idea of Physics itself: are splitting the atom and observing distant galaxies the right ways to explain the universe, or should we focus instead on the evolutionary process that gradually built the universe the way it is now? Should we study how things are modified when a force is applied (the vast majority of what Physics does today), or should we deal with how things modify themselves spontaneously? Can Physics ever explain how a tree grows or how a cloud moves by bombarding particles with radiation? The macroscopic phenomena we observe are more likely to be explained by laws about systems than by laws about particles. Science's ultimate theme is still the origin of order, but the perspective may be changing.

Furthermore, order is directly related to information, and Darwin's theory has to do with the creation of information. From its new perspective, Science may be as much a study of information as it is a study of gravitation or electricity. And the creation of order is inevitably related to the destruction of entropy (or the creation of negative entropy): entropy is therefore elevated to a higher rank among physical quantities.

As a matter of fact, Darwin's laws, unlike the laws of nature claimed by the physical sciences, cannot be written down in the form of differential equations. They can only be stated in a generic manner, and any attempt to formalize them resorts to algorithms rather than equations. Algorithms are fundamentally different from equations in that they are discrete rather than continuous, they occur in steps rather than instantaneously, and they can refer to themselves. A Science based on algorithms would be inherently different from a Science based on equations.

Finally, Darwin's paradigm is one that is rooted in the concept of organization and that ultimately aims at explaining organization. Indirectly, Darwin brought to the surface the elementary fact that the concept of organization is deeply rooted in the physical universe. Darwin's treatise on the origin of species was indeed a treatise on the origin of order. There lies its monumental importance.

Design Without a Designer

Why do children grow up? Why aren't we born adults?
Why do all living things (from organs to ecosystems) have to grow, rather than being born directly in their final configuration? Darwin's principle was that, given a population and fairly elementary rules of how the population can evolve (mainly, natural selection), the population will evolve and get better and better (adapted) over time. Whether natural selection is really the correct rule is secondary. The powerful idea was that the target object can be reached not by designing it and then building it, but by taking a primitive object and letting it evolve. The target object will not be built: it will emerge. Trees are not built, they grow. Societies are not built, they form over centuries. Most of the interesting things we observe in the world are not built; they developed slowly over time. How they happen to be the way they are depends to some extent on the advantages of being the way they are, and to some extent on mere chance.

When engineers build a bridge, they don't let chance play with the design and they don't assume that the bridge will grow by itself. They know exactly what the bridge is going to look like, and they decide which day construction will be completed. They know the bridge is going to work because they can use mathematical formulas. Nature seems to use a different system, where things use chance to vary, and variation leads to evolution because of the need for adaptation. By using this system, Nature seems to be able to obtain far bigger and more complex structures than humans can ever dream of building. It is ironic that, in the process, Nature uses much simpler mathematics. Engineers need to deal with derivatives and cosines. Nature's mathematics (i.e., the mathematics involved in genetic variation) is limited to arithmetic. Humans have developed a system that is much more complex than anything Nature has ever dreamed of using!

It is stunning that such simple algorithms as used by Nature can produce the complexity of living organisms. Each algorithm can be reduced to even simpler steps. And still the repeated application of those steps eventually yields the complex order of life. The same theme occurs inside the brain. Neurons exchange simple messages, but the network of those messages over time can produce the very complex behavior of the human mind. Another simple algorithm that creates complexity. In both cases the algorithm is simple, but there is a catch. The algorithm is such that every time it ends, it somehow remembers the result of its computation and will use it as the starting point for the next run. Species are selected out of the most recently selected species. Neural connections are modified out of the connections already established.

Chaos and Complexity

One of the themes straddling both the biological and physical sciences is the quest for a mathematical model of phenomena of emergence (spontaneous creation of order), and in particular adaptation, and a physical justification of their dynamics (which seem to violate physical laws). The physicist Sadi Carnot, one of the founding fathers of Thermodynamics, realized that the statistical behavior of a complex system can be predicted if its parts are all identical and their interactions weak. At the beginning of the century, another French physicist, Henri Poincaré, realizing that the behavior of a complex system can become unpredictable if it consists of few parts that interact strongly, invented "chaos" theory.
A system is said to exhibit the property of chaos if a slight change in the initial conditions results in large-scale differences in the result. Later, Bernard Derrida showed that a system goes through a transition from order to chaos if the strength of the interactions among its parts is gradually increased. But then very "disordered" systems spontaneously "crystallize" into a higher degree of order.

First of all, the subject is "complexity", because a system must be complex enough for any property to "emerge" out of it. Complexity can be formally defined as nonlinearity. The world is mostly nonlinear. The science of nonlinear dynamics was originally christened "chaos theory" because unpredictable solutions emerge from nonlinear equations.

A very useful abstraction to describe the evolution of a system in time is that of a "phase space". Our ordinary space has only three dimensions (width, height, depth), but in theory we can think of spaces with any number of dimensions. A useful abstraction is that of a space with six dimensions, three of which are the usual spatial dimensions; the other three are the components of velocity along those spatial dimensions. In ordinary 3-dimensional space, a "point" can only represent the position of a system. In 6-dimensional phase space, a point represents both the position and the motion of the system. The evolution of a system is represented by some sort of shape in phase space. The shapes that chaotic systems produce in phase space are called "strange attractors", because the system will tend towards the kinds of state described by the points in the phase space that lie within them.

The program then becomes that of applying the theory of nonlinear dynamic systems to Biology. Inevitably, this implies that the processes that govern human development are the same ones that act on the simplest organisms (and even on some nonliving systems). They are processes of emergent order and complexity, of how structure arises from the interaction of many independent units. The same processes recur at every level, from morphology to behavior. Darwin's vision of natural selection as a creator of order is probably not sufficient to explain all the spontaneous order exhibited by both living and dead matter. At every level of science (including the brain and life), the spontaneous emergence of order, or self-organization of complex systems, is a common theme. Koestler and Salthe have shown how complexity entails hierarchical organization. Von Bertalanffy's general systems theory, Haken's synergetics, and Prigogine's non-equilibrium Thermodynamics belong to the class of mathematical disciplines that are trying to extend Physics to dynamic systems. These theories have in common the fact that they deal with self-organization (how collections of parts can produce structures) and attempt to provide a unifying view of the universe at different levels of organization (from living organisms to physical systems to societies).
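A quick way to see the "slight change in the initial conditions" property for yourself is the logistic map, the standard one-line example of a chaotic nonlinear system. This little Python sketch is my own illustration, not from Scaruffi's text; the starting values are arbitrary:

# Sensitive dependence on initial conditions in the logistic map
# x_next = r * x * (1 - x), which is chaotic at r = 4.0
r = 4.0

def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.300000)
b = trajectory(0.300001)   # differs by only one part in a million

for n in (0, 10, 20, 30, 40):
    print(n, round(a[n], 6), round(b[n], 6))
# After about 20 iterations the two trajectories are completely uncorrelated,
# even though their starting points were almost identical.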
Holarchies

The Hungarian writer and philosopher Arthur Koestler first brought together a wealth of biological, physical, anthropological and philosophical notions to construct a unified theory of open hierarchical systems. Language has to do with a hierarchical process of spelling out implicit ideas in explicit terms by means of rules and feedbacks. Organisms and societies also exhibit the same hierarchical structure.

In these hierarchies, each intermediary entity ("holon") functions as a self-contained whole relative to its subordinates and as one of the dependent parts of its superordinates. Each holon tends to persist and assert its pattern of activity. Wherever there is life, it must be hierarchically organized. Life exhibits an integrative property (one that manifests itself as symbiosis) that enables the gradual construction of complex hierarchies out of simple holons. In nature there are no separate, indivisible, self-contained units. An "individual" is an oxymoron. An organism is a hierarchy of self-regulating holons (a "holarchy") that work in coordination with their environment. Holons at the higher levels of the hierarchy enjoy progressively more degrees of freedom, and holons at the lower levels of the hierarchy have progressively fewer degrees of freedom. Moving up the hierarchy, we encounter more and more complex, flexible and creative patterns of activity. Moving down the hierarchy, behavior becomes more and more mechanized.

A hierarchical process is also involved in perception and memorization: it gradually reduces the percept to its fundamental elements. A dual hierarchical process is involved in recalling: it gradually reconstructs the percept. Hierarchical processes of the same nature can be found in the development of the embryo, in the evolution of species, and in consciousness itself (which should be analyzed not in the context of the mind/body dichotomy but in the context of a multi-levelled hierarchy and of degrees of consciousness). They all share common themes: a tendency towards integration (a force that is inherent in the concept of hierarchic order, even if it seems to challenge the second law of Thermodynamics as it increases order), an openness at the top of the hierarchy (towards higher and higher levels of complexity), and the possibility of infinite regression.

Hierarchies from Complexity

Stanley Salthe, by combining the metaphysics of Justus Buchler and Michael Conrad's "statistical state model" of the evolutionary process, has developed what amounts to a theory of everything: an ontology of the world, a formal theory of hierarchies and a model of the evolution of the world. The world is viewed as a determinate machine of unlimited complexity. Within complexity, discontinuities arise. The basic structure of this world must allow for complexity that is spontaneously stable and that can be broken down into things divided by boundaries. The most natural way for the world to satisfy this requirement is to employ a hierarchical structure, which is also implied by Buchler's principle of ordinality: Nature (i.e., our representation of the world) is a hierarchy of entities existing at different levels of organization. Hierarchical structure turns out to be a consequence of complexity. Entities are defined by four criteria: boundaries, scale, integration, continuity. An entity has size, is limited by boundaries, and consists of an integrated system which varies continuously in time. Entities at different levels interact through mutual constraints, each constraint carrying information for the level it operates upon. A process can be described by a triad of contiguous levels: the one it occurs at, its context (what the philosopher Mario Bunge calls "environment") and its causes (Bunge's "structure"). In general, a lower level provides initiating conditions for a process and an upper level provides boundary conditions. Representing a dynamic system hierarchically requires a triadic structure.
Aggregation occurs upon differentiation. Differentiation interpolates levels between the original two, and the new entities aggregate in a way that affects the structure of the upper levels: every time a new level emerges, the entire hierarchy must reorganize itself. Salthe also recalls a view of complexity due to the physicist Howard Hunt Pattee: complexity as the result of interactions between physical and symbolic systems. A physical system is dependent on the rates at which processes occur, whereas a symbolic system is not. Symbolic systems frequently serve as constraints applied to the operation of physical systems, and frequently appear as products of the activity of physical systems (e.g., the genome in a cell). A physical system can be said to be "complex" when a part of it functions as a symbolic system (as a representation, and therefore as an observer) for another part of it.

These abstract principles can then be applied to organic evolution. Over time, Nature generates entities of gradually more limited scope and more precise form and behavior. This process populates the hierarchy with intermediate levels of organization as the hierarchy spontaneously reorganizes itself. The same model applies to all open systems, whether organisms or ecosystems or planets. By applying principles of complex systems to biological and social phenomena, Salthe attempts to reformulate Biology on development rather than on evolution. His approach is non-Darwinian to the extent that development, and not evolution, is the fundamental process in self-organization. Evolution is merely the result of a margin of error. His theory rests on a bold fusion of hierarchy theory, Information Theory and Semiotics. Salthe is looking for a grand theory of nature, which turns out to be essentially a theory of change, which turns out to be essentially a theory of emergence.

General Systems Theory

"General Systems Theory" was born before Cybernetics, and cybernetic systems are merely a special case of self-organizing systems; but General Systems Theory took longer to establish itself. It was conceived in the 1930s by the Austrian biologist Ludwig Von Bertalanffy. His ambition was to create a "universal science of organization". His legacy is to have started "systems thinking": thinking about systems as systems, and not as mere aggregates of parts. The classical approach to the scientific description of a system's behavior (whether in Physics or in Economics) can be summarized as the search for "isolable causal trains" and the reduction to atomic units. This approach is feasible under two conditions: 1. that the interaction among the parts of the system be negligible, and 2. that the behavior of the parts be linear. Von Bertalanffy's "systems", on the other hand, are those entities (or "organized complexities") that consist of interacting parts, usually described by a set of nonlinear differential equations. Systems Theory studies principles which apply to all systems, properties that apply to any entity qua system. Basic concepts of Systems Theory are, for example, the following: every whole is based upon the competition among its parts; individuality is the result of a never-ending process of progressive centralization, whereby certain parts gain a dominant role over the others. General Systems Theory looks for laws that can be applied to a variety of fields (i.e., for an isomorphism of law in different fields), particularly in the biological, social and economic sciences (but even in history and politics).
General Systems Theory mainly studies "wholes", which are characterized by such holistic properties as hierarchy, stability, teleology. "Open Systems Theory" is a subset of General Systems Theory. Because of the second law of Thermodynamics, a change in entropy in closed systems is always positive: order is continually destroyed. In open systems, on the other hand, entropy production due to irreversible processes is balanced by the import of negative entropy (as in all living organisms). If an organism is viewed as an open system in a steady state, a theory of organismic processes can be worked out. Furthermore, a living organism can be viewed as a hierarchical order of open systems, where each level maintains its structure thanks to the continuous change of components at the next lower level. Living organisms maintain themselves in spite of continuous irreversible processes and even proceed towards higher and higher degrees of order.

Ervin Laszlo's take on a "theory of natural systems" (i.e., a theory of the invariants of organized complexity) is centered around the concept of the "ordered whole", whose structure is defined by a set of constraints. Laszlo adopts a variant of Ashby's principle of self-organization, according to which any isolated natural system subject to constant forces is inevitably inhabited by "organisms" that tend towards stationary or quasi-stationary non-equilibrium states. In Laszlo's view, the combination of internal constraints and external forces yields adaptive self-organization. Natural systems evolve towards increasingly adapted states, corresponding to increasing complexity (or negative entropy). Natural systems sharing an environment tend to organize in hierarchies. The set of such systems tends to become itself a system, its subsystems providing the constraints for the new system. Laszlo offered rigorous foundations to deal with the emergence of order at the atomic ("micro-cybernetics"), organismic ("bio-cybernetics") and social ("socio-cybernetics") levels. A systemic view also permits a formal analysis of a particular class of natural systems: cognitive systems. The mind, just like any other natural system, exhibits a holistic character, adaptive self-organization, and hierarchies, and can be studied with the same tools ("psycho-cybernetics").

Synergetics

"Synergetics", as developed in Germany by the physicist Hermann Haken, is a theory of pattern formation in complex systems. It tries to explain structures that develop spontaneously in nature. Synergetics studies cooperative processes of the parts of a system far from equilibrium that lead to an ordered structure and behavior for the system. Haken's favorite example was the laser: how do the atoms of the laser agree to produce a single coherent wave flow? The answer is that the laser is a self-organizing system far from equilibrium (what Prigogine would call a dissipative structure). A "synergetic" process in a physical system is one in which, when energy is pumped into the system, some macroscopic structure emerges from the disorderly behavior of the large number of microscopic particles that make up the physical system. As energy is pumped into the system, initially nothing seems to happen, other than additional excitation of the particles, but then the system reaches a threshold beyond which structure suddenly emerges. The laser is such a synergetic process: a beam of coherent light is created out of the chaotic movement of particles.
What happens is that energy pushes the system of particles beyond a threshold, and suddenly the particles start behaving harmoniously. Since order emerges out of chaos, and chaos is not well defined, synergetics employs probabilities (to describe uncertainty) and information (to describe approximation). Entropy becomes a central concept, relating Physics to Information Theory. Synergetics revolves around a number of technical concepts: compression of the degrees of freedom of a complex system into dynamic patterns that can be expressed as a collective variable; behavioral attractors of changing stabilities; and the appearance of new forms as non-equilibrium phase transitions. Synergetics applies to systems driven far from equilibrium, where the classic concepts of Thermodynamics are no longer adequate. It expresses the fact that order can arise from chaos and can be maintained by flows of energy/matter. Systems at instability points (at the "threshold") are driven by a "slaving principle": long-lasting quantities (the macroscopic pattern) can enslave short-lasting quantities (the chaotic particles) and force order on them (thereby becoming "order parameters"). The system exhibits a stable "mode", which is the chaotic motion of its particles, and an unstable "mode", which is the macroscopic structure and behavior of the whole system. Close to instability, stable modes are "enslaved" by unstable modes and can be ignored. Instead of having to deal with millions of chaotic particles, one can focus on the macroscopic quantities. De facto, the degrees of freedom of the system are reduced. Haken shows how one can write the dynamic equations for the system, and how such mathematical equations reflect the interplay between stochastic forces ("chance") and deterministic forces ("necessity").

Hypercycles

The German chemist Manfred Eigen was awarded the Nobel Prize in 1967 for discovering that very short pulses of energy could trigger extremely fast chemical reactions. In the following years, he started looking for ways in which very fast reactions could be used to create and sustain life. Indirectly, he ended up studying the behavior of biochemical systems far from equilibrium. Eventually, Eigen came up with the concept of a "hypercycle". A hypercycle is a cyclic reaction network, i.e. a cycle of cycles of cycles (of chemical reactions). He then proved that life can be viewed as the product of a hierarchy of such hypercycles. A catalyst is a substance that favors a chemical reaction. When enough energy is provided, some catalytic reactions tend to combine to form networks, and such networks may contain closed loops, called catalytic cycles. If even more energy is pumped in, the system moves even farther from equilibrium, and then catalytic cycles tend to combine to form closed loops of a higher level, or hypercycles, in which the enzymes produced by one cycle act as catalysts for the next cycle in the loop. Each link of the loop is now a catalytic cycle itself. Eigen showed that hypercycles are capable of self-replication, which may therefore have been a property of nature even before the invention of living organisms. Hypercycles are capable of evolution through more and more complex stages. Hypercycles compete for natural resources and are therefore subject to natural selection. The hypercycle falls short of being a living system because it defines no "boundary": the boundary is the container where the chemical reaction is occurring.
A living system, on the other hand, has a boundary that is part of the living system (e.g., the skin). Catalysis is the phenomenon by which a chemical reaction is sped up: without catalysis, all the processes that give rise to life would take a lot longer, and probably would not be fast enough for life to happen. Eigen then shows that such catalytic reactions can be organized into an autocatalytic cycle, i.e. a cycle that is capable of reproducing itself: this is the fundamental requirement of life. A set of autocatalytic cycles gets, in turn, organized into a catalytic hypercycle. This catalytic hypercycle represents the basic form of life. Formally, "hypercycles" are a class of nonlinear reaction networks. They can originate spontaneously within the population of a species through natural selection and then evolve to higher complexity by allowing for the coherent evolution of a set of functionally coupled self-replicating entities. A hypercycle is based on nonlinear autocatalysis, which is a chain of reproduction cycles linked by cyclic catalysis, i.e. by another autocatalysis. A hypercycle is a cycle of cycles of cycles. A hypercycle can be viewed as the next higher level in the hierarchy of autocatalytic systems.

Systems can be classified in four groups according to their stability with respect to fluctuations: stable systems (the fluctuations are self-regulating), indifferent systems (the fluctuations have no effect), unstable systems (self-amplification of the fluctuations) and variable systems (which can be in any of the previous states). Only the last type is suitable for the generation of biological information, because it can play all the best tactics: indifference towards a broad mutant spectrum, stability towards selective advantages, and instability towards unfavorable configurations. In other words, it can take the most efficient stance in the face of both favorable and adverse situations. Eigen's model explains the simultaneous unity (due to the use of a universal genetic code) and diversity (due to the "trial and error" approach of natural selection) in evolution. This dual process started even before life was created. The evolution of species was preceded by an analogous stepwise process of molecular evolution. Whatever the mathematics, the bottom line is that natural selection itself turns out to be inevitable: given a set of self-reproducing entities that feed on a common and limited source of energetic/material supply, natural selection will spontaneously appear. Natural selection is a mathematical consequence of the dynamics of self-reproducing systems of this kind.
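That last point can be illustrated with a toy version in a few lines of Python (my own sketch, not Eigen's actual equations; the rates are arbitrary). Each replicator grows in proportion to its own concentration, and a shared dilution term phi keeps the total constant, standing in for the limited supply:

# Selection emerging from self-reproduction under a shared, limited resource:
# dx_i/dt = x_i * (f_i - phi), where phi = sum_j f_j * x_j (average fitness)
f = [1.0, 1.5, 2.0]      # replication rates of three competing replicators
x = [1/3, 1/3, 1/3]      # initial relative concentrations
dt = 0.01

for step in range(2000):
    phi = sum(fi * xi for fi, xi in zip(f, x))             # average fitness
    x = [xi + dt * xi * (fi - phi) for fi, xi in zip(f, x)]

print([round(xi, 3) for xi in x])
# The fastest replicator (f = 2.0) approaches concentration 1.0 and the
# others die out -- "natural selection" appears without being programmed in.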
Dissipative Systems

By far, though, the most influential school of thought has been the one related to Ilya Prigogine's non-equilibrium Thermodynamics, which redefined the way scientists approach natural phenomena and brought self-organizing processes to the forefront of the study of complex systems. His theory found a stunning number and variety of fields of application, from Chemistry to Sociology. In his framework, the most difficult problems of Biology, from morphogenesis to evolution, found a natural model. Classical Physics describes the world as a static and reversible system that undergoes no evolution, whose information is constant in time. Classical Physics is the science of being. Thermodynamics, instead, describes an evolving world in which irreversible processes occur. Thermodynamics is the science of becoming.

The second law of Thermodynamics, in particular, describes the world as evolving from order to disorder, while biological evolution is about the complex emerging from the simple (i.e. order arising from disorder). While apparently contradictory, these two views show that irreversible processes are an essential part of the universe. Furthermore, conditions far from equilibrium foster phenomena, such as life, that classical Physics does not cover at all. Irreversible processes and non-equilibrium states turn out to be fundamental features of the real world. Prigogine distinguishes between "conservative" systems (which are governed by the three conservation laws, for energy, translational momentum and angular momentum, and which give rise to reversible processes) and "dissipative" systems (subject to fluxes of energy and/or matter). The latter give rise to irreversible processes.

The theme of science is order. Order can come either from equilibrium systems or from non-equilibrium systems that are sustained by a constant source (or, dually, by a persistent dissipation) of matter/energy. In the latter systems, order is generated by the flux of matter/energy. All living organisms (as well as systems such as the biosphere) are non-equilibrium systems. Prigogine proved that, under special circumstances, the distance from equilibrium and the nonlinearity of a system drive the system to ordered configurations, i.e. create order. The science of being and the science of becoming describe dual aspects of Nature.

What is needed is a combination of factors that are exactly the ones found in living matter: a system made of a large collection of independent units which interact with each other, a flow of energy through the system that drives it away from equilibrium, and nonlinearity. Nonlinearity expresses the fact that a perturbation of the system may reverberate and have disproportionate effects. Non-equilibrium and nonlinearity favor the spontaneous development of self-organizing systems, which maintain their internal organization, regardless of the general increase in entropy, by expelling matter and energy into the environment.

When such a system is driven away from equilibrium, local fluctuations appear. This means that in places the system gets very unstable. Localized tendencies to deviate from equilibrium are amplified. When a threshold of instability is reached, one of these runaway fluctuations is so amplified that it takes over as a macroscopic pattern. Order appears from disorder through what are initially small fluctuations within the system. Most fluctuations die along the way, but some survive the instability and carry the system beyond the threshold: those fluctuations "create" a new form for the system. Fluctuations become sources of innovation and diversification. The potentialities of nonlinearity are dormant at equilibrium but are revealed by non-equilibrium: multiple solutions appear, and therefore diversification of behavior becomes possible. Technically speaking, nonlinear systems driven away from equilibrium can generate instabilities that lead to bifurcations (and symmetry breaking beyond bifurcation). When the system reaches the bifurcation point, it is impossible to determine which path it will take next. Chance rules. Once the path is chosen, determinism resumes. The multiplicity of solutions in nonlinear systems can even be interpreted as a process of gradual "emancipation" from the environment.
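The bifurcation idea can also be watched numerically. Sweeping the control parameter r of the same logistic map used earlier, the long-run behavior splits from one stable value into 2, then 4, and finally into chaos. This is a toy illustration of period doubling (my own sketch, with arbitrary parameter values), not of Prigogine's thermodynamic equations:

# Period-doubling route to chaos in the logistic map x -> r * x * (1 - x)
def long_run_states(r, transient=1000, keep=64):
    x = 0.5
    for _ in range(transient):       # let the transient behavior die out
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):            # then record the states the system visits
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return seen

for r in (2.8, 3.2, 3.5, 3.9):
    n = len(long_run_states(r))
    print(f"r = {r}: {'chaotic' if n > 16 else n} long-run state(s)")
# Output: 1 state, 2 states, 4 states, then chaos -- a smooth change in the
# parameter produces sudden qualitative jumps in behavior.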
Most of Nature is made of such "dissipative" systems, of systems subject to fluxes of energy and/or matter. Dissipative systems conserve their identity thanks to the interaction with the external world. In dissipative structures, non-equilibrium becomes a source of order. These considerations apply very much to living organisms, which are prime examples of dissipative structures in non-equilibrium. Prigogine's theory explains how life can exist and evolution work towards higher and higher forms of life. A "minimum entropy principle" characterizes living organisms: stable near-equilibrium dissipative systems minimize their rate of entropy production. From non-equilibrium Thermodynamics a wealth of concepts has originated: invariant manifolds, attractors, fractals, stability, bifurcation analysis, normal forms, chaos, Lyapunov exponents, entropies. Catastrophe and chaos theories turn out to be merely special cases of nonlinear non-equilibrium systems. In conclusion, self-organization is the spontaneous emergence of ordered structure and behavior in open systems that are in a state far from equilibrium, described mathematically by nonlinear equations.

Catastrophe Theory

René Thom's catastrophe theory, originally formulated in 1967 and popularized ten years later by the work of the British mathematician Christopher Zeeman, became a widely used tool for classifying the solutions of nonlinear systems in the neighborhood of stability breakdown. In the beginning, Thom, a French mathematician, was interested in structural stability in topology (stability of topological form) and was convinced of the possibility of finding general laws of form evolution regardless of the underlying substance of form, as already stated at the beginning of the century by D'Arcy Thompson. Thom's goal was to explain the "succession of form". Our universe presents us with forms (that we can perceive and name). A form is defined, first and foremost, by its stability: a form lasts in space and time. Forms change. The history of the universe, insofar as we are concerned, is a ceaseless creation, destruction and transformation of form. Life itself is, ultimately, creation, growth and decay of form.

Every physical form is represented by a mathematical quantity called an "attractor" in a space of internal variables. If the attractor satisfies the mathematical property of being "structurally stable", then the physical form is the stable form of an object. Changes in form, or morphogenesis, are due to the capture of the attractors of the old form by the attractors of the new form. All morphogenesis is due to conflict between attractors. What catastrophe theory does is to "geometrize" the concept of "conflict". The universe of objects can be divided into domains of different attractors. Such domains are separated by shock waves. Shock wave surfaces are singularities called "catastrophes". A catastrophe is a state beyond which the system is destroyed in an irreversible manner. Technically speaking, the "ensembles de catastrophes" are hypersurfaces that divide the parameter space into regions of completely different dynamics. The bottom line is that dynamics and form become dual properties of nonlinear systems. This is a purely geometric theory of morphogenesis; its laws are independent of the substance, structure and internal forces of the system. Thom proved that in a 4-dimensional space there exist only 7 types of elementary catastrophes.
Elementary catastrophes include the "fold" (destruction of an attractor which is captured by a lesser potential), the "cusp" (bifurcation of an attractor into two attractors), and so on. From these singularities, more and more complex catastrophes unfold, until the final catastrophe. Elementary catastrophes are "local accidents". The form of an object is due to the accumulation of many of these "accidents".
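The cusp is simple enough to compute directly. The equilibria of the cusp potential V(x) = x^4/4 + a*x^2/2 + b*x are the real roots of x^3 + a*x + b = 0, and their number jumps as the control parameters cross the boundary 4*a^3 + 27*b^2 = 0. A small Python sketch (my own illustration; the parameter values are arbitrary):

import numpy as np

def equilibria(a, b):
    # Real roots of x^3 + a*x + b = 0: the equilibria of the cusp potential
    roots = np.roots([1.0, 0.0, a, b])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Sweep the control parameter b at fixed a = -3.0: the number of equilibria
# jumps from 1 to 3 and back (at b = -2 and b = +2) -- a discontinuous change
# in the available states caused by a smooth change in the controls.
a = -3.0
for b in (-3.0, -1.0, 0.0, 1.0, 3.0):
    print(f"a={a}, b={b:+.1f} -> equilibria at {[round(x, 2) for x in equilibria(a, b)]}")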
The Origin of Regularity

Prigogine's "bifurcation theory" is a descendant of the theory of stability initiated by the Russian mathematician Aleksandr Lyapunov. René Thom's catastrophe theory is a particular case of bifurcation theory, so they all belong to the same family. They all elaborate on the same theorem, Lyapunov's theorem: for isolated systems, thermodynamic equilibrium is an attractor of non-equilibrium states. Then the story unfolds, leading to dissipative systems and eventually to the reversal of Thermodynamics' fundamental assumption, the destruction of structure. Order emerges from the very premises that seem to deny it.

Jack Cohen and Ian Stewart are among those who study how the regularities of nature (from Cosmology to Quantum Theory, from Biology to Cognitive Psychology) emerge from the underlying chaos and complexity of nature: "emergent simplicities collapse chaos". They argued that external constraints are fundamental in shaping biological systems (DNA does not uniquely determine an organism) and defined new concepts: "simplexity" (the tendency of simple rules to emerge from underlying disorder and complexity) and "complicity" (the tendency of interacting systems to coevolve, leading to a growth of complexity). Simplexity is a "weak" form of emergence, and is ubiquitous. Complicity is a stronger form of emergence, and is responsible for consciousness and evolution. Emergence is the rule, not the exception, and it is shaped by simplexity and complicity.

Emergent Computation

Emergent computation is to standard computation what nonlinear systems are to linear systems: it deals with systems whose parts interact in a nontrivial way. Both Turing and Von Neumann, the two mathematicians who inspired the creation of the computer, were precursors of emergent computation: Turing formulated a theory of self-catalytic systems and Von Neumann studied self-replicating automata.

Alan Turing (in the 1950s) advanced the reaction-diffusion theory of pattern formation, based on the bifurcation properties of the solutions of differential equations. Turing devised a model to generate stable patterns:
- X catalyzes itself;
- X diffuses slowly;
- X catalyzes Y;
- Y diffuses quickly;
- Y inhibits X;
- Y may or may not catalyze or inhibit itself.
Some reactions of this kind might be able to create ordered spatial schemes from disordered ones. The function of genes is purely catalytic: they catalyze the production of new morphogens, which will catalyze more morphogens until eventually form emerges.

Von Neumann saw life as a particular class of automata (of programmable machines). Life's main property is the ability to reproduce. Von Neumann proved that a machine can be programmed to make a copy of itself. Von Neumann's automaton was conceived to absorb matter from the environment and process it to build another automaton, including a description of itself. Von Neumann realized (years before the genetic code was discovered) that the machine needed a description of itself in order to reproduce. The description itself would be copied to make a new machine, so that the new machine too could copy itself. In Von Neumann's simulated world, a large checkerboard was a simplified version of the world, in which both space and time were discrete. Time, in particular, was made to advance in discrete steps, which meant that change could occur only at each step, and simultaneously for everything that had to change. Von Neumann's studies of the 1940s led to an entire new field of Mathematics, called "cellular automata".
Technically speaking, cellular automata are discrete dynamical systems whose behavior is completely specified in terms of a local relation. In practice, cellular automata are the computer scientist's equivalent of the physicist's concept of a field. Space is represented by a uniform grid, and time advances in discrete steps. Each cell of space contains bits of information. Laws of nature express what operation must be performed on each cell's bits of information, based on its neighbors' bits of information. Laws of nature are local and uniform. The amazing thing is that such simple "organisms" can give rise to very complex structures, and those structures recur periodically, which means that they achieve some kind of stability.

Von Neumann's idea of the dual genetics of self-reproducing automata (that the genetic code must act both as instructions on how to build an organism and as data to be passed on to the offspring) was basically the idea behind what would be called DNA: DNA encodes the instructions for making all the enzymes and proteins that a cell needs to function, and DNA makes a copy of itself every time the cell divides in two. Von Neumann indirectly understood other properties of life: the ability to increase its complexity (an organism can generate organisms that are more complex than itself) and the ability to self-organize. When a machine (e.g., an assembly line) builds another machine (e.g., an appliance), there occurs a degradation of complexity, whereas the offspring of living organisms are at least as complex as their parents, and their complexity increases over evolutionary time. A self-reproducing machine would be a machine that produces another machine of equal or higher complexity. By representing an organism as a group of contiguous multi-state cells (either empty or containing a component) in a 2-dimensional matrix, Von Neumann proved that a Turing-type machine that can reproduce itself could be simulated using a 29-state cell component.

John Conway is the inventor of the game "Life", which is staged in Von Neumann's checkerboard world (in which the state of a square changes depending on the adjacent squares). Conway proved that, given enough resources and time, self-reproducing patterns will occur. Turing proved that there exists a universal computing machine. Von Neumann proved that there exists a universal computing machine which, given a description of an automaton, will construct a copy of it; by extension, that there exists a universal computing machine which, given a description of a universal computing machine, will construct a copy of it; and, by extension, that there exists a universal computing machine which, given a description of itself, will construct a copy of itself.

The two most futuristic topics addressed by Cybernetics were self-reproducing machines and self-organizing systems. They are pervasive in nature, and modern technologies make it possible to dream of building them artificially as well. Still, they remained merely speculative. The step that made emergent computation matter to the real world came from the computational application of the two pillars of the synthetic theory of evolution, namely the genetic code and adaptation.
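Before moving on to genetic algorithms, the cellular-automaton idea is worth seeing in code. This is a minimal Python sketch of Conway's Life (my own, not from the article): the rule is purely local, yet a "glider" emerges that travels across the grid -- a moving structure that is nowhere mentioned in the rule itself:

from collections import Counter

def step(live):
    # live is a set of (x, y) cells; apply Conway's rule to get the next generation:
    # a cell is alive next step if it has 3 live neighbors, or 2 and is alive now
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for gen in range(5):
    print(gen, sorted(glider))
    glider = step(glider)
# After 4 generations the same five-cell shape reappears shifted by (1, 1):
# the pattern "moves", although no rule about movement was ever written.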
Genetic Algorithms

The momentum for the computational study of genetic algorithms and adaptive systems was created in large part by John Holland's work. In the 1970s, the American computer scientist John Holland had the intuition that the best way to solve a problem is to mimic what biological organisms do to solve their problem of survival: to evolve (through natural selection) and to reproduce (through genetic recombination). Genetic algorithms apply recursively a series of biologically inspired operators to a population of potential solutions of a given problem. Each application of the operators generates a new population of solutions, which should approximate the best solution better and better. What evolves is not the single individual but the population as a whole.

Genetic algorithms are actually a further refinement of search methods within problem spaces. Genetic algorithms improve the search by incorporating the criterion of "competition". Recalling Newell and Simon's definition of problem solving as "searching in a problem space", David Goldberg defines genetic algorithms as "search algorithms based on the mechanics of natural selection and natural genetics". Unlike most optimization methods, which work from a single point in the decision space and employ a transition method to determine the next point, genetic algorithms work from an entire "population" of points simultaneously, trying many directions in parallel and employing a combination of several genetically inspired methods to determine the next population of points. One can employ simple operators such as "reproduction" (which copies chromosomes according to a fitness function), "crossover" (which switches segments of two chromosomes) and "mutation", as well as more complex ones such as "dominance" (a genotype-to-phenotype mapping), "diploidy" (pairs of chromosomes), "abeyance" (shielding against overselection) and "inversion" (the primary natural mechanism for recoding a problem, by switching two points of a chromosome), and so forth.

Holland's classifier (which learns new rules to optimize its performance) was the first practical application of genetic algorithms. A classifier system is a machine learning system that learns syntactic rules (or "classifiers") to guide its performance in the environment. A classifier system consists of three main components: a production system, a credit system (such as the "bucket brigade") and a genetic algorithm to generate new rules. Its emphasis on competition and cooperation, on feedback and reinforcement, rather than on pre-programmed rules, sets it apart from knowledge-based models of Artificial Intelligence.

A measure function computes how "fit" an individual is. The selection process starts from a random population of individuals. For each individual of the population, the fitness function provides a numeric value for how far the solution is from the ideal solution. The probability of selection for that individual is made proportional to its "fitness". On the basis of such fitness values, a subset of the population is selected. This subset is allowed to reproduce itself through the biologically inspired operators of crossover, mutation and inversion. Each individual (each point in the space of solutions) is represented as a string of symbols. Each genetic operator performs an operation on the sequence or content of the symbols.
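Here is the reproduction/crossover/mutation loop reduced to a bare-bones Python sketch (a toy example of my own, not Holland's classifier system). The "problem" is deliberately trivial: evolve bit strings towards all ones, with fitness equal to the number of ones:

import random

LENGTH = 20                          # each chromosome is a string of 20 bits

def fitness(s):                      # how close the solution is to the ideal (all ones)
    return sum(s)

def select(pop):                     # reproduction: fitness-proportional selection
    return random.choices(pop, weights=[fitness(s) + 1 for s in pop], k=1)[0]

def crossover(a, b):                 # switch segments of two chromosomes
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(s, rate=0.01):            # occasionally flip a bit
    return [bit ^ 1 if random.random() < rate else bit for bit in s]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(50)]
for generation in range(100):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]

print(max(fitness(s) for s in pop))  # typically 19-20: the population converged

No individual is ever improved in place; as the text says, what evolves is the population as a whole.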
When a message from the environment matches the antecedent of a rule, the message specified in the consequent of the rule is produced. Some messages produced by the rules cycle back into the classifier system; some generate action on the environment. A message is a string of characters from a specified alphabet. The rules are not written in the first-order predicate logic of expert systems, but in a language that lacks descriptive power and is limited to simple conjunctive expressions.

Credit assignment is the process whereby the system evaluates the effectiveness of its rules. The "bucket brigade" algorithm assigns a strength (a measure of its past usefulness) to each rule. Each rule then makes a bid (proportional to its strength and to its relevance to the current situation), and only the highest-bidding rules are allowed to pass their messages on. The strengths of the rules are modified according to an economic analogy: every time a rule bids, its strength is reduced by the value of the bid, while the strength of its "suppliers" (the rules that sent the messages matched by this bidder) is increased. The bidder's strength will in turn increase if its "consumers" (the rules that receive its message) become bidders themselves. This leads to a chain of suppliers/consumers whose success ultimately depends on the success of the rules that act directly on the environment. The system then replaces the least useful (weak) rules with newly generated rules that are based on the system's accumulated experience, i.e. built by combining selected "building blocks" ("strong" rules) according to some genetic algorithm.

Holland then went on to focus on "complex adaptive systems". Such systems are governed by principles of anticipation and feedback. Based on a model of the world, an adaptive system anticipates what is going to happen. Models are improved based on feedback from the environment. Complex adaptive systems are ubiquitous in nature. They include brains, ecosystems and even economies. They share a number of features: each of these systems is a network of agents acting in parallel and interacting; the behavior of the system arises from cooperation and competition among its agents; each of these systems has many levels of organization, with agents at each level serving as building blocks for agents at a higher level; such systems are capable of rearranging their structure based on their experience; they are capable of anticipating the future by means of innate models of the world; and new opportunities for new types of agents are continuously being created within the system. All complex adaptive systems share four properties (aggregation, nonlinearity, flows, diversity) and three mechanisms (categorization by tagging, anticipation through internal models, decomposition into building blocks). Each adaptive agent can be represented by a framework consisting of a performance system (to describe the agent's skills), a credit-assignment algorithm (to reward the fittest rules) and a rule-discovery algorithm (to generate plausible hypotheses).

The Edge of Chaos

A new theoretical breakthrough occurred when Chris Langton demonstrated that physical systems achieve the prerequisites for the emergence of computation (i.e., transmission, storage, modification) in the vicinity of a phase transition ("at the edge of chaos"). Specifically, information becomes an important factor in the dynamics of cellular automata in the vicinity of the phase transition between periodic and chaotic behavior, i.e. between order and chaos. The idea is that systems undergo transformations, and while they transform they constantly move from order to chaos and back. This transition is similar to the "phase transitions" undergone by a substance when it turns liquid or solid or gaseous.
Holland then went on to focus on "complex adaptive systems". Such systems are governed by principles of anticipation and feedback: based on a model of the world, an adaptive system anticipates what is going to happen, and its models are improved through feedback from the environment. Complex adaptive systems are ubiquitous in nature; they include brains, ecosystems and even economies. They share a number of features: each is a network of agents acting in parallel and interacting; the behavior of the system arises from cooperation and competition among its agents; each has many levels of organization, with agents at each level serving as building blocks for agents at a higher level; such systems are capable of rearranging their structure based on their experience; they are capable of anticipating the future by means of internal models of the world; and new opportunities for new types of agents are continuously being created within the system. All complex adaptive systems share four properties (aggregation, nonlinearity, flows, diversity) and three mechanisms (categorization by tagging, anticipation through internal models, decomposition into building blocks). Each adaptive agent can be represented by a framework consisting of a performance system (to describe the agent's skills), a credit-assignment algorithm (to reward the fittest rules) and a rule-discovery algorithm (to generate plausible hypotheses).

The Edge of Chaos

A new theoretical breakthrough occurred when Chris Langton showed that physical systems achieve the prerequisites for the emergence of computation (i.e., transmission, storage and modification of information) in the vicinity of a phase transition, "at the edge of chaos". Specifically, information becomes an important factor in the dynamics of cellular automata near the phase transition between periodic and chaotic behavior, i.e. between order and chaos. The idea is that systems undergo transformations, and while they transform they constantly move from order to chaos and back. This transition is similar to the "phase transitions" a substance undergoes when it turns solid, liquid or gaseous. When ice turns into water, the atoms have not changed, but the system as a whole has undergone a phase transition: microscopically, the atoms are behaving in a different way. The transition of a system from chaos to order and back is similar in that the system is still made of the same parts, but they behave in a different way.

The state between order and chaos (the "edge of chaos") is sometimes a very "informative" state, because the parts are not as rigidly assembled as in the case of order and, at the same time, not as loose as in the case of chaos. The system is stable enough to store information and unstable enough to transmit it: the system at the edge of chaos is both a storage and a broadcaster of information. At the edge of chaos, information can propagate over long distances without decaying appreciably, thereby allowing for long-range correlation in behavior; ordered configurations do not allow information to propagate at all, and disordered configurations cause information to quickly decay into random noise. This conclusion is consistent with Von Neumann's findings. A fundamental connection therefore exists between computation and phase transitions. The edge of chaos is where a system can perform computation, can metabolize, can adapt, can evolve. In a word: such systems can be alive. Basically, Langton showed that physics can support life only in a very narrow boundary between chaos and order. In that locus it is possible to build artificial organisms that will settle into recurring patterns conducive to an orderly transmission of information. Langton also related phase transitions, computation and life, thereby building a bridge between Thermodynamics, Information Theory and Biology.
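The ordered/chaotic distinction is easy to see for oneself. Langton's actual experiments swept a "lambda" parameter across large cellular-automaton rule spaces; as a rough stand-in, the sketch below runs three one-dimensional elementary automata commonly cited as examples of ordered (rule 250), chaotic (rule 30) and "complex", edge-of-chaos behavior (rule 110). The grid width and step count are arbitrary choices.

```python
# Three elementary cellular automata from a single seed cell, printed
# as text so the ordered / chaotic / complex regimes can be compared.
# This is an illustration of the regimes, not Langton's lambda study.

def step(cells, rule):
    # Apply an elementary CA rule to a row of cells (wrap-around edges).
    # The (left, center, right) neighborhood forms a 3-bit index into
    # the rule's 8-bit truth table (standard Wolfram numbering).
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1          # single seed cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

for rule in (250, 30, 110):        # ordered, chaotic, "complex"
    print(f"--- rule {rule} ---")
    run(rule)
```

Rule 250 freezes into a rigid repeating pattern (information cannot propagate), rule 30 dissolves into noise (information decays), and rule 110 produces long-lived propagating structures, the kind of long-range correlation the text above associates with the edge of chaos.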
The edge of chaos is also the locus of Murray Gell-Mann's speculations. Gell-Mann, the physicist who was awarded the Nobel Prize for his theory of quarks, thinks that biological evolution is a complex adaptive system that complies with the second law of Thermodynamics once the entire environment, and not only the single organism, is taken into account. Living organisms dwell "on the edge of chaos", as they exhibit order and chaos at the same time, and they must exhibit both in order to survive. Living organisms are complex adaptive systems that retrieve information from the world, find regularities, compress them into a schema that represents the world, predict the evolution of the world and prescribe behavior for themselves. The schema may undergo variants that compete with one another, and their competition is regulated by feedback from the real world in the form of selection pressure. Disorder is useful for the development of new behavior patterns that enable the organism to cope with a changing environment. Technically speaking, once complex adaptive systems establish themselves, they operate through a cycle that involves variable schemata, randomness, phenotypic consequences and feedback of selection pressures to the competition among schemata.

Complex Systems

The American biologist Stuart Kauffman is the prophet of "complex" systems. Kauffman's quest is for the fundamental force that counteracts the universal drift towards disorder required by the second law of Thermodynamics. His idea is that Darwin was only half right: systems do evolve under the pressure of natural selection, but their quest for order is helped by a property of our universe, namely that "complex" systems simply tend to organize themselves. Darwin's story is about the power of chance: by chance life developed and then evolved. Kauffman's story is about destiny: life is the almost inevitable result of a process inherent in nature.

Kauffman's first discovery was that cells behave like mathematical networks. In the early 1960s, Monod and others discovered that genes are assembled not in a long string of instructions but in "genetic circuits". Within the cell there are regulatory genes whose job is to turn other genes on or off. Genes are therefore not simply instructions to be carried out one after the other; they realize a complex network of messages. A regulatory gene may trigger another regulatory gene, which may trigger another gene, and so on; each gene is typically controlled by two to ten other genes, so turning on just one gene may trigger an avalanche of effects. The genetic program is not a sequence of instructions but a regulatory network that behaves like a self-organizing system. By using a computer simulation of a cell-like network, Kauffman argued that, in any organism, the number of cell types should be approximately the square root of the number of genes.
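A minimal random Boolean network in the spirit of those simulations can be written in a few lines. In the sketch below, N genes each read K=2 randomly chosen genes through a random Boolean function; every trajectory eventually falls into a cycle (an attractor), and it is these attractors that Kauffman identified with cell types. N, K and the number of trials are arbitrary assumptions; this is a sketch of the idea, not Kauffman's published simulation.

```python
# Random Boolean network sketch: N genes, each regulated by K random
# genes via a random truth table. From random initial states, iterate
# until a state repeats; the repeating cycle is an attractor. Counting
# distinct attractors gives a rough analogue of "number of cell types".
import random

N, K, TRIALS = 16, 2, 200
inputs = [random.sample(range(N), K) for _ in range(N)]
# Each gene gets a random truth table over its 2**K input combinations.
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def update(state):
    # Synchronous update: every gene reads its regulators at once.
    return tuple(
        tables[g][sum(state[inputs[g][j]] << j for j in range(K))]
        for g in range(N))

def find_attractor(state):
    # Iterate until a state repeats; the states from its first
    # occurrence onward form the cycle. Sorting gives a canonical
    # form so equal cycles compare equal across trials.
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = update(state)
    return tuple(sorted(s for s, t in seen.items() if t >= seen[state]))

attractors = {find_attractor(tuple(random.randint(0, 1) for _ in range(N)))
              for _ in range(TRIALS)}
print("distinct attractors found:", len(attractors))
```

The striking point is how few attractors a random network of this kind typically has compared with its 2**N possible states: almost all of the state space drains into a handful of cycles.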
Kauffman starts where Langton ended. His "candidate principle" states that organisms change their interactions in such a way as to reach the boundary between order and chaos. For example, the Danish physicist Per Bak studied the sand pile, whose collapse under the weight of a new grain is unpredictable: the pile self-organizes. No external force shapes the pile of sand; the pile organizes itself. Further examples include any ecosystem (in which organisms live at the border between extinction and overpopulation) and the price of a product (which is set by supply and demand, at the border between where nobody wants to buy it and where everybody wants to buy it). Evolution proceeds towards the edge of chaos: systems on the boundary between order and chaos have the flexibility to adapt rapidly and successfully. Living organisms are a particular type of complex adaptive system. Natural selection and self-organization complement each other: they create complex systems poised at the edge between order and chaos, which are fit to evolve in a complex environment. At all levels of organization, whether of living organisms or ecosystems, the target of selection is a type of adaptive system at the edge between chaos and order.

Kauffman's mathematical model is based on the concept of "fitness landscapes" (originally introduced by Sewall Wright). A fitness landscape is a distribution of fitness values over the space of genotypes, and evolution is the traversing of a fitness landscape: peaks represent optimal fitness, and populations, driven by mutation, selection and drift, wander across the landscape in search of peaks. It turns out that the best strategy for reaching the peaks occurs at the phase transition between order and disorder, once again at the edge of chaos. The same model applies to other biological phenomena, and even to nonbiological phenomena, and may therefore represent a universal law of nature. Adaptive evolution can be represented as a local hill-climbing search converging via fitter mutants toward some local or global optimum. Adaptive evolution occurs on rugged (multipeaked) fitness landscapes, and the very structure of these landscapes implies that radiation and stasis are inherent features of adaptation. The Cambrian explosion and the Permian extinction (famous paradoxes of the fossil record) may be natural consequences of inherent properties of rugged landscapes.

Kauffman also noted that complex (nonlinear dynamical) systems which interact with the external world classify and know their world through their attractors. His view of life can be summarized as follows: autocatalytic networks (networks that feed themselves) arise spontaneously; natural selection brings them to the edge of chaos; a genetic regulatory mechanism accounts for metabolism and growth; and attractors lay the foundations for cognition. The requirements for order to emerge are far easier to meet than traditionally assumed. The main theme of Kauffman's research is the ubiquitous trend towards self-organization. This trend causes the appearance of "emergent properties" in complex systems, and one such property is life. There is order for free: far from equilibrium, systems organize themselves, and they do so in a way that creates systems at higher levels, which in turn tend to organize themselves. Atoms organize into molecules, which organize into autocatalytic sets, which organize into living organisms, which organize into ecosystems. The whole universe may be driven by a principle similar to autocatalysis; it may be nothing but a hierarchy of autocatalytic sets.
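The "order for free" claim about autocatalytic sets can also be sketched. The toy model below is a heavily simplified version of the binary-polymer picture: molecules are bit strings, reactions concatenate two strings, and each reaction is catalyzed by a few randomly chosen molecules. Starting from a small "food set", we take the closure under reactions whose substrates and at least one catalyst are already present; when the closure grows well beyond the food set, a collectively self-sustaining network has appeared without being designed. All sizes and probabilities here are invented, and the model omits cleavage reactions and concentrations.

```python
# Toy autocatalytic-set sketch: bit-string molecules, concatenation
# reactions, random catalysis. Compute the closure of a small food set
# under reactions whose substrates AND a catalyst are already present.
import itertools, random

MAX_LEN, P_CATALYSIS = 6, 0.01
food = {"0", "1", "00", "01", "10", "11"}

# All molecules up to MAX_LEN, and all concatenation reactions.
molecules = ["".join(bits) for n in range(1, MAX_LEN + 1)
             for bits in itertools.product("01", repeat=n)]
reactions = [(a, b, a + b) for a in molecules for b in molecules
             if len(a) + len(b) <= MAX_LEN]

# Each molecule catalyzes each reaction with a small probability.
catalysts = {r: {m for m in molecules if random.random() < P_CATALYSIS}
             for r in reactions}

present = set(food)
changed = True
while changed:
    changed = False
    for a, b, product in reactions:
        if (product not in present and a in present and b in present
                and catalysts[(a, b, product)] & present):
            present.add(product)
            changed = True

print(f"food set: {len(food)} molecules, closure: {len(present)}")
```

Depending on the random catalysis assignments, some runs stay stuck near the food set while others ignite into a much larger self-sustaining network, which is a crude picture of why, above a threshold of connectivity, autocatalytic organization is claimed to arise almost inevitably.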
Autonomous Systems

The Chilean neuroscientist Francisco Varela adapted Maturana's thought to a theory of autonomous systems, merging the themes of the autonomy of natural systems (i.e. internal regulation, as opposed to external control) and of their informational abilities (i.e. cognition) into the theme of a system possessing an identity and interacting with the rest of the world. The organization of a system is the set of relations that define it as a unity; the structure of a system is the set of relations among its components. The organization of a system is independent of the properties of its components: a machine can be realized by many different sets of components and relations among them. Homeostatic systems are systems that keep the values of their variables within a small range, i.e. whose organization makes all feedback internal to them. An autopoietic system is a homeostatic system that continuously generates its own organization (by continuously producing components that are capable of reproducing the organization that created them). Autopoietic systems turn out to be autonomous, to have an identity, to be unities, and to compensate for external perturbations with internal structural changes. Living systems are autopoietic systems in the physical space, and the two main features of living systems follow from this: self-reproduction can only occur in autopoietic systems, and evolution is a direct consequence of self-reproduction. Every autonomous system is organizationally closed (it is defined as a unity by its organization). The structure constitutes the system and determines its behavior in the environment; information is therefore a structural aspect, not a semantic one, and there is no need for a representation of information: information is "codependent". Mechanisms of information and mechanisms of identity are dual. The cognitive domain of an autonomous system is the domain of interactions it can enter without loss of closure. An autonomous unit always exhibits two aspects: it specifies the distinction between self and not-self, and it deals with its environment in a cognitive fashion. The momentous conclusion that Varela reaches is that every autonomous system (ecosystems, societies, brains, conversations) is a "mind" (in the sense of cognitive processes).

A Science of Prisms

Alternatives to traditional science now abound. One is interesting because it starts from a completely different approach towards reality and encompasses more than just matter. In the 1970s the American inventor and architect Buckminster Fuller developed a visionary theory, also called "synergetics", that attacked traditional science at its very roots. "Synergy" is the behavior of a whole that cannot be explained by the parts taken separately. Synergetics therefore studies systems in a holistic (rather than reductionist) way, and it does so by focusing on form rather than internal structure. Because of its emphasis on shape, Synergetics becomes a branch of "geometrics", the discipline of configurations (or patterns). Synergetics employs 60-degree coordination instead of the usual 90-degree coordination: the triangle (and tetrahedron), instead of the square (and cube), is the fundamental geometric unit. Fuller's thought is inspired by one of his own inventions, the "geodesic" dome (1954), a structure that encloses space very efficiently and gets stronger as it gets larger. The bottom line is that reality is not made of "things" but of angle and frequency events: all experience can be reduced to angles and frequencies. Fuller finds "prisms" to be ubiquitous in nature and in culture. All systems contained in the universe are polyhedra, "universe" being the collection of all experiences of all individuals. Synergetics rediscovers, in an almost mystical way, much of traditional science, but mainly through topological considerations (with traditional topology extended to "omnitopology"). For example, Synergetics "proves" that the universe is finite and expanding, and that Planck's constant is a "cosmic relationship".

The Emergence of a Science of Emergence

Prigogine's non-equilibrium Thermodynamics, Haken's synergetics, Von Bertalanffy's general systems theory and Kauffman's complex adaptive systems all point to the same scenario: the origin of life from inorganic matter is due to emergent processes of self-organization. The same processes account for phenomena at different levels in the organization of the universe, and in particular for cognition. Cognition appears to be a general property of systems, not an exclusive property of the human mind. A science of emergence, as an alternative to traditional, reductionist science, could possibly explain all systems, living and not.
Further Reading

Buchler Justus: METAPHYSICS OF NATURAL COMPLEXES (Columbia University Press, 1966)
Bunge Mario: TREATISE ON BASIC PHILOSOPHY (Reidel, 1974-83)
Cohen Jack & Stewart Ian: THE COLLAPSE OF CHAOS (Viking, 1994)
Coveney Peter: FRONTIERS OF COMPLEXITY (Fawcett, 1995)
Dalenoort G.J.: THE PARADIGM OF SELF-ORGANIZATION (Gordon & Breach, 1989)
Dalenoort G.J.: THE PARADIGM OF SELF-ORGANIZATION II (Gordon & Breach, 1994)
Davies Paul: GOD AND THE NEW PHYSICS (Penguin, 1982)
Eigen Manfred & Schuster Peter: THE HYPERCYCLE (Springer-Verlag, 1979)
Forrest Stephanie: EMERGENT COMPUTATION (MIT Press, 1991)
Fuller Richard Buckminster: SYNERGETICS: EXPLORATIONS IN THE GEOMETRY OF THINKING (Macmillan, 1975)
Fuller Richard Buckminster: COSMOGRAPHY (Macmillan, 1992)
Gell-Mann Murray: THE QUARK AND THE JAGUAR (W.H. Freeman, 1994)
Gleick James: CHAOS (Viking, 1987)
Goldberg David: GENETIC ALGORITHMS (Addison-Wesley, 1989)
Haken Hermann: SYNERGETICS (Springer-Verlag, 1977)
Holland John: ADAPTATION IN NATURAL AND ARTIFICIAL SYSTEMS (University of Michigan Press, 1975)
Holland John: HIDDEN ORDER (Addison-Wesley, 1995)
Kauffman Stuart: THE ORIGINS OF ORDER (Oxford University Press, 1993)
Kauffman Stuart: AT HOME IN THE UNIVERSE (Oxford University Press, 1995)
Koestler Arthur: THE GHOST IN THE MACHINE (Henry Regnery, 1967)
Langton Christopher: ARTIFICIAL LIFE (Addison-Wesley, 1989)
Laszlo Ervin: INTRODUCTION TO SYSTEMS PHILOSOPHY (Gordon & Breach, 1972)
Lewin Roger: COMPLEXITY (Macmillan, 1992)
Mandelbrot Benoit: THE FRACTAL GEOMETRY OF NATURE (W.H. Freeman, 1982)
Nicolis Gregoire & Prigogine Ilya: SELF-ORGANIZATION IN NON-EQUILIBRIUM SYSTEMS (Wiley, 1977)
Nicolis Gregoire & Prigogine Ilya: EXPLORING COMPLEXITY (W.H. Freeman, 1989)
Nicolis Gregoire: INTRODUCTION TO NONLINEAR SCIENCE (Cambridge University Press, 1995)
Pattee Howard Hunt: HIERARCHY THEORY (Braziller, 1973)
Prigogine Ilya: INTRODUCTION TO THERMODYNAMICS OF IRREVERSIBLE PROCESSES (Interscience Publishers, 1961)
Prigogine Ilya: NON-EQUILIBRIUM STATISTICAL MECHANICS (Interscience Publishers, 1962)
Prigogine Ilya & Stengers Isabelle: ORDER OUT OF CHAOS (Bantam, 1984)
Salthe Stanley: EVOLVING HIERARCHICAL SYSTEMS (Columbia University Press, 1985)
Salthe Stanley: DEVELOPMENT AND EVOLUTION (MIT Press, 1993)
Thom René: MATHEMATICAL MODELS OF MORPHOGENESIS (Horwood, 1983)
Thom René: STRUCTURAL STABILITY AND MORPHOGENESIS (Benjamin, 1975)
Toffoli Tommaso & Margolus Norman: CELLULAR AUTOMATA MACHINES (MIT Press, 1987)
Turing Alan Mathison: MORPHOGENESIS (North-Holland, 1992)
Varela Francisco: PRINCIPLES OF BIOLOGICAL AUTONOMY (North-Holland, 1979)
Von Bertalanffy Ludwig: GENERAL SYSTEMS THEORY (Braziller, 1968)
Von Neumann John: THEORY OF SELF-REPRODUCING AUTOMATA (University of Illinois Press, 1966)
Waldrop Mitchell: COMPLEXITY (Simon & Schuster, 1992)
Zeeman Erich Christian: CATASTROPHE THEORY (Addison-Wesley, 1977)
Re:Someone please explain to me "Emergent properties"
« Reply #6 on: 2003-02-10 19:23:00 »
I have had a few days to mull this subject over now, and am happy to report great progress. Though I am sure there is a lot of greater detail I have yet to learn about, I feel that I have grasped the basic concept somewhat.
The problems I was encountering seemed mostly to be related to semantics and a general issue of tackling the problem from the wrong end.
I'll try to phrase this as best I can, and then hopefully one of you will tell me if I am headed in the right direction. I'll stick with molecular examples (water in particular) as they seem simple to grasp.
Hydrogen and Oxygen are gases that turn to liquid only at very low temperatures (tens of kelvins). They have properties; Hydrogen is explosive, Oxygen is an oxidizer (it supports combustion), etc...
When Hydrogen and Oxygen form the common molecule of water (H2O), several new properties come into being that were not obvious from just looking at H and O. (I am not sure if predictability plays a part - is something still an emergent property if it is a predictable outcome of the combination? A wall built from concrete and bricks, for instance.)
Now water has qualities that are very different from those of its constituent atoms. It is liquid at room temperature and pressure, non-flammable, mixes nicely with loads of other atoms and molecules, etc... These qualities (different from those of H or O) are what make water (H2O) an emergent property of H and O when they are combined to make water.
If I am getting this right, then there must be a lot of sticky points regarding emergence when the new property is intangible. In the case of water, it is easy to measure these properties. I can see why consciousness must fall into this category for now as well. There must be many others in the biological realm, though I can't think of any right now.
Thanks everyone for putting up with my question and helping.
It's helpful (though not entirely accurate) to think of emergent properties as side effects (though this tends to imply that there is some intended primary effect). The properties of water are the side effect of combining two hydrogen atoms with one oxygen atom (the intended primary effect, as opposed to a "side" effect, would merely be the combining itself, not the properties that result). The side effect, or emergent property, of mixing a sperm cell and an ovum is the formation of a zygote. Billions of neurons firing, combined with the release of various enzymes and hormones, results in human thought, an emergent property of those actions.
The higher level of the system acts with greater freedom, as it is less enslaved by its environment and by the systems that share that environment. Eventually, though, through a process of natural selection, the stability of predictably chaotic interactions between the formerly highest-level systems is all that remains, and a new system has emerged.
As an example: I think and philosophise, and perhaps act on my philosophies. While to me this seems a free action, with meaning, it is also both the product of smaller processes and a component of larger processes, all operating under the same physical laws. The politics of a person describe three things: 1. how the person was "brought up" by their environment; 2. what the person freely "thinks" and believes, e.g. the nature of their experience and observation; and 3. the impact they are likely to have on their environment, e.g. their role in a higher-level system.
Sticking with the chemical model: the properties of oxygen and hydrogen not only become the properties of water; the old properties are "lost", the new properties "emerge", and the new properties help other properties "emerge" in systems around them. Perhaps an animal drinks the water -> the animal stays alive -> its experience evolves -> new qualities of its experience "emerge".
Just when I thought I was out-they pull me back in
Re:Someone please explain to me "Emergent properties"
« Reply #9 on: 2004-04-16 00:46:58 »
Depending on where you "live" in spacetime, your "free will" is just someone else's history. And to prove it, go break something cheap, like a stick, or a brick, or anything that would have remained whole had I NOT asked you to initiate your "free will" and break it.
It has "already" been recorded as broken in someone else's "spacetime".
Fun, isn't it?
Walter
[JeffCreel spoke thusly] While to me this seems a free action, with meaning, it is also both the product of smaller processes and a component of larger processes, all operating under the same physical laws. The politics of a person describe three things: 1. how the person was "brought up" by their environment; 2. what the person freely "thinks" and believes, e.g. the nature of their experience and observation; and 3. the impact they are likely to have on their environment, e.g. their role in a higher-level system.