The Principle of Computational Equivalence

In everyday experience, it is quite intuitive to expect simple systems to exhibit simple behaviors and complex systems to exhibit complex behaviors. There is, of course, no single definition of complexity; there are many interesting ways to think about it, and perhaps I will dedicate a post to the subject sometime in the future. For now, let us use a simplified notion of the term: the behavior of a system can be understood as a dynamic correspondence between its inputs and outputs, and the complexity of the system corresponds, more or less, to how difficult it is to implement the simplest computer program that fully simulates this behavior.

Come to think of it, when we try in everyday life to figure out the behavior of a system, whether it is a washing machine, a new gadget, or even a fellow human, we usually try to create a model, to find a pattern that will help us predict what responses to expect to what stimuli. We 'push the buttons' and see what happens. And if the same thing happens for the same buttons, we have figured out the pattern. The more regular and repetitive the pattern, the easier it is for us to figure out, and the easier it is to devise a computer program that reproduces it. This is how our intuitive sense of complexity is translated into the more formal language of patterns and algorithms. Even if we leave computers aside, we can still gain a fairly clear sense of complexity by asking ourselves how difficult it would be to provide the simplest verbal description of some given behavior. The longer this description becomes, the more complex the described phenomenon is.

It is quite straightforward to make the analogy that more complex behavior corresponds to more computation, and simpler behavior to less computation. Even if we use only verbal descriptions of a system or a phenomenon, the length of the description is a measure of the computation we perform in our heads to describe the said system or phenomenon.
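To make this notion of description length a little more concrete, here is a minimal sketch in Python. It uses zlib compression as a crude, computable stand-in for the 'simplest program' measure (the true measure, Kolmogorov complexity, is uncomputable, so any practical proxy like this only gives an upper bound): a regular, repetitive behavior log compresses to a short description, while an irregular one does not.

```python
import random
import zlib

# Crude proxy for "length of the simplest description": the size of a
# compressed encoding of the behavior log. True Kolmogorov complexity is
# uncomputable; compression only gives an upper-bound sketch of it.
def description_length(s: str) -> int:
    return len(zlib.compress(s.encode()))

repetitive = "ab" * 500  # a highly regular, repetitive behavior

random.seed(0)
irregular = "".join(random.choice("ab") for _ in range(1000))  # no pattern to exploit

print(description_length(repetitive))  # short: the repetition compresses away
print(description_length(irregular))   # long: nearly incompressible
```

The two strings have the same length and the same alphabet; only the presence or absence of a pattern separates their description lengths.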

Another assumption that we use quite intuitively in everyday affairs is that complex behaviors are generally produced by complex mechanisms, that is, by complex structures. Complex behavior, then, implies (or at least correlates with) a complex structure, while simple behavior implies a simple structure. In computers, for example, structure roughly translates to algorithmic structure, that is, how complex a program is.

Since we generally tend to think about intelligence in terms of complexity of behavior, there is an immediate implication here that intelligence arises in correspondingly complex structures. When we think about artificial intelligence, for example, we usually think of it in terms of the complexity of the computers and algorithms that might be needed to give rise to such intelligence.

About five years ago Stephen Wolfram, a maverick mathematician and scientist, published in his book "A New Kind of Science" a controversial hypothesis that challenges the intuitions mentioned above. It is called 'the principle of computational equivalence'. The essence of the suggested principle is that there is no such correspondence between complex behaviors and complex structures (or computations); in other words, as Wolfram shows in a few compelling cases, very simple structures can give rise to extremely complex behavior. Not only that: there are some special simple computational structures that can give rise to behaviors of *any degree of complexity*. In still other words, systems with very different degrees of structural complexity, from very simple to very complex, can actually be computationally equivalent.
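The best-known of Wolfram's examples is the elementary cellular automaton Rule 30: its entire 'structure' is an eight-entry lookup table over three neighboring cells, yet the pattern it generates is famously irregular. A minimal Python sketch (the grid width and number of generations here are arbitrary choices for display):

```python
# Rule 30: each cell's next state depends only on itself and its two
# neighbors, via this eight-entry table. That table is the whole program.
RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Apply Rule 30 once, with wraparound at the edges."""
    n = len(cells)
    return [RULE_30[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single black cell and print a few generations; the left
# side settles into stripes while the right side stays irregular.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Despite the eight-line rule, the center column of Rule 30 is irregular enough that Wolfram used it as a random number generator in Mathematica.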

Why is this interesting? If Wolfram is correct, and there is mounting evidence that he is, it means that there are general limitations on our ability to understand and to predict the behavior of systems. When we think about the concept of understanding with regard to a phenomenon or a system, we usually mean figuring out the pattern of that system's or phenomenon's behavior. We also think about it in terms of scales of complexity: complex systems can 'understand' (figure out the pattern of behavior of) simpler systems, but not vice versa.

We scale and measure intelligence in terms of figuring out complex patterns. But the principle of computational equivalence shatters this idea. If a very simple system can present behavior that is computationally equivalent to that of a very complex system, our notion of 'understanding' cannot stand. It means that what we have come to understand of the universe are all those very particular phenomena whose patterns of behavior are of limited complexity, while the vast majority of the universe cannot be described in terms of limited patterns and is thus not open to the kind of understanding and research we work with.

Since patterns and formulas are the very instruments by which we predict the future behavior of all phenomena, it seems to follow that most of the universe is forever unpredictable, because most behaviors are too complex to describe with a limited pattern or a limited computation.

Think about this: the universe might be deterministic in principle, yet in practice we cannot fully figure out its pattern. Moreover, no matter how intelligent we or our machines might become, the principle of computational equivalence leaves much space (most of it, actually) to the unknown.

Think about this as well: according to the principle of computational equivalence, understanding boils down to raw computational power. There are no machines that are *in principle* more clever than the machines we already have (including the human brain). The difference will eventually be (actually has always been) a difference of speed, the ability to compute faster what everybody else can compute anyway. The universe at large cannot be fully computed by a part of it, and if so, given this principle, we can never fully understand the universe, not even ourselves. There are only tiny bubbles of understanding within the vast chaos of patterns.
Mon, Dec 24, 2007

Categories: Computation, complexity, digital philosophy
