The Global Brain
"It is not guilty pride but the ceaselessly reawakened instinct of the game which calls forth new worlds." (Heraclitus Reloaded)
    Now playing SpaceCollective
    Where forward thinking terrestrials share ideas and information about the state of the species, their planet and the universe, living the lives of science fiction.
    From Spaceweaver's personal cargo

    The Principle of Computational Equivalence
    In everyday experience, it is quite intuitive to expect simple systems to present simple behaviors, while complex systems present complex behaviors. There is of course no single definition of what complexity is; there are many interesting ways to think about it, and perhaps I will dedicate a post to it sometime in the future. For now, let us use a simplified idea of the term: the behavior of a system can be understood as a dynamic correspondence relation between its inputs and outputs, and the complexity of the system corresponds, more or less, to how difficult it is to implement the simplest computer program that fully simulates this behavior.
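    This 'length of the simplest program' notion is essentially Kolmogorov complexity, which is uncomputable in general; compressed length gives a crude but runnable proxy. A minimal sketch (using zlib as the stand-in is an illustration of the idea, not anything from the post):

```python
import random
import zlib

def description_length(sequence):
    """Compressed length of a byte sequence: a crude, computable
    stand-in for 'the simplest program that reproduces it'."""
    return len(zlib.compress(bytes(sequence)))

# A very regular behavior: the same output over and over.
regular = [7] * 1000

# An irregular behavior: outputs with no repeating pattern to exploit.
random.seed(0)
irregular = [random.randrange(256) for _ in range(1000)]

print(description_length(regular))    # short: the pattern compresses well
print(description_length(irregular))  # long: nearly incompressible
```

    The regular sequence compresses to a handful of bytes; the irregular one barely shrinks at all, matching the intuition that its simplest description is about as long as the sequence itself.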

    Come to think of it, when we try in everyday life to figure out the behavior of a system, whether it is a washing machine, a new gadget, or even a fellow human, we usually try to create a model, to find a pattern that will help us predict what responses to expect from what stimuli. We 'push the buttons' and see what happens; if the same thing happens for the same buttons, we figure out the pattern. The more regular and repetitive the pattern, the easier it is for us to figure out, and the easier it is to devise a computer program that reproduces it. This is how our intuitive sense of complexity is translated into the more formal language of pattern and algorithm. Even if we leave computers aside, we can still gain a fairly clear sense of complexity by asking ourselves how difficult it would be to provide the simplest verbal description of some given behavior. The longer this description becomes, the more complex the described phenomenon.

    It is quite straightforward to make the analogy that more complex behavior corresponds to more computation, and simpler behavior to less computation. Even if we use only verbal descriptions of a system or a phenomenon, the length of the description is a measure of the computation we perform in our heads to describe it.

    Another assumption we use quite intuitively in everyday affairs is that complex behaviors are generally produced by complex mechanisms, that is, by complex structures. Complex behavior, in other words, correlates with complex structure, while simple behavior correlates with simple structure. In computers, for example, structure can be roughly translated to algorithmic structure, that is, how complex a program is.

    Since we generally tend to think about intelligence in terms of complexity of behavior, the immediate implication is that intelligence arises in correspondingly complex structures. If we think about artificial intelligence, for example, we usually think about it in terms of the complexity of the computers and algorithms we might need to give rise to such intelligence.

    About five years ago Stephen Wolfram, a maverick mathematician and scientist, published in his book "A New Kind of Science" a controversial hypothesis that challenges the intuitions mentioned above. It is called 'the principle of computational equivalence'. The essence of the principle is that there is no such correspondence between complex behaviors and complex structures (or computations); in other words, as Wolfram shows in a few appealing cases, very simple structures can give rise to extremely complex behavior. Moreover, there are special simple computational structures that can give rise to behaviors of any degree of complexity. Put still another way, systems with very different degrees of structural complexity, from very simple to very complex, can be computationally equivalent.
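    Wolfram's flagship example is the elementary cellular automaton rule 110: a one-dimensional row of cells, each updated from just itself and its two neighbors, which Matthew Cook proved capable of universal computation. A minimal sketch of the update rule (periodic boundary conditions are my simplification):

```python
def rule110_step(cells):
    """One step of Wolfram's elementary cellular automaton rule 110.
    Each new cell depends only on itself and its two neighbors;
    the row wraps around at the edges."""
    rule = 110  # the 8 output bits of the rule, indexed by neighborhood
    n = len(cells)
    out = []
    for i in range(n):
        # Pack (left, center, right) into a 3-bit neighborhood value 0..7.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

# Start from a single live cell and watch complexity unfold to the left.
cells = [0] * 40
cells[-2] = 1
for _ in range(15):
    print(''.join('#' if c else '.' for c in cells))
    cells = rule110_step(cells)
```

    Despite the triviality of the rule, the printed rows develop the intricate, non-repeating texture that motivates Wolfram's hypothesis.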

    Why is this interesting? If Wolfram is correct, and there is mounting evidence that he is, it means there are general limitations on our ability to understand and to predict the behavior of systems. When we think about understanding a phenomenon or a system, we usually mean figuring out the pattern of its behavior. We also think about understanding in terms of scales of complexity: complex systems can 'understand' (figure out the pattern of behavior of) simpler systems, but not vice versa.

    We scale and measure intelligence in terms of figuring out complex patterns. But the principle of computational equivalence shatters this idea. If a very simple system can present behavior that is computationally equivalent to that of a very complex system, there is no way this notion of 'understanding' can stand. It means that what we have come to understand of the universe are all those very particular phenomena whose patterns of behavior are of limited complexity, while the vast majority of the universe cannot be described in terms of limited patterns and thus is not open to the kind of understanding and research we work with.

    Since patterns and formulas are the very instruments by which we predict the future behavior of phenomena, it seems to follow that most of the universe is forever unpredictable, because most behaviors are too complex to describe with a limited pattern or a limited computation.
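    To make the contrast concrete: a predictable behavior is one with a 'shortcut', a formula that jumps straight to step n without running all the steps. A sketch (the running-sum example is illustrative; Wolfram's claim is that most behaviors admit no such shortcut):

```python
def simulate_sum(n):
    """'Simulate' the running total step by step: work grows with n."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def shortcut_sum(n):
    """A pattern (a formula) that jumps straight to the answer:
    constant work, no matter how large n is."""
    return n * (n + 1) // 2

# Both agree, but only the shortcut 'predicts' without running the steps.
assert simulate_sum(10**5) == shortcut_sum(10**5)
```

    For a computationally irreducible system, the first function is the only option available: prediction is no faster than letting the system run.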

    Think about this: the universe might be deterministic in principle, yet in practice we cannot fully figure out its pattern. Moreover, no matter how intelligent we or our machines might become, the principle of computational equivalence leaves much space (most of it, actually) to the unknown.

    Think about this as well: according to the principle of computational equivalence, understanding boils down to raw computational power. There are no machines that are in principle more clever than the machines we already have (including the human brain). The difference will eventually be (actually has always been) a difference of speed, the ability to compute faster what everybody else can compute anyway. The universe at large cannot be fully computed by a part of it, and so, given this principle, we can never fully understand the universe, not even ourselves. There are only tiny bubbles of understanding within the vast chaos of patterns.


    Kwiz     Mon, Dec 24, 2007  Permanent link
    This idea that excruciatingly complex patterns and phenomena can arise from a simple set of rules is, to say the least, fascinating.

    John Conway's 'Game of Life'

    After getting a basic grasp of the established "rules" in the simulation I recommend setting the simulation speed to "hyper" and the cell size to "small." Scatter some dots and let it run.

    The difference will eventually be (actually has always been) a difference of speed, the ability to compute faster what everybody else can compute anyway. The universe at large cannot be fully computed by part of it, and if so, given this principle, we can never fully understand the universe, not even ourselves. There are only tiny bubbles of understanding within the vast chaos of patterns.

    Trying to perfectly simulate all the events within a closed system using the constituents of that system obviously isn't possible. That's the case with our universe at large, but why can't we fully understand all the internal processes of, say, a human brain? Compared to the size and mass of this universe, several kilograms of electro-chemical circuitry is relatively easy to comprehend.
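    The rules of the Game of Life fit in a few lines of code; a minimal sketch (a set-based Python version, not the simulator linked above):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step if it has exactly 3 live neighbors,
    # or 2 live neighbors and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'glider': five cells whose pattern travels diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = life_step(after4)
# After 4 steps the glider reappears shifted by (1, 1).
assert after4 == {(x + 1, y + 1) for (x, y) in glider}
```

    Three rules, five cells, and already a structure that propagates indefinitely; this is the same flavor of surprise the post describes.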
    aeonbeat     Mon, Dec 24, 2007  Permanent link
    Check also this post and this one. Finding the one pattern/algorithm, besides solving the riddle, may also make it lose its meaning, its purpose... Dazed and Confused...
    Wildcat     Mon, Dec 24, 2007  Permanent link
    The difference will eventually be (actually has always been) a difference of speed, the ability to compute faster what everybody else can compute anyway.

    yes, this particular statement raises a number of questions, chief amongst which is the following:
    isn't it the case that an increase in speed (quantity) eventually allows a difference in quality? (quality I take here to mean understanding, or pattern recognition)
    again, I am not certain I understand Wolfram's paradigm accurately; however, by analogy, isn't an increase in speed tantamount to an increase in the amount of information processed per given amount of time?
    if (and that's a great 'if' here) we can increase the speed of computation by a factor equal to or greater than the speed at which the universe produces information, doesn't that imply that we can theoretically "understand" the universe? and ourselves in the process?
    aeonbeat     Mon, Dec 24, 2007  Permanent link
    too much theory means not enough action; in other words, too much doubt causes a loss of focus, and not the best results either. if completely understanding something means simply experiencing it, just starting to practice more of what we preach, having in mind the rush for novelty, striving to reach the top and finally improve to a better (or best) shape, then don't we actually do whatever is needed not to reach the end, and stay forever in the cycle, keeping the creative machine running? I mean, why do we want to understand the universe? and I want a really good answer! I know we all want it, but why?
    bpwnes     Mon, Dec 24, 2007  Permanent link
    I designed a simple computer program to calculate cellular automata. It is far too small and needs to be updated. I'll get around to it and try to share it with you all.
    Spaceweaver     Wed, Dec 26, 2007  Permanent link
    To Wildcat: The question of whether quantity in terms of speed can be turned into quality in terms of understanding or intelligence is a hard and very interesting one. In view of Wolfram's work, I would not be overstating if I called it the 21st century's alchemical question: can we produce the modern philosopher's stone that will turn computational speed into the gold of intelligence? Or should we demystify the concept of intelligence and realize it is just speed, nothing but speed? I am not entirely settled on this question. I lean with caution towards the principle of computational equivalence, which essentially demystifies intelligence. Admittedly, however, it is very difficult to accept that there is nothing over and above, and intrinsically different from, computational speed in what we mean when we think about intelligence.

    As to your other remark, I think that the answer is negative, that is, if Wolfram's hypothesis is valid. I would point out here the weakness of the principle: Wolfram presents the principle of computational equivalence as a scientific hypothesis and not as a mathematical theorem. As such, there is no possible mathematical proof of it. But even as a scientific hypothesis it is problematic, because it offers no clear method by which it could be refuted. So the fact is that it cannot rise much above the status of a very strong intuition. Yet, assuming it is correct, the physical limits of our universe will not allow us to compute the universe fast enough, because (this is a huge simplification of course) the universe is the fastest computation of itself. In other words, the principle of computational equivalence states that if we imagine the universe as a series of numbers, this series has no shortcut formula, so we cannot predict its unfoldment but only to simulate it, and any simulation of it is slower than its natural unfoldment.
    Spaceweaver     Wed, Dec 26, 2007  Permanent link
    To Gouranga: I want a good answer too :-) By way of approximating such an answer, I think that we want to understand the universe because we want to understand ourselves. The universe is just a good metaphor :-) As to why we want to understand ourselves, this is the tough one! First, I doubt that it is a universal human trait; sometimes I think it is a kind of mutation, an abnormality of sorts. My guess is that it has to do with the self-referential nature of consciousness combined with the intrinsic need of consciousness to expand. Understanding is the experience of that expansion. I don't know if it's an answer, or if it's good, but it is a try, and I'll keep on trying... maybe it is worth creating a collaborative project.
    3LSZVJA9     Wed, Dec 26, 2007  Permanent link
    Thanks for this thread Spaceweaver.


    What about the analog/digital dichotomy?

    Are we past that yet?
    aeonbeat     Thu, Dec 27, 2007  Permanent link
    understanding, in the sense of experiencing some truth, is still an act, albeit just an inner chemical reaction. this is already practicing what we're preaching. first we don't see the door, then we don't know how to open it, but what's most interesting, we're not sure why we want to enter... I can't remember who said that when science climbs the mountain of knowledge, it will find a group of theologians at the top... or some monks ;) I would gladly participate in such a project, finding out why we want to enter the door.

    to 3LSZVJA9: please, continue!
    Wildcat     Thu, Dec 27, 2007  Permanent link
    so we cannot predict its unfoldment but only to simulate it, and any simulation of it is slower than its natural unfoldment.

    Granted, of course, there is doubt that a full simulation is the thing in itself. However, and this is the point of interest to me, is it not the case that this assumes the universe is a complete simulation of itself?
    Might it not be the case that the computation is multidimensional? In that case THIS universe is the particular result of a much greater/larger computation, that of a multiverse, which might very well be finite but indefinite.

    I have a working assumption that a multidimensional computation (Q computation?) can solve/resolve the equation of a particular universe, such as ours.
    3LSZVJA9     Thu, Dec 27, 2007  Permanent link
    Seems to me that the digital support depends more on models created through numbers.
    Models created through numbers will never be more than points periodically scattered through a dimensional system.

    Numbers themselves tell us that between one point and the other there are infinite other points. Those points will never be in the model, by definition.

    It's the same idea when we say that straight lines don't exist, they are our own invention, they are a very, very crude simplification of what's beyond them.

    Even fractals and randomized mathematical systems are little more than evolved descendants of the straight line.

    The only way the universe wouldn't tell the difference is if we processed at the speed of light.

    Only we can make sense of mathematical models; take the human off the output and mathematical models are completely useless.

    Mathematical models need us.

    You would say that our brains are digital too, they use pulses.
    But even a single pulse has an attack and a decay; it fades in and fades out.

    (within a point there are infinite points).

    Analogical supports don't rely on models, they absorb input according to their own material nature, they arrange themselves, they are their own model (thanks spaceweaver).

    And the slower the absorption, the better, that's why modernity (speed) didn't like them in the end.

    Often, analogical supports capture even types of input that were not meant to be captured, they make lousy laboratory tools.

    There aren't only impulses running through our brains, there are chemical substances, liquids, all kinds of complications.

    Note: Last night I would have been incapable of synthesizing all these ideas together;
    I did it in my sleep, without even knowing that I wanted to do it.

    Analogies can put together concepts that landed very far away from each other.

    Anyway, it seems very difficult to imagine how to design and create a fast, complex, analogical support for our cosmological computations.

    Well, we can use the ones we were given.

    The ones we are.

    (postmodernity loves and hates us)

    Spaceweaver     Thu, Dec 27, 2007  Permanent link
    Wildcat: Computation is a borderline concept where the physical and the metaphysical meet. The knowledge of how to implement a computation derives from our understanding of the physical universe. If our knowledge of physics expands, for example to quantum field theory, our computational capabilities expand in accordance, in this case to quantum computation. Suppose that we gain a working knowledge of the multiverse, as you suggest, and implement a multiversal computer, which is one way to understand quantum computing. Our known universe will have expanded together with our capabilities, but in terms of 'understanding' we will be in the same place on a higher plane of intelligence, somewhat like the Red Queen in Alice. Still, the limits predicted by the principle of computational equivalence stand, because this principle is not about our computational capabilities, but rather about the general relations between the computations we can perform at any technological level. This is daunting and hard to accept, but so is the speed of light :-) It is interesting to note that we can break the limits of our understanding and our computational capabilities an indefinite number of times without refuting this principle. My very far-shot intuition is that this principle is of the same family as Gödel's incompleteness theorem, but more general, having to do with the very limits of conscious observation.
    Spaceweaver     Thu, Dec 27, 2007  Permanent link
    3LSZVJA9: You have a good point regarding the view of the universe as analog. It is also interesting to note the shift in the general world view from analog descriptions to digital descriptions during the 20th century, as part of modernism and postmodernism. It seems to me, however, that bottom line, analog and digital descriptions are ontologically equivalent, the difference between them being more a representational one. Our thought processes are a combination of analog and digital descriptions: while most of our raw perceptions are experienced as analog, the moment verbal description is involved they become digital, or at least discrete. As to whether analog experience circumvents the limits on understanding that arise from our digital approach, my guess is that it does not, but I have to think about it. Thanks :-)
    michaelerule     Tue, Aug 19, 2008  Permanent link
    There is a theoretical problem in this post: understanding cannot boil down to raw computational power. There are problems whose complexity grows faster than our ability to perform computation. This is based on the assumption that our economy, and secondarily our computational power, grows exponentially.

    ( Some science fiction writers have predicted a technological singularity, but I do not believe that this will lead to super-exponential growth in any sector. A technological singularity will at best represent the creation of a super-human intelligence. However, the rate of progress will still be proportional to the current power of the system, hence still exponential growth )

    There are many, many problems for which the only known solution grows super-exponentially. This means that the difficulty of the problem grows faster than our computational power, and so raw computational power can never surpass the complexity of the problem.

    I refer you to Render's post "Algorithms are the intellectual Currency of the Future," which suggests that what matters is not how many floating point operations per second we can perform, but how cleverly we make use of them.

    Additionally, even throwing exponentially increasing power at exponentially complex problems does not guarantee progress, since the size and rate constant of the problem could grow faster than our ability to perform computation ( a race between the rate of growth in problems and the rate of growth in computing power ).

    If the size of the dataset is held constant, then exponentially growing computational power will eventually be able to solve it, but there will always be more intractable problems out there.
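    This race can be made concrete. A sketch (the doubling budget and the factorial search cost are illustrative assumptions, not figures from the comment): compute power that doubles each period, 2**n, pitted against a brute-force search over all n! orderings of n items; the factorial cost eventually outruns any exponential budget.

```python
from math import factorial

def budget(n):
    """Compute available after n periods, doubling each period."""
    return 2 ** n

def brute_force_cost(n):
    """Cost of exhaustively checking all orderings of n items."""
    return factorial(n)

# Find where the super-exponential cost first exceeds the exponential budget.
crossover = next(n for n in range(1, 100) if brute_force_cost(n) > budget(n))
print(crossover)  # -> 4: from n = 4 onward, n! > 2**n
```

    Changing the doubling rate only shifts the crossover point; it never removes it, which is the heart of the argument above.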

    ( the whole of this comment might not hold true if there are significant advances in quantum computing, but I consider that a completely different entity )
    Spaceweaver     Fri, Sep 12, 2008  Permanent link
    Michaelerule: I can't see the problem you are referring to, since your argument is pretty much parallel to the one presented in the post. I agree that understanding in its wider sense cannot be boiled down to computation, at least not trivially. There is, however, an important sense in which understanding boils down to the ability to construct a computational model of a phenomenon and simulate it on a computer for the purpose of predicting its future activity or even discovering novel behaviors. Maxwell's equations, for example, do represent an understanding of electromagnetic phenomena, an important kind of understanding that can be translated into a computational model. Once we have such a model, its usefulness and effectiveness do translate into computational power. Likewise for many other important examples.