All original media (images, animations, videos, writing) on this account is licensed Creative Commons Attribution-NonCommercial-ShareAlike 3.0. Please contact the author (mrule7404 at gmail dot com) for permission for commercial use.
    From michaelerule's personal cargo

    notes : modular conscious entities
    Once conscious systems are virtualized, we will have the ability to do some unusual things: for example, separating the two halves of the brain and putting them on different tasks, or turning off inhibition at will. Such events already happen, caused by damage to or surgical manipulation of the human brain. For a virtual conscious system (or a heavily modified biological neural system), we should be able to recreate these events reliably and reversibly.

    Furthermore, it may be possible to swap parts of different individuals. I am not sure if I mean individuals in the human sense or individuals as conscious software entities. A mature human consciousness would not take well to having its frontal lobe swapped out for that of another person, but it might, with practice, handle swapping out sensory-motor modules. If modules are trained from the start (from birth, or from first awakening for a software entity) to be swappable, we could create a set of self-assembling processing modules that could instantly adapt to new problems. However, if you try to use bits of "people" in this framework, they might go insane rather quickly.
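The swappable-module idea above can be sketched very loosely in code. Everything here is hypothetical and illustrative: modules trained against a fixed input/output contract, hosted in named slots, so one module can be hot-swapped for another of the same kind without disturbing the rest of the system.

```python
# Hypothetical sketch of swappable processing modules.
# All class and method names are invented for illustration.

class Module:
    """A processing module with a fixed input/output contract."""
    kind = "generic"

    def process(self, signal):
        raise NotImplementedError

class EdgeDetector(Module):
    kind = "vision"
    def process(self, signal):
        # toy stand-in for a trained sensory transform
        return [abs(b - a) for a, b in zip(signal, signal[1:])]

class Smoother(Module):
    kind = "vision"
    def process(self, signal):
        return [(a + b) / 2 for a, b in zip(signal, signal[1:])]

class Agent:
    """Hosts one module per kind; modules are hot-swappable."""
    def __init__(self):
        self.slots = {}

    def attach(self, module):
        self.slots[module.kind] = module   # replaces any prior module

    def perceive(self, kind, signal):
        return self.slots[kind].process(signal)

agent = Agent()
agent.attach(EdgeDetector())
out1 = agent.perceive("vision", [0, 1, 4, 4])   # edge responses: [1, 3, 0]
agent.attach(Smoother())                        # swap in a different module
out2 = agent.perceive("vision", [0, 1, 4, 4])   # same slot, new behavior
```

The point of the fixed contract is that nothing downstream of the slot needs retraining when the module changes, which is the property the post asks of modules "trained from the start to be swappable".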

    What might be more feasible is extremely high-bandwidth communication between ordinary humans, using implantable electrode arrays. It would be like using a language more expressive than any spoken language. Depending on exactly how you link separate individuals together, you may get radically different results. For instance, wiring one man's motor cortex to another man's frontal lobe seems like a very bad idea, since having your super-ego controlled by your friend's left toe never ends well. A person could probably handle a direct sensory-motor link, though, since this would more closely resemble ordinary communication, like speech.

    It is also possible that strong, human-level artificial intelligence will first emerge from such a modular framework. I am not deeply familiar with the state of the art in software AI, but I believe that as the field progresses, training a software solution for a task will become a significant part of the development process. I can imagine the day that someone takes a group of pre-trained software AIs and links them together under a common executive module, and accidentally creates something we might call conscious.
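The "common executive module" scenario could be caricatured as follows. This is a minimal, purely hypothetical sketch: two stand-in "pre-trained" specialists and an executive that dispatches each task by its modality; every name here is invented.

```python
# Hypothetical sketch: pre-trained "specialist" functions linked under
# a common executive that dispatches each task by its modality.
# The specialists are trivial stand-ins for trained subsystems.

def parse_text(task):
    return f"parsed:{task['data']}"

def classify_image(task):
    return f"label:{task['data']}"

SPECIALISTS = {
    "text":  parse_text,
    "image": classify_image,
}

def executive(task):
    """Common executive module: inspect the task, dispatch, return result."""
    handler = SPECIALISTS.get(task["modality"])
    if handler is None:
        raise ValueError(f"no specialist for {task['modality']!r}")
    return handler(task)

print(executive({"modality": "text", "data": "hello"}))   # prints parsed:hello
```

The interesting (and entirely speculative) step in the post is what happens when the executive does more than dispatch, i.e. when it maintains shared state across the specialists.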

    Tue, Nov 18, 2008

    Synapses (1)

    shawn_sims     Tue, Nov 18, 2008
    "I can imagine the day that someone takes a group of pre-trained software AIs and links them together under a common executive module, and accidentally creates something we might call conscious."

    Interesting: something of a swarm intelligence. A conscious entity, though? (Is a single ant conscious, or simply responsive?) Can the single agent comprehend and evaluate its own state of perception, thus spawning consciousness? You might argue yes; I am not sure. One thing I am sure of, however, is that the training or pre-programming that occurs in AI will be the threshold through which we allow AI to become conscious, either singly or collectively.
    michaelerule     Wed, Nov 19, 2008
    That's a good question, and I am not sure how to answer it, since the terms "conscious", "thought", and even "swarm intelligence" are poorly defined. Consciousness must occur on a continuum, though there may be some minimal structure required for what we would consider human consciousness. An ant is conscious of itself as an ant, though this may be a very limited experience. The ant may only be aware, rather than self-aware. An ant cannot, by definition, comprehend the emergent behavior of its colony, since that would require the organism to store more information than it can physically represent in its limited supply of neurons. We can, however, argue that the ant colony as a whole has a sense of awareness of its own, one that is much greater than that of a single ant.

    This reasoning carries over to the human nervous system. Each neuron is quite complex and can perform a considerable amount of computation on its own. However, a single neuron can hardly be aware of the activity in the entire brain. A neuron can be conscious of a subset, or a reduced summary, of the activity in the whole system. You and I are swarm intelligences, and it seems quite clear that we are conscious. It is also true that we are composed of several conscious modules: the spatially localized neural circuits for our various abilities, such as visual processing, speech recognition, and metaphor comprehension. It is possible to deactivate a module (via stroke, surgery, drugs, or meditation) without removing the conscious character of the system. This suggests that there is a unit of awareness larger than a single ant or neuron, but smaller than an entire human consciousness.