    levels of convergence - part 2
    Project: Polytopia

    by Renata Lemos, Lucia Santaella

    Part 2 - Mind and Matter; Intelligence and Life

    Converging technologies are indeed getting inside the human brain, in applications deeply integrated with cognitive systems. However, this does not mean that any significant change in the inner levels of human consciousness is likely to occur. The terms and expressions used by Ray Kurzweil are packed with metaphors comparing the human mind to a computer: the "software of intelligence"; "reverse-engineering the human brain". It is clear, given Kurzweil's terminology, that his approach is based on materialism and reductionism: intelligence is some kind of biological software that will be replicated once we reverse-engineer the human brain. According to this view, the human brain is a biological computer. It follows that if consciousness is a property of a biological computer, then any other kind of computer able to fully replicate the functioning of the brain could be capable of consciousness.

    Those who believe in this possibility are defenders of a technology-based new era of evolution: machines will not only be able to replicate all human qualities, but will merge with humans and generate a new species of super-intelligent beings. This merger, together with the exponential speed of technological advancement, will eventually alter the very nature of reality, resulting in a technological Singularity (Kurzweil, 2005).

    There are many advocates of Strong AI. Marvin Minsky (1990) and Ray Kurzweil (1999, 2005, 2006) stand as two of its most prominent representatives. According to John Searle (1980), Strong AI refers to “the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition” (Searle, 1980, p. 417). There are two main ways in which AI could achieve this goal. The first is based on programming that tries to represent the symbolic structures of human minds; the second is based on the study and artificial replication of neural networks within the brain.

    Minsky (1990) calls the effort to achieve Strong AI through symbolic research the "top-down" approach, and the effort to achieve it through connectionist research the "bottom-up" approach. The first strategy depends heavily on interpretation, context and self; the second depends on nothing but decoding the functions of neural networks and programming artificial ones. Partly for this reason, the symbolic approach in AI has been vanishing, while the connectionist approach has continued to prosper.
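As a toy illustration of the two strategies (a sketch of my own, not from the article): the same trivial task, XOR, can be solved "top-down" by a hand-written symbolic rule, or "bottom-up" by training a tiny neural network from examples. All function names and hyperparameters below are invented for illustration.

```python
# Illustrative sketch only: XOR solved "top-down" by an explicit rule and
# "bottom-up" by a small neural network that learns from examples.
import math
import random

def xor_symbolic(a, b):
    # Top-down: the programmer encodes the concept explicitly as a rule.
    return int(a != b)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor_network(epochs=20000, lr=0.5, seed=0):
    # Bottom-up: a 2-2-1 network adjusts its weights from examples; no
    # explicit XOR rule appears anywhere in this function.
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden layer (+bias)
    v = [rng.uniform(-1, 1) for _ in range(3)]                      # output layer (+bias)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (a, b), t in data:
            h = [sigmoid(w[i][0] * a + w[i][1] * b + w[i][2]) for i in range(2)]
            o = sigmoid(v[0] * h[0] + v[1] * h[1] + v[2])
            d_o = (o - t) * o * (1 - o)  # gradient at the output unit
            for i in range(2):
                d_h = d_o * v[i] * h[i] * (1 - h[i])
                v[i] -= lr * d_o * h[i]
                w[i][0] -= lr * d_h * a
                w[i][1] -= lr * d_h * b
                w[i][2] -= lr * d_h
            v[2] -= lr * d_o
    def predict(a, b):
        h = [sigmoid(w[i][0] * a + w[i][1] * b + w[i][2]) for i in range(2)]
        return round(sigmoid(v[0] * h[0] + v[1] * h[1] + v[2]))
    return predict  # whether training converges depends on the random seed
```

The contrast is the point of the sketch: in the first function the "meaning" of XOR is written down by a person; in the second it exists, if at all, only as a pattern of weights.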

    Bringing attention to the symbolic limitations of AI, Searle (2002) has applied the Chinese Room Argument to what happened with Deep Blue. When beating Kasparov, Deep Blue was not playing chess, because the concept of chess has symbolic layers of meaning attached to it. A computer could not possibly have access to the symbolic level of a chess game, given that "the symbols in the computer mean nothing at all to the computer" (Searle, 2002). So while Kasparov had an understanding of chess based on its symbolic meaning, Deep Blue was merely executing a program designed to arrive at decisions by calculating over possibilities.
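Searle's point about calculation over possibilities can be made concrete with a minimal game-tree search. The sketch below (hypothetical code of mine, not Deep Blue's) plays the simple game of Nim perfectly by pure enumeration; at no point does any "meaning" of the game enter the program.

```python
# Hypothetical sketch: a minimax search over Nim. The program "decides"
# purely by enumerating possibilities and scoring outcomes; the symbols
# mean nothing to it, in exactly the sense of Searle's remark.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Player to move takes 1-3 stones; taking the last stone wins.
    Returns (score, take): score +1 = forced win, -1 = forced loss."""
    if stones == 0:
        return (-1, 0)  # the previous player took the last stone and won
    best = (-1, 1)
    for take in (1, 2, 3):
        if take <= stones:
            opp_score, _ = best_move(stones - take)
            if -opp_score > best[0]:  # our score is the negation of theirs
                best = (-opp_score, take)
    return best
```

From 5 stones the search finds the winning reply (take 1, leaving a multiple of 4 for the opponent); from 4 stones it discovers, by exhausting the possibilities, that every branch loses.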

    Searle's view represents the Weak AI approach, which relies on the uniqueness of our aesthetic, religious, philosophical and deep symbolic/archetypal levels to rebut the possibility of all-powerful programming and nano-engineering. Roger Penrose (1989, 1994) and William Dembski (2002a, 2002b, 2007) are also defenders of Weak AI, though in different ways.

    The controversy surrounding this debate is so broad that even within the same approach there are important epistemological differences. Searle and Penrose are both connectionists, and would represent the equivalent, in neuroscience, of proponents of the "bottom-up" approach in AI. So while Searle and Penrose both argue against the possibility that machines could fully replicate mind, they seem to believe that consciousness is an emergent property of biological neural networks within the brain.

    According to this view, the mind is a property of a biological system. The main argument against Strong AI would then be that only biological systems can possess the emergent property of consciousness. Connectionists understand consciousness as a property of a certain level of biological complexity; Strong AI proponents understand that once computation achieves that level of complexity, artificial consciousness will emerge. The two positions are, on this point, strikingly similar.

    Dembski, on the other hand, believes that reducing consciousness to a complex property of a biological neural network would be equivalent to the reductionism practiced by proponents of Strong AI. He states that "...nothing I've seen to date leads me to believe that intelligence can properly be subsumed under complexity or computation" (Dembski, 2002a). In Dembski's perspective, wherever there is a first person, there is a non-reducible entity; the uniqueness of this subjective first person cannot be artificially replicated. Mind cannot be a property of matter, according to Dembski (2007), because all properties of matter would have to be material, since they come from matter in the first place. David Jakobsen (2005) has commented on the differences between the approaches of Kurzweil, Searle and Dembski:

    "Ray Kurzweil's strongest argument ... is to point out the arbitrariness present in the distinctions of John Searle between silicon and biology. Thus the question is thrown back into another domain – the old mind/matter debate. A debate where the physicalist has the upper hand these days and views like the one of William Dembski can be defeated by calling it old fashioned" (Jakobsen, 2005).

    The main difference between symbolists (such as Dembski) and connectionists (such as Kurzweil and Minsky) is that the former approach is centered on levels of meaning, while the latter is centered on levels of information. NBIC convergence adds something new here: converging technologies might change the grounds of the debate by enabling direct interference by artificial intelligent agents within the systems underlying the conscious states of a first person, in Dembski's sense.

    The main focus of the debate might then shift from determining the possibility of Strong AI to establishing the possibility of hybrid forms of intelligence, whose sense of self-awareness is either established through or mediated by artificial agents. AI is a product of the biological evolution of human intelligence; through NBIC convergence, however, it will most certainly enhance human intelligence in a new sort of hybrid bio-technological evolutionary process.

    Consciousness remains grounded in and limited to a biological platform; however, cognitive nano applications have the potential to artificially enhance and alter conscious states. Because these nano agents are endowed with artificial degrees of intelligence, a principle of hybridization is directly established between mental processes and artificial intelligence. This hybrid interface would simultaneously pervade mind and matter.

    The ways in which nano artifacts and neural cells interact are informational. A shared continuum of information and meaning thus represents the framework within which structures of hybrid systems of intelligence could be formed. All cognitive and mental processes have to do with the processing of information and the attribution of meaning: consciousness is always about perception, perception is always about interpretation, and interpretation always refers to information. Matter is not only a vehicle of information; it also, in itself, embodies physical patterns of information. Within the context of NBIC convergence, the informational nature of reality becomes evident (Floridi, 2007a).

    The recent rise of soft-materialism (Dembski, 2007) is explained by the emergence of a revised materialism based on information. According to the soft-materialist view, if we can decode reality, we can recode ourselves. And since mind has an informational relation to matter, it follows that if we decode matter, it will eventually lead us into mind. NBIC convergence is being heralded as the knight in shining armor that will lead the conquest of mind by unlocking all the "programming" secrets of matter. This is the "bottom-up" approach to AI.

    Beneath all the differences between these approaches stands one shared premise: the relationship between mind and matter is informational at its core. In this context, symbolists such as Dembski and connectionists such as Searle and Penrose, and even Kurzweil and Minsky, find common ground. Biological evolution could possibly be converging with technological evolution: if in its essence all matter is informational, and if information determines the structural designs of matter in all its forms, then biology and technology are information-based processes which share a common semiotic nature. There seems to be a level of convergence between the symbolic and material levels of reality, based on intersemiosis.

    3 Digital Levels

    There are other levels of convergence between biological and digital realities. AI is behind the development of Floridi's philosophy of information (Floridi, 2002), which interprets NBIC technologies as forming elements of an information-based, all-encompassing environment: the infosphere. Within such an environment, permeated by intelligent processes, all beings and things acquire an informational ITentity. The philosophy of information interprets the ontological impact of AI and the "intelligentification" of external reality (Floridi, 2007b).

    Advances in RFID (radio frequency identification) technologies allow any physical object to acquire an informational identity, called an ITentity by Floridi (2007b). These very small RFID tags are microchips that can be incorporated into living and non-living beings and objects, and provide a wireless link to networked information systems. This type of technology makes possible a new, expanded hybrid network of digital and biological informational entities, one that is not restricted to any computational platform but expands into the surrounding environment, configuring an infosphere. In this infospheric network, human consciousness relates and interacts with AI agents, forming new hybrid networks of collective intelligence. This combination of human intelligence and AI is expressed by the concept of the inforg, or informational organism (Floridi, 2007b).

    Assuming that by applying RFID technologies to objects it is possible to confer on each object an ITentity (and that this digital inforg possesses a certain degree of AI, being able to communicate and interact over the Net), then an "intelligentification" of things occurs. Beings acquire properties of electronic devices (digital expansion of human cognition), and electronic devices acquire properties of living creatures (intelligence and communication). NBIC developments are making the boundaries between online and offline, digital and non-digital, less and less clear. Floridi's ideas point to a convergence between multiple levels of reality in terms of the convergence between online and offline: be it digital or genetic, everything is code, everything is information; and if everything is information, everything communicates. Multiple levels of reality are being digitally connected and expanded.
    The development of information technologies literally creates new levels of reality by modifying and expanding the cognitive reach of human consciousness.
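Floridi's vocabulary can be loosely modeled as a data structure (an assumed illustration of mine; the names here are invented and this is not an actual RFID API): an ITentity is a tagged entity, biological or not, that announces and perceives messages in a shared infosphere.

```python
# Hypothetical model of the ITentity/infosphere/inforg vocabulary: every
# entity carries an informational identity and can announce and perceive
# messages in a shared informational environment.
from dataclasses import dataclass, field

@dataclass
class ITentity:
    tag_id: str   # the RFID-like informational identifier
    kind: str     # e.g. "biological" or "artifact"
    log: list = field(default_factory=list)

    def announce(self, infosphere, message):
        # publish information about itself to the shared environment
        infosphere.append((self.tag_id, message))

    def perceive(self, infosphere):
        # read everything announced by other entities in the environment
        self.log = [(tag, msg) for tag, msg in infosphere if tag != self.tag_id]

infosphere = []
fridge = ITentity("rfid:0001", "artifact")
person = ITentity("rfid:0002", "biological")
fridge.announce(infosphere, "milk expired")
person.perceive(infosphere)  # the person now carries the fridge's announcement
```

The point of the sketch is the symmetry: the artifact and the human participate in the infosphere through exactly the same informational interface, which is what the "inforg" concept asserts.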

    Cyberspace and Virtual Reality (VR) are digital immersion environments which can also be interpreted as parallel realities for the expression and flow of human consciousness. The complex interactions connecting AI agents and human agents modify the structure of reality itself, which seems increasingly constituted by a technological mix of ever more integrated levels of reality. The digital becomes the common language uniting the organic and the non-organic.

    The digital expansion of human cognition is analyzed by Floridi (2007b) in its external aspects, such as the establishment of an infosphere. Ascott (2003) also addresses this issue; however, his analysis is centered on the internal realm of human experience, placing consciousness at the core of his research. Ascott (2003) presents the idea of convergence between levels of reality through the concept of Moist Reality: the coupling of an inorganic, digital, "dry" reality with an organic, biological, "wet" reality. He grants cyberspace the status of a level of reality of its own. In this cyber level of reality, human cognition is augmented digitally; he calls this electronically enhanced cognition cyberception.

    Cyberception is about the convergence of new conceptual and cognitive aspects of human consciousness, triggered by the hyper-connectivity of cyberspace (Ascott, 1994). The concept of Moist Reality, formed by the coupling of the "wet" dimension of biology to the "dry" dimension of digital technologies, is very close to the concept of the infosphere. Ascott also identifies new forms of "artificial consciousness" emerging from these new forms of interaction between man and machine.

    Another important point of contact between Floridi and Ascott is that it is becoming more and more difficult to distinguish, in the universe as a whole, man from non-man. Hybrid cognitive interfaces between human and artificial intelligence are simultaneously internal and individual (neural) and external and collective (infosphere). The basic differences in the essence of organic and inorganic attributes start to be effaced by NBIC convergence, giving rise to a new ontological perspective of unity in diversity. This perspective is transdisciplinary and portrays converging technologies as the main element of a new philosophical ontology based on dynamics of information and meaning.

    Mon, May 31, 2010  Permanent link


    Spaceweaver     Tue, Jun 1, 2010  Permanent link
    Thank you Renata for this comprehensive triple post.

    Mind cannot be a property of matter, according to Dembski (2007), because all properties of matter would have to be material, since they come from matter in the first place.

    This seems to be a dualistic approach in disguise (or without disguise) and does not have a sound philosophical ground. We could say the same about life: life cannot be a property of matter because all properties of matter would have to be material, since they come from matter in the first place... therefore life is essentially not material. However, it has been established in increasing detail that life arises as a complex organization of matter. I do not see why mind would be essentially different. The analogy could go a bit further: if life is the general state of living - that is, the special modes of activity and interactivity attributed to certain complex material organizations (organisms) - then mind is the general state of minding - that is, the special modes of activity and interactivity attributed to certain complex material organizations (minds, mindful agents).

    What is special about organisms and minds as we know them is the very complex and generally intractable self-organized process of their origination - evolution by natural selection, that is. The question of whether we can or cannot create an artificial mind is malformed. There is no question that it is possible in principle, based on our knowledge of the physical world. There is nothing essentially missing - no mysterious force or substance absent from our understanding. It is the details of the complex organization that gives rise to mind that we are missing...

    The clearer question we should ask instead is whether we can replicate (or even improve on) by design the grand achievements of natural selection. In other words, can we reverse engineer not only natural evolution's products, which we already do to an extent, but also its very processes of production? Philosophically speaking, it seems that intentional (conscious) design has an obvious advantage over blind selection. A deeper consideration, however, shows how profound the question is. Intentional selection (in distinction to natural selection) is indeed a very powerful paradigm, but its power comes at a dear cost: the extreme narrowing of options due to its methodical, goal-directed nature. It is not at all clear whether we can find by design an ultimate shortcut to the intractable path evolution has serendipitously found. It is not given that we can reverse engineer, or even understand, that which was never engineered in the first place.

    Yet we can always emulate evolution itself faster, much faster, and hope to obtain a similar result. But then we would have to give up the intentional shaping of the end results... and admit the inherent limitations of design. This conundrum will most probably demand a radical shift of paradigms - a wild science - which I believe is already on its way.

    Within the context of NBIC convergence, the informational nature of reality becomes evident (Floridi, 2007a).

    We can say that the nature of knowledge is informational, that the nature of perception is informational, that the nature of cognition is informational, even that the nature of conscious observation is probably, at least in part, informational. We cannot claim, however, that the nature of reality is informational without clarifying the ontological grounds of such a claim. The only approach that directly implies that the nature of reality is informational is the constructivist approach, which claims (loosely speaking) that reality - any reality we can relate to - is a cognitive construction. It is indeed evident that NBIC convergence is first and foremost a cognitive construction, and therefore projects its own informational nature onto reality, whatever that might be.

    In other words, we choose to describe our reality as informational because we recognize the huge evolutionary advantage that comes with it. Advantage, however, is not an ontological principle; it is rather a special kind of blindness :-) By considering information as the foundation of our reality (and our identity) we define who we are and how our future is going to unfold. We are to become inforgs: creatures made of information.

    Nevertheless, for the info-creatures we are becoming, it is important, even critical, to keep in mind that there might have been, and actually are and always will be, many other unfathomable options for describing reality - options that would take us on completely different, and not in the least less interesting, evolutionary paths.

    btw, it would be nice to have the full references.

    relemorais     Tue, Oct 16, 2012  Permanent link
    hey, awesome comment.

    the full list of references is available here: 
    Spaceweaver     Fri, Oct 26, 2012  Permanent link
    Thanks Renata, I reread your posts and the comment dating more than two years back. Indeed awesome ;-)
    relemorais     Fri, Oct 26, 2012  Permanent link