renata lemos morais (BR)
Contributor to project: Polytopia
Immortal since May 31, 2010



    dear bruce [sterling],

    The reason for this interview is that during Early Atemporality - the posthistorical or ahistorical period you suggest we are living in - 'we are struggling with what it means and how it’s different from post-modernism'. Since you are one of the main proponents of this concept, your help in clarifying what it means is greatly appreciated. I suspect that previous versions of your ideas about atemporality might have been lurking in many of your works as a novelist, a top example being The Difference Engine. This novel hints at atemporal features of cultural evolution. William Gibson has said that 'one of the impulses that led to The Difference Engine was a sense Bruce Sterling and I had of the Industrial Revolution having been a far deeper and more intense shift than we ordinarily, culturally, give it credit for having been'.

    renata - Was atemporality already present at the time of the Industrial Revolution as a cultural phenomenon? If so, how does it differ from the atemporality which is based on contemporary network culture?

    bruces - *I wouldn't say that the Industrial Revolution had "atemporality." The Industrial Revolution was extremely keen on synchronization, on accurate railroad schedules, on time-zones for telegraphy. A conceptual disruption in timekeeping such as Einstein's relativity was decades ahead of them.

    *There were certainly episodes in the Industrial Revolution when people were agitated about time and space — for instance, anxiety about the disorienting speed of rail travel. However, they had firm ideas about historical development, especially compared to us. A network of the kind we have today doesn't behave with the comprehensive mechanical timing of a railroad. We have to face new atemporal anxieties, such as the spasms and crashes of microsecond stock-trading, where it's literally impossible to determine what electronic event had strict temporal priority.



    renata - Your ideas on atemporality have ignited interesting commentaries, such as Kazys Varnelis's:

    If any observation about history defines our time, it's science fiction novelist Bruce Sterling's conclusion that network culture produces a form of historical consciousness marked by atemporality. By this, Sterling means that having obtained near-total instant access to information, our desire and ability to situate ourselves within any kind of broader historical structure have dissipated. The temporal compression caused by globalization and networking technologies, together with an accelerating capitalism, has intensified the ahistorical qualities of modernism and postmodernism, producing the atemporality of network culture - Kazys Varnelis

    Is your understanding of atemporality conditioned by the 'temporal compression caused by globalization and networking technologies', as Varnelis suggested?

    bruces - *The time compression is certainly part of the issue, but there are also time extensions in network culture. For instance, what is the difference between "the year 1955" and "the year 1955 as revealed to me by a Google Search"? Analog remnants of 1955 tend to be marred by entropy, but digitized clips of 1955 will load with the same briskness and efficiency as digital clips from 1965, 1975, 1985 and so forth. In this situation, our relationship to history feels extended rather than compressed, because data from the past feels just as accessible as data generated yesterday. If you are re-using this material to create contemporary cultural artifacts, you don't just get "compression," you also get a skeuomorphism, a temporal creole — a Brazilian anthropophagy when all the decades are in one software stew-pot.

    renata - Is atemporality simply "a form of historical consciousness" produced by network culture?

    bruces - *I wouldn't call that process "simple." Also, the network culture we have now is temporary. With that said, it would be very hard to be or feel atemporal with only analog technology.

    *The network is required, although the network is not "consciousness," it's a variegated set of devices and services embedded in culture and transforming culture.

    *By talking about "atemporality," I'm arguing that the ways that cultures form historical consciousness are bound up in the ways that cultures access information — the ways we reason and argue about history and futurity. When one uses grand terms such as "history" and "consciousness," that suggests that people can touch absolute timeless realities outside the ways that human beings test and discuss history and consciousness. We might indeed have numinous, wordless encounters with reality, but we can't make them part of our culture unless we convey them to one another, and those methods of conveyance have been scrambled radically. We are still naive about some of those effects.

    *So, what's reality? I'm inclined to say that "history" would exist if Homo sapiens had never existed, and that there are potential forms of "consciousness" that aren't human. But, whatever those real things may be, we human beings never fully conquered metaphysics with ink on paper. Now we're losing ink on paper. So, why do we still pretend that our expressions about these things are stable, or timeless? They're no more stable than the artifacts by which we learn about them and promulgate them.

    *Ancient Egyptians had a "historical consciousness," but there were centuries when their hieroglyph writings were in full view, and no one had the least idea what they were saying. So Egyptian historical consciousness is not permanent, it's a very historically-contingent thing; sometimes it's there, and far more often it isn't.

    renata - Atemporality as 'a problem in the philosophy of history’, according to your definition: is it a subjective experience dependent on technologically-mediated grounds of perception, or is it an objective, all-encompassing dimension which has real existence outside perception? In any case, how do you see time in relation to technology, more specifically time in relation to social technologies?

    bruces - *I love that question. "Is reality really atemporal?" Being a science fiction writer, I always like to collect suggestions that space-time is not as we expect.

    *I wouldn't be so arrogant as to say that we human beings grasp the "objective, all-encompassing dimension that has real existence outside perception." Just for one instance: if matter and energy as we experience it is just four percent of a universe that is ninety-six percent Dark Matter and Dark Energy (as modern cosmology suggests), does it really behoove us to swan around making a lot of absolutist declarations about our subjective experiences? Maybe a proper metaphysical modesty is in order here.

    *With that said, I think that the Second Law of Thermodynamics is as firm a "law" as mankind is going to encounter. The passage of time is not a suggestion; time really passes, the days of your mortal lifetime do not return once they pass. If the passage of time was somehow arbitrary, then one would expect to see measurable effects on everyday physics, such as eggs unscrambling themselves, flowing water running uphill, and so forth. I frankly don't expect to ever witness even one of those. Atemporality is about our human, cultural apprehensions and expectations of time; it doesn't refute the laws of cause and effect.

    renata - Is the atemporal the realm of extreme multi-temporality or the realm of extreme connection via social media?

    bruces - *They're by no means "extreme" compared to what's coming. We just valorize them because they are part of our own unique experience nowadays. One tires of this corny new-media rhetoric when things are always named "extreme, mega, hyper, ultra." Of course they are extreme, but not for long.

    renata - You also say that network culture 'really changes the narrative, and the organized presentations of history in a way that history cannot recover from [...] it means the end of post-modernism'. How is it different from, or how does it relate to, Jean-François Lyotard's The Postmodern Condition?

    bruces - *Well, try to imagine a world where "atemporality" comes first, and then Lyotard writes "The Postmodern Condition." Culture wouldn't work that way, it's not possible.



    renata - You have written extensively on the New Aesthetics. Are there any atemporal attributes embedded into this movement? Is NA the aesthetics of atemporality?

    bruces - *Having seen many examples of the New Aesthetics, I feel confident now that there is a worldview waiting *beyond* atemporality. I said that atemporality was a temporary cultural point of view that would last about a decade. I still don't quite know what comes next, but I feel confident that my judgement there is about right. In the year 2022, "atemporality" will look-and-feel visibly old-fashioned. "Network society" will also be transformed. Not that it is refuted, or "wrong" — it's just that people will feel, "yes, life was indeed like that for a while, but then something else important happened, and now things look and feel quite different in some specific, identifiable way."

    renata - Is design an atemporal practice? Is there a specific kind of design practice which is conducive to atemporality?

    bruces - *Yes, I'd say that the Modernist search for timeless design solutions is squarely opposed to temporality. So is the cultural conservatism of Arts and Crafts design. Atemporal design is marked by contemporary practices like mash-ups, collective intelligence, peer-to-peer production, re-usable software components, "favela chic" — I could go on, and I suppose that I will have to.

    bruces

    p.s. dear bruces, please do go on. best, renata
    Tue, Oct 16, 2012  Permanent link
    Categories: atemporal, post-modern, social networks, design
    Sent to project: Polytopia







    conditiones sine quibus non identitas indiscernibilium

    (on the technological engineering of the vanity and suffering of life)



    There are many distinct levels of suffering within our human experience. There are objective and subjective realms of suffering. There is physical, measurable pain; but there is also psychological, subjective pain. Much in the same way that our bodies are endowed with neural networks that respond to sensory impulses according to a biological configuration predetermined by our genetic traits, decoding such impulses into “pleasure” or “pain”, we can also say that our emotional and intellectual bodies are endowed with certain specificities that were somehow predetermined by the sum of our life experiences. As a consequence of exposure to different life experiences, many of us develop psychological mechanisms to find pleasure in pain, or to find pain in pleasure.

    Suffering is, in itself, a perceptual experience. It exists only in relation to our perception, so the way we choose to perceive and decode an experience is what will determine its category. It is a fact that the human experience of happiness or suffering can often be detached from the factual conditions of an event. Brazilians are a prime example of this: even amidst social conditions of brutal poverty, injustice and violence, many individuals are somehow capable of finding joy in their daily struggles. Some cultures seem to be prone to a joyful take on suffering as an integral part of the magnificence of life... If life is unavoidable suffering, let’s embrace it as it is and celebrate anyway!

    Another very interesting case of manipulation of our perception of suffering is the use of antidepressants. Instead of a tropical, cultural deactivation of the experience of suffering by refusing to be depressed by its pain (the Brazilians), Americans seem to have found a biotechnological way to avoid the suffering of life: Prozac. According to the American pharmaceutical way of life, all suffering is unnecessary. How bad does it hurt? A little? Take one pill. Really bad? Take three pills. No matter what your life circumstances are, eventually you will get numbed and/or excited, and a smile will come back to your face.

    But even this artificial pharmaceutical happiness could be seen, in Schopenhauer, as part of our conditiones sine quibus non. Even this sort of perceptual manipulation of the experience of suffering will soon come to an end, for when the intensity of pain is great enough, no mechanism of avoidance is able to numb it, be it through biochemical means or cultural means. And there are intense pains to be found everywhere in our human experience, especially in death – be it in the death of our beloveds or in our own… So the unavoidability of suffering is indeed a fact of life, but is there a way out of it?

    I seem to find an exit sign that is placed on my own identitas indiscernibilium.

    Renata Lemos
    Saas Fee, 2010
    Sat, Aug 21, 2010  Permanent link



    LEVELS OF CONVERGENCE
    by Renata Lemos, Lucia Santaella

    Part 3 - Information and Meaning; Semiosis and Matter

    4 Information vs. Meaning

    Information pervades every level of reality, be it material, mental or emotional. It can be considered as being embodied in any type of pattern that can be perceived, interpreted and transformed into other patterns (LIP, 2005). Patterns are only patterns in relation to an observer. While symbolists focus on the observer and on interpretation, connectionists focus on information structures, processes and dynamics. The cognitive hybrid interface between neuron and nano artifact is informational at its core. When it comes to hybrid systems of intelligence, the focus shifts toward the interface connecting the symbolic level of the first person to the material level of information processes, be they biological, cognitive or digital.
    Informational Realism and the theory of LoA (Levels of Abstraction) inform us that reality might be understood in terms of information structures (Floridi, 2007a). Epistemic Structural Realism (ESR) and Ontic Structural Realism (OSR) both refer to informational structures as the main instruments of knowledge acquisition. ESR implies that only through observing the informational interfaces between structures and systems can we understand reality. OSR, by the same token, states that we can only reach the essence of a given object via its informational structure. According to OSR, all structures are informational. Therefore, reality is about structure and structure is about information.

    However, Informational Realism does not account either for meaning or interpretation. The analysis made by philosophy of information is strictly constructionist, and could be aligned with the connectionist approach to mind. It is centered on the material aspects of reality: matter is information, so all information is material. According to philosophy of information, mind is an – informational – property of an – informational – system. It follows that if matter is information and all information is material – mind would also be material (?). This approach seems to fall into Dembsky´s category of soft-materialism, representing a new kind of soft-reductionism based on Informational Realism. Meaning is left out of this equation because it cannot be reduced to an object with an external independent existence. Meaning is always formulated by a first person. Meaning is always relative to a first person.

    The interface between matter and mind has data, information and meaning as its main elements. Mind only achieves knowledge (meaning) through the processing of information, information only gets to mind through perception, and perception interprets data in order to deliver information to the mind. As we can observe in Nitecki´s (1993) elucidative representation in Fig. 01, what connects matter to mind is a continuous flow of data, information and knowledge (meaning), where D = Datum, i = Infoscript, K = Knowledge.
    [Fig. 01: Continuity between Matter and Mind (Nitecki, 1993)]
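    Read schematically (this is only an arrow-form restatement of the sentence above, assuming the D-i-K legend maps onto datum, infoscript and knowledge; it is not a reproduction of Nitecki's figure), the flow runs:

    $$\text{Matter} \;\xrightarrow{\;D\ (\text{datum})\;}\; \text{Perception} \;\xrightarrow{\;i\ (\text{infoscript})\;}\; \text{Mind} \;\xrightarrow{\;\text{processing}\;}\; K\ (\text{knowledge, i.e. meaning})$$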

    The flow of information, being intrinsically connected to the flow of knowledge, is nevertheless not responsible for it. So while information is intrinsic to intelligence, it does not account for intelligence. While meaning is always achieved through information, it is not reducible to information. The concept of infosphere does not encompass the dimension of meaning. Theories of information, however useful for the study of information processes, are not sufficient for the study of meaning. Having thus recognized the limitations of mathematical theories of information such as Shannon´s, and also of philosophy of information, in the analysis of hybrid cognitive interfaces between mind and matter, we move on to exploring wider theoretical perspectives.

    5 Semiosis & Meaning

    The concept of semiosis was developed by C. S. Peirce in the context of his semiotics, the general theory of signs, where it was defined as follows:
    All dynamical action, or action of brute force, physical or psychical, either takes place between two subjects [whether they react equally upon each other, or one is agent and the other patient, entirely or partially] or at any rate is a resultant of such actions between pairs. But by 'semiosis' I mean, on the contrary, an action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between pairs (CP 5.484).

    Semiosis, the action of the sign, is the action of being interpreted in another sign. Perception is the door through which signs reach the mind, being transformed into meaning by means of the translation of one sign into another. This movement of the sign as it goes from perception to interpretation is implied in semiosis. Although it is possible to visualize the mechanisms of perception, it is not so easy to visualize semiosis. While perception is about recognizing patterns of information, semiosis is about the symbolic meaning which will be attributed to them. There is no semiosis without transformation. Pattern recognition is transformed into meaning through semiosis – however, exactly where and how does this happen?

    Santaella (1998, p. 22) calls this question “the problem of perception”: it goes beyond the mere reproduction and copying of patterns of information; it is mainly about continuous interpretation.

    As the interpreting process does not necessarily imply its embodiment in a human mind but may be performed by any subject with the capability of translating one sign or any signal into another, the concept of semiosis was incorporated by biologists. For them, semiosis can help to answer several questions in biology, especially those concerning interpretation and meaning, with which the quantitatively oriented mathematical theory of information cannot cope (Emmeche, 1991; Emmeche and Hoffmeyer, 1991; Hoffmeyer and Emmeche, 1991, 1999). Hence, semiosis fills the gap between information and meaning, by encompassing the first person and also intentionality. Brier (2006) describes semiosis in living systems in the following way:

    Molecules are composed of sequences of atoms and make three-dimensional shapes. They interact informationally through formal causality. Macromolecules are composed of minor molecules often put in sequences. Cells interpret the molecules as coded signs and interact with them through final causation in semiosis (Brier, 2006, p. 35).

    Concerned with the relations between life and meaning and the symbolic structures of living semiotic systems, biosemiotics considers that “the evolution of life is not only based on physical, chemical and even informational processes but on the development of semiotic possibilities” (Brier, 2006, p.35). Kull (1998) states that semiosis:

    ...could be defined as the appearance of a connection between things, which do not have a priori anything in common, in the sense that they do not interact or convert each other through direct physical or chemical processes. However, as far as the relation between them, once established (by a subject), is nevertheless intermediated by physical or chemical processes, this infers that the relation is semiotic as long as it is established through learning (Kull, 1998, p. 6).

    In sum: semiosis is the general technical term to cover the semantic field of terms such as intelligence, mind, thought – which can no longer be considered as privileges of the human kind. Whenever there may be a tendency to learn, toward self-correction processes, changes of habit, wherever there may be goal directed actions, there will be intelligence, wherever it may occur: in the pollen-grain which fertilizes the ovule of a plant, in the flight of a bird, in the immunological system, or in human reason. Thus it is that semiosis has to be understood side by side with concepts such as morphogenesis, teleonomy, autopoiesis, dissipative structures, self-organizing systems, as well as with the contemporary cybernetic concepts which have been studied by the new discipline of cybersemiotics.

    In that context, hybrid mind-matter interfaces denote a progressive convergence between biosemiotics and cybersemiotics, given that biological decoding engenders AI´s binary coding. Whenever biology is the object of research, we must decode: living structures must be decoded in order to be understood, manipulated and replicated. Whenever digital technology is the object at hand, we must codify: digital structures must be codified in order to come into existence as functional systems. The deeper we decode biology and the better we codify digital technology, the closer we move to a single underlying code, which gives rise to new levels of semiosis. It is possible to infer that biology is also digital in its essence:
    Information technologies have been considered ... as extensions of man. However, the transformation of the human body has consequences also on the cultural human environment. Under these premises, human beings are seen as part of a complex system of natural and artificial messages that function on a digital basis. In this sense the human body can be seen as data (EGE, 2005, p.27).

    It is therefore possible to understand hybrid interfaces in terms of the interplay between data, text and messages. An organism would be a genetic message composed of DNA text, which is translated into a body. A digital artifact would be a technological message composed of software text, which is translated into a functional structure. Be it digital or not, semiosis is continuous translation that travels across multiple levels of reality. Semiosis is material and symbolic at the same time. It encompasses simultaneously a material vehicle and a symbolic meaning. Semiosis carries the force of evolution, the force of movement and change. On many different levels of reality, one can find semiosis at work: it is through semiosis that trees experience growth, concepts experience development, systems experience evolution. There are layers of sign action permeating material systems.

    According to Kull (1998, p. 4), semiosis is “a process of translation, which makes a copy of a text, suitable to replace the original text in some situations, but which is also so different from the original text that the original cannot be used (either spatially, or temporally, or due to the differences in text-carrier or language) for the same functions”. Semiosis is what happens at hybrid cognitive interfaces between matter and mind: a back-and-forth dynamic of digital translation and symbolic interpretation from one level to another.

    Peirce was the first to develop the notion of a naturalist semiotics, which considers the universe to be perfused with signs. This universal nature of semiosis has a lot to say about the merger between the fields of biology, cognition and artificial intelligence (Santaella, 2004). This semiotic merger occurs around the concept of effete mind, a concept to be understood in the context of Peirce´s synechism.

    6 Peirce: Continuity & Effete Mind

    Synechism is defined as “that tendency of philosophical thought which insists upon the idea of continuity as of prime importance in philosophy”. The continuum, in its turn, is defined as “something whose possibilities of determination no multitude of individuals can exhaust” (CP 6.169-170). A rudimentary form of continuity is generality, since continuity is nothing but perfect generality of a law of relationship (CP 6.172). Besides the development of his synechistic ideas, Peirce also gave ample thought to tychism, or absolute chance. The latter was proposed because he considered mechanistic and deterministic explanation insufficient in the light of his doctrine of categories (Santaella, 2001).

    Given a choice between Cartesian dualism and some variety of monism, for Peirce, philosophy must adopt the latter. There are three possible directions in which monism can be developed: (a) neutralism, which takes physical and psychical laws as independent of each other and stemming from some third Urstoff; (b) materialism, which takes the psychical laws to be derived from the physical; and (c) idealism, which takes the physical as derived from the psychical. Occam’s razor guided Peirce against neutralism, while the first principle of scientific thought, that is, do not resort to the ultimate and inexplicable as an explanation (CP 6.24), guided him against materialism. Objective idealism is the only rational alternative: matter is effete mind. The main interpretation of the concept of effete mind associates it with living matter (Mladenov, 2003). Peirce, however, never restricted this notion to a particular kind of matter. Universal semiosis is implicit in his principle of continuity (Rosa, 2003), which is the basis of his doctrine of synechism.

    If matter is effete mind, and physical laws are derived from psychical ones, then there is only one kind of stuff in the universe, and that is mind; the great law of the universe is the law of mind. What is the law of mind? It is the tendency to generalize and to form associations, which is also the tendency to form habits, itself a habit (CP 6.612). What Peirce found in nature and in thought is a general tendency of possibilities or chance events to turn into sequences of events that coalesce by taking habits. This is relational generality, from which dynamism and growth are generated. The prototype of this tendency is in the human mind, in the way ideas are associated in our minds, which is analogous to the probabilistic laws of nature (Hulswit, 2000; Ency, p.7).

    Hence, his monism on mind or objective idealism is not just an inversion of the physicalist conception of mind according to which mental states are simply physical states. What Peirce asserted is that all of reality, in an infinite series of differentiations, is governed by the law of mind. He did not mean that matter has the substance of mind, neither “substance” in the old sense of a thing nor in the modern chemical sense.

    Objective idealism transcends Plato´s duality between matter and form by interpreting matter as a product of an all-encompassing Mind (Hegel, Schelling, Fichte). The greatest objective idealist was Hegel, who saw reality as an expression of a continuous Absolute, in which there can be no true separation between levels. Hegel described the Absolute in terms of an underlying unity made of continuous movement and change. Hegel´s concept of the Absolute is implied in the contemporary idea of the semiosphere:

    ...all semiotic space may be regarded as a unified mechanism (if not organism). In this case, primacy does not lie in one or another sign, but in the “greater system”, namely the semiosphere. The semiosphere is that same semiotic space, outside of which semiosis itself cannot exist (Lotman 2005, p.208).

    Lotman´s “greater system”, the semiosphere, is absolute continuity from one level of reality to another. There is a certain correspondence between Peirce´s principle of continuity and the continuous state of change characterizing the Hegelian (Absolute) unity of matter and mind within a semiosphere. Peirce´s law of mind, being based on synechism (continuity) and tychism (absolute chance), also shows a clear resemblance to contemporary scientific quantum theories.

    Nicolescu (2005) has pointed to striking correspondences between Peircean concepts and quantum physics: Peirce´s concept of Primacy relates to quantum events; tychism relates to non-determinism and quantum mechanics; the idea of continuity relates to bootstrap theory; Peirce´s atomic theory relates to string theory. Also, according to Nicolescu (2005), there are other correspondences between classical physics and Peircean concepts, such as time-space continuum corresponding to Peirce´s category of Secondness, and cosmic evolution corresponding to Peirce´s concepts of Thirdness and Final Causation.

    Other correspondences are found in Nicolescu´s Logic of the Included Middle (Nicolescu, 2001), which unites physical levels of reality (in this context seen as data, information) with ideal levels of abstraction (symbolic meaning). According to the Logic of the Included Middle, in every relation involving two separate levels of experience, there is a hidden third that belongs simultaneously to both. If, in accordance with the Logic of the Included Middle, there is a converging point T between A and non-A (Nicolescu, 2001), then the point connecting infosphere to semiosphere, digital to non-digital, and human intelligence to AI seems to be hidden in the third realm of continuous semiosis. Complexity is the context in which continuous semiosis takes place, enabling convergence among levels.
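    As a rough formal sketch (our own shorthand, not Nicolescu's notation): for a pair of contradictories A and non-A situated at one level of reality L1, the axiom of the included middle posits a third term T, situated at a different level L2, which belongs simultaneously to both:

    $$A,\ \neg A \in L_{1} \;\Longrightarrow\; \exists\, T \in L_{2},\ L_{2} \neq L_{1},\ \text{such that}\ T \equiv A \ \text{and}\ T \equiv \neg A$$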

    The Logic of the Included Middle points to a principle of continuity between multiple levels of reality and to the underlying level of convergence between them. The digital fluidity of virtual worlds, where information configures multiple sensory realities through one and the same binary code, is a powerful metaphor for the physical fluidity of the material world, where one and the same dynamic polarity of quantum wave and particle configures multiple bodies, shapes and environments.
    Complexity can be found in all these levels and inter-relations. Information technologies act as a platform and a pertinent metaphor for the Logic of the Included Middle: the perceptual dematerialization of reality is not only physical, but also digital. Perception of multiple levels of reality occurs in quantum space as much as in cyberspace and nanospace. Peirce´s idea of continuous semiosis deeply resonates with Nicolescu´s idea of continuity among multiple levels of reality, especially concerning continuity between mind and matter.

    7 Quantum Approaches

    Hybrid cognitive interfaces are complex and function as non-linear systems, given that they are mediated by converging technologies. In fact, hybrid interfaces bring about a conceptual revolution (Thagard, 1992) which is very similar to complexity, because both portray matter as information flow. The unity of matter in NBIC is made possible by nanotechnology´s mapping of informational flux, which is then re-configured by nanodevices. The flow of quanta can also be interpreted as a type of information flow. This is not only a metaphor, because if a flow of information is always material in its embodiment, and if there is a quantum level in all forms of matter, then we might deduce that information lies at the very quantum heart of matter. NBIC convergence only makes it more evident.

    Hybrid interfaces connecting nano artifacts to neural networks are open gates between mind and matter, and are platforms of complex digital interaction within biological (neural) informational processing networks. Artificial intelligence within converging technologies is mostly based on non-linear computation (Bernstein et al., 2006). Complexity is omnipresent in the semiotic integration of hybrid interfaces. Complex properties such as uncertainty are found in all levels of matter. Connectionists believe that mind is an emergent property of biological complexity (Searle, 2002; Penrose, 1994), and that all processes of evolution are a consequence of the increase of complexity among various layers of information (Kurzweil, 2005; Minsky, 1990).

    Non-linear principles of computation bring about yet another approach to the mind-matter relation. We have described in this paper the perspectives of pan-informationalism (an all encompassing infosphere) and of pan-semioticism (an all encompassing semiosphere). Both of them represent new ontologies. There is yet another perspective, however, that quantum scientists such as David Deutsch (2003) and Seth Lloyd (2006) are currently developing, which is the view of pan-computationalism (Dodig-Crnkovic, 2006) - an all encompassing universal matrix. According to this view, reality is established by the continuous and complex universal processing of information.

    David Deutsch states that “the world is made of qubits” (Deutsch, 2003, p.13). Deutsch´s It from Qubit hypothesis (Deutsch, 2003) is a quantum computational version of Wheeler´s It from Bit hypothesis (Wheeler, 1990). Wheeler is a pan-informationalist; Deutsch is a pan-computationalist. Seth Lloyd (2006) is also a pan-computationalist. His approach is similar to Kurzweil´s (2005), in that he stands in favor of Strong AI and believes that, in theory, anything already is or could become a quantum computer. Quantum computation would make hybrid interfaces between mind and matter even more pervasive and porous than any kind of nanotechnology could. In the context of hybrid cognitive interfaces, the evolution of quantum computation would represent a shift from mediated non-linear dynamics within nano artifacts to direct quantum programming within hybrid cognitive structures.

    8 Conclusion

    The essence of reality lies hidden in the intersection of life and its material platform, of intelligence and its vehicle; of mind and its material embodiment. NBIC convergence begins to approach this complex interface. We are witnessing the emergence of a technontology (Lemos et al., 2007), which within the context of transdisciplinarity (Nicolescu, 2001), operates a new kind of ontic convergence.

    We are unveiling the hidden codes and structures of matter; penetrating underneath the surface of what seems solid and finding out how fluid matter really is. Nature seems more and more to operate according to an ensemble of codes, much like the codes that enable computer technologies. Nature is literally a “system of systems”, and so we find ourselves as “systems within systems” (Bunge, 2003). We can only approach levels in relation to systems, be they material or conceptual. It is the nature of the code which gives structure to all systems and determines the boundaries of each level of reality.

    Previous boundaries between levels of reality become permeable, thus the lines separating levels of reality begin to blur. The only line of separation which seems to remain is found in the juxtaposition of mind and matter. Mapping the interface between mind and matter is probably the greatest scientific challenge of our times. However, even the distance between cognitive levels of reality and material levels of reality becomes shorter in the context of converging technologies.

    Hybrid cognitive interfaces represent a possible new level of convergence between matter and mind. Converging technologies act as a bridge between natural and artificial systems of information processing. Such cognitive integration is technological and is happening on many levels simultaneously, as can be seen, for example, in the interactive behavioral patterns of populations in digital virtual worlds (Ascott, 2003); in the “intelligentification” of objects through RFID technologies (Floridi, 2002); in biologically-inspired nano robotic cognitive architectures (Bernstein et al., 2006); and finally in the possibility of direct quantum computation and programming within all kinds of material structures (Lloyd, 2006).

    Technological convergence is taking place at an accelerating speed (Kurzweil, 2005) and it is changing our inner and outer landscapes. The concepts of information and meaning are omnipresent in this process. Convergence happens through a process of simultaneous coding and decoding. The evolution of AI is of particular importance in this context, because through the engineering of artificial systems that can act according to rational principles, a new kind of reductionism appears. Intelligence could a priori be reduced to a computational capacity, resulting in the controversy around the possibility of Strong AI. Introducing AI elements into nano neural applications is the seed of hybrid forms of intelligence, which would be mediated or established through NBIC integration.

    Intelligent artificial interference within biological systems establishes a principle of trans-interoperability between organic and digital levels of reality. Trans-interoperability between biological and non-biological systems, in order to perform a common task or function, allows communication and therefore enables active interference across levels. Trans-interoperability expresses technologically what Peirce´s principle of continuity has expressed ontologically through universal semiosis.

    Peirce´s universal semiotics, together with Nicolescu´s Logic of the Included Middle, emerges as a crucial theoretical standpoint in the study of the new hybrid interfaces between mind and matter. Peirce´s law of mind represents to the study of mind what quantum theories represent to the study of matter: a dramatic upheaval and a serious challenge to the materialistic conceptions which are the basis of reductionism.
    Mon, May 31, 2010  Permanent link

    Sent to project: Polytopia





    LEVELS OF CONVERGENCE
    by Renata Lemos, Lucia Santaella

    Part 2 - Mind and Matter; Intelligence and Life


    Converging technologies are indeed getting inside the human brain, in applications that are deeply integrated with cognitive systems. However, this does not mean that any significant change in the inner levels of human consciousness is likely to occur. The terms and expressions used by Ray Kurzweil are packed with metaphors comparing the human mind to a computer. Software of intelligence. Reverse-engineering the human brain. It is clear, given Kurzweil´s terminology, that his approach is based on materialism and reductionism. Intelligence is some kind of biological software that will be replicated once we reverse-engineer the human brain. According to this view, the human brain is a biological computer. It follows that if consciousness is a property of a biological computer, then any other kind of computer able to fully replicate the functioning of the brain could be capable of consciousness.

    Those who believe in this possibility are defenders of a technology-based new era of evolution: machines will not only be able to replicate all human qualities, but will merge with humans and generate a new species of super-intelligent beings. This merger, together with the exponential speed of technological advancement, will eventually alter the very nature of reality, resulting in a technological Singularity (Kurzweil, 2005).

    There are many advocates of Strong AI. Marvin Minsky (1990) and Ray Kurzweil (1999, 2005, 2006) stand as two of its most prominent representatives. According to John Searle (1980), Strong AI refers to “the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition” (Searle, 1980, p. 417). There are two main ways in which AI could achieve this goal. The first is based on programming that tries to represent the symbolic structures of human minds; the second is based on the study and artificial replication of neural networks within the brain.

    Minsky (1990) calls the effort to achieve Strong AI through symbolic research the “top-down” approach, and the effort to achieve it through connectionist research the “bottom-up” approach. The first strategy would depend highly on interpretation, context and self. The second depends on nothing but decoding the functions of neural networks and programming artificial ones. Therefore, the symbolic approach in AI has been vanishing, while the connectionist approach has continued to prosper.

    Bringing attention to the symbolic limitations of AI, Searle (2002) has compared the Chinese Room Argument to what happened with Deep Blue. When beating Kasparov, Deep Blue was not playing chess, because the concept of chess has symbolic layers of meaning attached to it. A computer couldn´t possibly have access to the symbolic level of a chess game given that “the symbols in the computer mean nothing at all to the computer” (Searle, 2002). So while Kasparov had an understanding of chess based on its symbolic meaning, Deep Blue was merely performing a function which was programmed to arrive at decisions based on calculations regarding possibilities.

    Searle´s view represents the Weak AI approach, which relies on the uniqueness of our aesthetic, religious, philosophical and deep symbolic/archetypical levels to rebut the possibility of almighty programming and nano-engineering. Roger Penrose (1989, 1994) and William Dembsky (2002a, 2002b, 2007) are also defenders of Weak AI, although in different manners.

    The controversy surrounding this debate is so wide that even within the same approach there are important epistemological differences. Searle and Penrose are both connectionists, and would represent the equivalent, in neuroscience, to proponents of the “bottom-up” approach in AI. So while Searle and Penrose both stand up against the possibility that machines could fully replicate mind, they seem to believe that consciousness is an emergent property of biological neural networks within the brain.

    According to this view, the mind is a property of a biological system. The main argument against Strong AI would then be that only biological systems can possess emergent properties of consciousness. Connectionists understand that consciousness is a property of a certain level of biological complexity. Strong AI proponents understand that once computation achieves this certain level of complexity, artificial consciousness will emerge. There are similarities in both approaches.

    Dembsky, on the other hand, believes that reducing consciousness to a complex property of a biological neural network would be equivalent to the reductionism practiced by proponents of Strong AI. He states that “...nothing I've seen to date leads me to believe that intelligence can properly be subsumed under complexity or computation” (Dembsky, 2002a). In Dembsky´s perspective, wherever there is a first person, there is a non-reducible entity. The uniqueness of this subjective first person cannot be artificially replicated. Mind cannot be a property of matter, according to Dembsky (2007), because all properties of matter would have to be material, since they come from matter in the first place. David Jakobsen (2005) has commented on the differences between the approaches of Kurzweil, Searle and Dembsky:

    Ray Kurzweil’s strongest argument ... is to point out the arbitrariness present in the distinctions of John Searle between silicon and biology. Thus the question is thrown back into another domain – the old mind/matter debate. A debate where the physicalist has the upper hand these days and views like the one of William Dembski can be defeated by calling it old fashioned (Jakobsen, 2005).

    The main difference between symbolists (such as Dembsky) and connectionists (such as Kurzweil and Minsky) is that the first approach is centered on levels of meaning, and the second is centered on levels of information. NBIC convergence adds something to this debate. Converging technologies might change the grounds of this debate by enabling direct interference from artificial intelligent agents within the systems underlying conscious states of a first person, in Dembsky´s sense.



    The main focus of the debate might change from determining the possibility of Strong AI to establishing the possibility of hybrid forms of intelligence, whose sense of self-awareness is either established through or mediated by artificial agents. AI is a product of the biological evolution of human intelligence; however, through NBIC convergence it will most certainly enhance human intelligence in a new sort of hybrid bio-technological evolutionary process.
    Consciousness remains grounded in and limited to a biological platform; however, cognitive nano applications have the potential to artificially enhance and alter conscious states. Given the fact that these nano agents are endowed with artificial degrees of intelligence, a principle of hybridization is directly established between mental processes and artificial intelligence. This hybrid interface would simultaneously pervade mind and matter.

    The ways in which nano artifacts and neural cells interact are informational. A shared continuum of information and meaning thus represents the framework in which structures of hybrid systems of intelligence could be formed. All cognitive and mental processes have to do with the processing of information and the attribution of meaning. Consciousness is always about perception, perception is always about interpretation, and interpretation always refers to information. Matter is not only a vehicle of information, but also, in itself, embodies physical patterns of information. Within the context of NBIC convergence, the informational nature of reality becomes evident (Floridi, 2007a).

    The recent rise of soft-materialism (Dembsky, 2007) is explained by the emergence of a revised materialism based on information. According to the soft-materialist view, if we can decode reality, we can recode ourselves. And since mind has an informational relation to matter, it follows that decoding matter will eventually lead us into mind. NBIC convergence is being heralded as the knight-in-shining-armor that will lead us in the conquest of mind by unlocking all the “programming” secrets of matter. This is the “bottom-up” approach to AI.

    Beyond all the differences between these approaches stands the relationship between mind and matter, informational at its core. In this context, symbolists such as Dembsky and connectionists such as Searle, Penrose, and even Kurzweil and Minsky, find a common ground. Biological evolution could possibly be converging with technological evolution: if all matter is informational in its essence, and if information determines the structural designs of matter in all its forms, then biology and technology are information-based processes which share a common semiotic nature. There seems to be a level of convergence between symbolic and material levels of reality, based on intersemiosis.





    3 Digital Levels

    There are other levels of convergence between biological and digital realities. AI is behind the development of Floridi´s philosophy of information (Floridi, 2002), which interprets NBIC technologies as forming elements of an information-based, all-encompassing environment: the infosphere. Within such an environment, permeated by intelligent processes, all beings and things acquire an informational ITentity. Philosophy of information interprets the ontological impact of AI and the “intelligentification” of external reality (Floridi, 2007b).

    Advances in RFID (radio frequency identification) technologies allow any physical object to acquire an informational identity, called ITentity by Floridi (2007b). These very small RFID tags are microchips that can be incorporated into living and non-living beings and objects, and provide Wi-Fi access to the Internet. This type of technology makes possible a new expanded hybrid network of digital and biological informational entities, one that is not restricted to any computational platform, but expands into the surrounding environment, configuring an infosphere. In this infospheric network, human consciousness relates and interacts with AI agents, forming new hybrid networks of collective intelligence. This combination between human intelligence and AI is expressed by the concept of inforg, informational organism (Floridi, 2007b).

    Assuming that by applying RFID technologies to objects it is possible to confer to each object an ITentity (and that this digital inforg possesses a certain degree of AI, being able to communicate and interact over the Net), an “intelligentification” of things occurs. Beings acquire properties of electronic devices (digital expansion of human cognition) and electronic devices acquire properties of living creatures (intelligence and communication). NBIC developments are making the boundaries between on-line and off-line, digital and non-digital, less and less clear. Floridi´s ideas point to a convergence between multiple levels of reality in terms of the convergence between online and offline: be it digital or genetic, everything is code, everything is information - and if everything is information, everything communicates. Multiple levels of reality are being digitally connected and expanded.
    The development of information technologies literally creates new levels of reality by modifying and expanding the cognitive reach of human consciousness.

    Cyberspace and Virtual Reality (VR) are digital immersion environments which can also be interpreted as parallel realities in the expression and flow of human consciousness. The complex interactions connecting AI agents and human agents modify the structure of reality itself, which seems to be constituted more and more by a technological mix between ever more integrated levels of reality. Digital becomes the common language uniting organic to non-organic.

    The digital expansion of human cognition is analyzed by Floridi (2007b) in its external aspects, such as the establishment of an infosphere. Ascott (2003) also addresses this issue; however, his analysis is centered on the internal realm of human experience, placing consciousness at the core of his research. Ascott (2003) presents the idea of convergence between levels of reality through the concept of Moist Reality: the coupling of an inorganic, digital, Dry Reality with an organic, biological, Wet Reality. He grants to cyberspace the status of a level of reality of its own. In this cyber level of reality, human cognition is augmented digitally. He calls this electronically enhanced cognition cyberception.

    Cyberception is about the convergence of new conceptual and cognitive aspects of human consciousness, triggered by the hyper connectivity of cyberspace (Ascott, 1994). The concept of Moist Reality, formed by the coupling of the "wet” dimension of biology to the “dry” dimension of digital technologies, is very close to the concept of infosphere. Ascott also identifies new forms of “artificial consciousness” emerging from these new forms of interaction between man and machine.

    Another important point of contact between Floridi and Ascott is that it is becoming more and more difficult to distinguish, in the universe as a whole, man from non-man. Hybrid cognitive interfaces between human and artificial intelligence are simultaneously internal and individual (neural) and external and collective (infosphere). The basic differences in the essence of organic and inorganic attributes start to be effaced by NBIC convergence, giving rise to a new ontological perspective of unity in diversity. This perspective is transdisciplinary and portrays converging technologies as the main element of a new philosophical ontology based on dynamics of information and meaning.
    Mon, May 31, 2010  Permanent link

    Sent to project: Polytopia












    LEVELS OF CONVERGENCE

    By Renata Lemos, Lucia Santaella.

    PART 1

    Converging Technologies


    1 Introduction

    We are living in a unique time in the history of humanity. This moment, this turning point, is unprecedented. In order to face the new possibilities of our future, it is of utmost importance to be prepared to make wise choices about how we will shape our future as a new technologically enhanced and driven species. Science has taken us this far, but complexity has shown us that science alone will not take us much further. Technology has pervaded each and every aspect of our lives, to the point at which it has gotten literally beneath our skins. Many edifices are falling apart.

    It is not only about a paradigm shift. It is about a much more profound kind of shift, one that alters dramatically our ideas, our values, our bodies, our perceptions, everything and every aspect of human life. In these times of transition, we must try to bridge the gap between disciplines, find a common language that fosters cooperation, and most importantly, open our minds to fresh and daring perspectives. As we do so, trying to catch up with the overwhelming speed of the winds of change, hopefully we will contribute to a better understanding of the challenges that lie ahead.

    Recent advances in nanotechnology, and its subsequent application in a variety of academic fields, have given rise to an unprecedented phenomenon of technological convergence. The importance of NBIC (Nano-Bio-Info-Cogno) convergence comes from the fact that “all matter – living and non-living – originates at the nanoscale. The impact of technologies controlling this realm cannot be overestimated: control of nanoscale matter is control of nature’s elements” (ETC Group, 2003, p.6).

    NBIC convergence integrates three main levels of material reality through nanotechnology: namely biology, computing, and neuroscience.
    The distinction between different levels of reality can be drawn according to various types of criteria. Poli (1998) has skillfully drawn on Chwistek, Brouwer, Husserl, Hartmann, and Luhmann to arrive at a classification of three main strata of reality, each originating different levels: social (history ↔ art ↔ law, etc.), psychological, and material (biology, chemistry, physics). In the light of NBIC convergence, however, a more fundamental dichotomy between levels starts to emerge around the dimensions of mind and matter. For the first time in the history of science, there is a technology-mediated convergence between material levels of reality and cognitive levels of human experience. The “unity of nature on the nanoscale” provides brand new grounds for this new kind of interface:

    NBIC convergence requires, and is made possible by, the radically new capabilities to understand and to manipulate matter that are associated with nanoscience and nanotechnology. The integration of technology will be based on the unity of nature at the nanoscale, as well as an information system that would cross disciplines and fields of relevance. (Bainbridge & Roco 2006, p.2).



    Unity of matter through NBIC makes possible the integration of biological and neural systems with artificial systems, including systems of Artificial Intelligence (AI). The unity of matter at the nanoscale makes possible the systemic integration of biological and non-biological entities, in order to perform a task or to enhance a human capacity, for example. Future prospects are overwhelming. These prospects are not solely related to enhancing the human species biologically. They include the enhancement of human cognitive states as well. Organism becomes artifact, and artifact becomes organism:

    Nanotechnology enables one to engineer at the nanoscale and thereby perhaps to reconfigure everything molecular. From the point of view of nanotechnology, what used to be separate domains of biomedicine, information technology, chemistry, photonics, electronics, robotics, and materials science come together in a single engineering paradigm (Nordmann, 2004, p.12).

    The main concept behind NBIC convergence is the universal dimension of information. Such an accelerating rate of human-machine hybridization brings about serious ethical and philosophical implications. New ontological questions arise from the radical concept of the technological unity of matter, such as questions regarding the very essence of being human. Understanding biological processes as codes, as sign systems, scientists are now codifying and programming their artificial replications. Decodification of micro bio-systems serves as a road map for the codification of artificial agents that will mirror and interact with them. Because it is now possible to understand matter in terms of information, scientists have become able to re-configure and re-engineer all kinds of matter through nanotechnology.

    Nanotechnology enables the integration of all material structures, be they biological or non-biological. The ways in which such total integration becomes possible are similar to the ways information flows through all kinds of systems. Much in the same way in which information pervades everything, nanotechnologies could also pervade every level of material reality. Converging technologies share four main characteristics:

    Embeddedness: invisible infrastructure which could be implemented anywhere.
    Unlimited Reach: unrestricted access to all levels of material structures.
    Engineering of Body and Mind: new interferences bridging physical and cognitive processes are enabled.
    Specificity: high precision and control.

    (Adapted from Nordmann, 2004, p.14-15)

    Of the four characteristics named above, the third is perhaps the most complex and interesting one. Converging technologies´ greatest achievement is the advancement in the development of new kinds of interface between body and mind, or between intelligence and its material platform, the brain. Matter and mind are two levels of reality that are inextricably bound together, and yet their interface remains largely uncharted.
    So far there have been many attempts at the conquest of AI, all of which fall short of coming close to replicating consciousness.

    However, when it comes to NBIC convergence, the question is a bit different. It is not about creating independent artificial minds, but about enhancing human minds artificially to an extent at which the categories of human intelligence and artificial intelligence could blend into one another. Converging technologies are making possible the emergence of hybrid forms of intelligence through the technological enhancement of our own.
    Levels of convergence among different realities begin to emerge. These levels of convergence occur at the interfaces between physical and mental states. Where one reality touches another, interacts with another - and therefore transforms another – we find convergence taking place.





    2 Neurotechnology and Artificial Intelligence

    By engineering and programming nano robots which possess certain amounts of AI, scientists are becoming able to introduce intelligent agents into all types of material and molecular structures. When these intelligent nano artifacts enter the neural networks of a brain, they become part of the conscious experience of that brain. This is the new field of neurotechnology (Khushf, 2006). Intelligent nano agents are now close to being able to interfere directly with the conscious experiences of a brain.

    An example of this kind of matter/mind hybrid interface emerging as a result of NBIC convergence is the research currently being done on Biologically-Inspired Robotic Cellular Architectures (Bernstein et al., 2006, p.134-135). Through the mapping of the neural circuits within the brain, nano artifacts are being produced that simulate the behavior of a neuron and are able to interact with and be integrated into systems of cells. In this case, the neurons are those of the visual cortex, specifically those responsible for image-formation. Nano devices are being developed that could interfere directly with image-formation within the brain.
    Our understanding of brain functioning begins to transcend biology when nano robotic neurons are taught to speak the language of biological neurons. Cognitive processes are then understood as natural language processes. If these nano agents can successfully interact with and perform the functions of biological cognitive agents on a physical level, could they also perform mental activities? This is a question that no one has yet answered satisfactorily.
    Mon, May 31, 2010  Permanent link

    Sent to project: Polytopia