    Now playing SpaceCollective
    From Spaceweaver's personal cargo

    Contemplating Singularity
    Project: Polytopia
    Recently I came across a very interesting article by Timothy Lenoir, which offers a fresh perspective on the concept of the Singularity and a posthumanist future.

    In the introductory note Lenoir writes:

    Most researchers agree that there is no reason in principle why we will not eventually develop conscious machines that rival or surpass human intelligence. If we are crossing to a new era of the posthuman, how have we gotten here? And how should we understand the process?

    Cultural theorists have addressed the topic of the posthuman singularity and how, if at all, humanity will cross that divide. Most scholars have focused on the rhetorical and discursive practices, the metaphors and narratives, the intermediation of scientific texts, science fiction, electronic texts, film, and other elements of the discursive field enabling the posthuman imaginary. While recognizing that posthumans, cyborgs and other tropes are technological objects as well as discursive formations, the focus has been directed less toward analyzing the material systems and processes of the technologies and more toward the narratives and ideological discourses that empower them. We speak about machines and discourses “co-constituting” one another, but in practice, we tend to favor discursive formations as preceding and to a certain extent breathing life into our machines. The most far-reaching and sustained analysis of the problems has been offered by N. Katherine Hayles in her two recent books, How We Became Posthuman and My Mother Was a Computer. Hayles considers it possible that machines and humans may someday interpenetrate. But she rejects as highly problematic, and in any case not yet proven, that the universe is fundamentally digital, the notion that a Universal Computer generates reality, a claim that is important to the positions staked out by proponents of the posthuman singularity such as Morowitz, Kurzweil, Wolfram and Moravec. For the time being, Hayles argues, human consciousness and perception are essentially analog, and indeed, she argues, currently even the world of digital computation is sandwiched between analog inputs and outputs for human interpreters. How we will become posthuman, Hayles argues, will be through interoperational feedback loops between our current mixed analog-digital reality and widening areas of digital processing. Metaphors, narratives and other interpretive linguistic modes we use for human sense-making of the world around us do the work of conditioning us to behave as if we and the world were digital.

    I propose to circumvent the issue of an apocalyptic end of the human and our replacement by a new form of Robo Sapiens by drawing upon the work of anthropologists, philosophers, language theorists, and more recently cognitive scientists shaping the results of their researches into a new argument for the co-evolution of humans and technics, specifically the technics of language and the material media of inscription practices. The general thrust of this line of thinking may best be captured in Andy Clark’s phrase, “We have always been cyborgs.” From the first “human singularity” to our present incarnation, human being has been shaped through a complicated co-evolutionary entanglement with language, technics and communicational media.


    In the article, Lenoir argues that in some very relevant and real sense, the Singularity already took place a few millennia ago, when the human brain evolved the capacity for abstract symbolic representation. This capacity has enabled culture, complex social organization, technology, and open-ended concept formation (the evolution of knowledge). Though he is not explicit about it, this argument leads to the proposition that what we witness as an acceleration towards a future Singularity and a transition into a posthuman era is only a consequence of this capacity.

    Following Lenoir's line of thought, to achieve Artificial General Intelligence (AGI) we need to find a way to endow our computing machines with an autonomous capacity for abstract symbolic representation. By autonomous I mean that this capacity would become independent of human symbolic interpretation. As formidable as our computing systems are becoming, they possess only a very rudimentary capacity for autonomous symbolic representation. This is why we need to design and program them instead of letting them learn and evolve autonomously. Most successful AI systems existing today are based on domain-specific symbolic representation, which allows them to learn within a specific and narrow domain of knowledge. Once we manage to endow machines with general, abstract (domain-independent) symbolic representation, machines will become intelligent and possibly sentient (capable of at least some level of self-representation and self-reflectivity). Such machines will be able to evolve independently, and probably much faster than their biological ancestors/creators. This seems a very plausible scenario, though far from trivial, as we still do not understand how exactly such a capacity evolved in the first place. This remains one of evolution's best kept secrets.
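The contrast drawn above can be made concrete with a toy sketch (my own illustration, not from Lenoir's article; the functions, thresholds and data here are all hypothetical): a domain-specific system uses symbols fixed in advance by its designer, while an "autonomous" system must form its own categories from the inputs it encounters, here approximated by a minimal one-dimensional k-means clustering.

```python
def designed_symbols(x):
    # Designer-supplied representation: the symbol set ("small"/"large") and
    # the rule mapping inputs to symbols are hard-coded by a human.
    return "small" if x < 10 else "large"

def emergent_symbols(data, steps=20):
    # A minimal stand-in for autonomous symbol formation: 1-D k-means with
    # two clusters. The system partitions its inputs by itself, producing
    # anonymous categories that carry no human-assigned meaning.
    centers = [min(data), max(data)]  # crude initialization of two centers
    for _ in range(steps):
        groups = [[] for _ in centers]
        for x in data:
            # assign each input to its nearest current center
            nearest = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            groups[nearest].append(x)
        # move each center to the mean of its group (keep it if the group is empty)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

data = [1, 2, 3, 50, 52, 55]
print([designed_symbols(x) for x in data])  # symbols imposed from outside
print(emergent_symbols(data))               # category centers found from the data
```

The point of the sketch is only the asymmetry: in the first function every "symbol" is interpreted by the programmer in advance, while in the second the groupings exist before anyone names them, which is the direction an autonomous capacity would have to go.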

    Indeed it seems that autonomous abstract symbolic representation is a necessary capacity of a general intelligence, biological or artificial. It is not at all clear, however, whether it is a sufficient one; whether, for example, it suffices to achieve sentience or even consciousness. I will try to address these riddles in my following posts on a new model of mind.

    It is interesting to note that from this perspective, the Singularity, understood as the emergence of artificially intelligent machines with capacities exceeding the human, is a developmental phase transition rather than an evolutionary transition, in the sense that the fundamental enabling capacity discussed above was achieved by biological systems long ago. What we may witness in a future Singularity is, if so, only the full-blown fruition of what made us distinctively human at the dawn of history.

    Read the rest of the interesting article here: Contemplating Singularity

    Mon, Aug 10, 2009  Permanent link
    Categories: posthumanism, Artificial Intelligence
    Sent to project: Polytopia
    1 comment
    Comments:


    chris arkenberg     Tue, Aug 18, 2009  Permanent link
    In a sense, you and Lenoir seem to be tugging at (or perhaps avoiding) the interdependence of language and human cognition. The co-evolution of self-awareness and linguistic structures (i.e. abstract symbolic representation) is not likely something that would simply arise in an appropriately programmed AI. Yet any sentient intelligence seems to necessarily evolve through the acquisition of, and interaction with, a representational language in order to develop a true sense of Self & Other. Furthermore, the construction & evolution of language, and the attendant construction of the self, is a social phenomenon requiring interaction, not simply good programming.

    But I'm now reiterating the great AI debate... Nevertheless, it's interesting that this article acknowledges the primacy of language in cognition and sentience.
     