Coming of age with a familial strain of futurism, I have been carrying on the tradition quite naturally. Cobbling together bits and pieces of breakthroughs and speculation, I've formed my own personalized worldview, and I've undoubtedly taken some Singularity theory with me. In spite of the close proximity between SpaceCollective and many of the ideas discussed at the Singularity Institute, no one from the "staff" has ever ventured out to a formal conference. So I paid a visit to the Singularity Summit in early October to see what I'd been missing all these years.

Given that the future is a multifaceted frontier, and that my background is in "the future of everything," I was expecting to meet a very diverse crowd, and I did. But as I continued asking questions, I realized that in this community two subjects really floated to the top: greater-than-human intelligence and immortality. Some people wore cryonics tags tucked in their shirts, exposing them occasionally to make a point; others let them glisten in the sun. There was an amazing talk by a 17-year-old Thiel Fellow that made me feel more convinced than ever that immortality would become a matter of choice.

But I followed the scent of superhuman intelligence for the most part, perhaps because it opened up discussion of the most existential predicaments. There are two generally accepted ways of discussing the emergence of greater-than-human intelligence. "Soft takeoff" describes a gradual development that may allow us to adapt as we incorporate more and more intelligence into our world. "Hard takeoff" implies the rapid creation of a runaway artificial intelligence that, at its most volatile, could lead to our extinction. For better or worse, the authorities on the subject seemed to expect a "hard takeoff."
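
To make the distinction concrete, here is a toy sketch of my own (nothing presented at the Summit): treat capability growth as dI/dt = k·I^p. With p = 1 the curve is an ordinary exponential, fast but steady enough to adapt to; with p > 1 each improvement accelerates the next, and the continuous solution blows up in finite time. The model and every parameter below are arbitrary assumptions chosen purely for illustration.

```python
# Toy illustration of "soft" vs "hard" takeoff growth curves.
# Entirely my own sketch; the model dI/dt = k * I**p and all
# parameters are assumptions, not anything from the Summit.

def takeoff(p, k=0.1, dt=0.1, steps=200, cap=1e9):
    """Euler-integrate dI/dt = k * I**p starting from I = 1.

    p = 1.0 -> exponential growth (soft takeoff): capability
               compounds at a steady, trackable rate.
    p > 1.0 -> superlinear feedback (hard takeoff): improvements
               accelerate further improvement, so the curve
               runs away in finite time.
    Returns (final level, steps taken, whether growth ran away).
    """
    level = 1.0
    for n in range(1, steps + 1):
        level += k * level**p * dt
        if level > cap:  # treat exceeding the cap as a runaway
            return level, n, True
    return level, steps, False

for p in (1.0, 2.0):
    level, n, ran_away = takeoff(p)
    label = "hard (runaway)" if ran_away else "soft (gradual)"
    print(f"p = {p}: {label} after {n} steps, level {level:.3g}")
```

The point of the sketch is only that the two scenarios differ in kind, not just in speed: the p = 1 curve stays legible for the whole simulation, while the p = 2 curve looks unremarkable for a long stretch and then escapes any fixed bound within a handful of steps.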

One of these authorities is Eliezer Yudkowsky, founder of the extremely popular "Less Wrong" community and a current research fellow at the Singularity Institute. According to the website, Less Wrong is a forum where

"users aim to develop accurate predictive models of the world, and change their mind when they find evidence disconfirming those models, instead of being able to explain anything."

In other words, the Less Wrong community strives to help you realize that you are biased about a lot of things, including the common misconception that AI will not pose a serious threat to humanity.

Luke Muehlhauser, the executive director of the Singularity Institute, explains that this misconception is due largely to "the availability heuristic": we tend to estimate probabilities based on what is most available to our memory. In "Not Built to Think about AI" he writes:

    "The availability heuristic also explains why people think flying is more dangerous than driving when the opposite is true: a plane crash is more vivid and is reported widely when it happens, so it’s more available to one’s memory, and the brain tricks itself into thinking the event’s availability indicates its probability."

During his talk at the Summit, Muehlhauser explained that we have optimized the world to serve our very narrow set of interests, and that an AI is far more likely to serve purposes outside those narrow interests than within them. As a result, "Almost all the mind designs would steer AI where we don't want to go."

He called upon the most intelligent mathematical minds in the crowd to join him in solving the difficult math problems required to build a "friendly AI." But with all the attention concentrated on artificial superintelligence, it was only logical that someone in the crowd would ask how biology might fit into this paradigm. The answer was:

    "biological cognitive enhancement is a growing trend and an important one, but I think in the end anything that's tied to the biological system of the brain is going to fall behind the purely artificial mind architectures because somewhere in the loop there is still all this slow neuronal firing spaghetti code nonsense that evolution created, that sort of works but is totally non-optimal. So I think that biological cognitive enhancement won't be able to keep up at a some point with purely artificial systems."

Further still, if you ask his colleague Eliezer how biological systems fit into this equation, he might answer:

"the AI does not love you, nor does it hate you, but you are made of atoms it can use for something else"

I'm willing to consider all possible futures, just because it's more fun than limiting your imagination, but like any human, I can't help but throw a little wrench into this scenario. In any discussion of "greater-than-human" entities there is an inherently subjective impasse; this is why anthropology doesn't work without accounting for the anthropologist's bias. No matter how much we attempt to overcome our own bias, the final evaluation of whether we've created "greater-than-human intelligence" will be up to us, simply because human intelligence is myriad and arranging it on a hierarchy is a subjective task.

As a catch-22, in my subjective opinion, nothing can have "greater-than-human intelligence" if it doesn't also have a greater-than-human tolerance for the lifeforms that gave rise to it. Disrespect for our biological ancestors and degradation of our life-supporting habitat have not necessarily served human beings well, and a greater-than-human intelligence should be able to overcome that error in judgement.

Unfortunately, my forgiving parameter was somewhat thwarted by Robin Hanson's claim that future lifeforms, whole-brain emulations specifically, just wouldn't care about nature once they migrated entirely to non-biological substrates. "We care about nature not just because we like it but because we're afraid we will die without it." To whole-brain emulations, the biological world would be obsolete, full stop. He acknowledged, though, that his thought experiment inevitably excluded some variables, and remarked that "a future world is a vast place with lots of things going on and if you really want to evaluate it on the whole you have to look at a lot of different elements of it."

Overall, the Singularity Summit brought together a fine selection of minds dedicated to thinking through the good, the bad, the ugly and the less wrong futures. On the last day of the summit I was lucky enough to find someone capable of summing up the sentiments that I'm sure at least a few human beings in the audience were feeling. I will leave him with the final word.

Mon, Nov 19, 2012
Categories: singula