    Growing up at the Singularity Summit
    Coming of age with a familial strain of futurism, I have carried on the tradition quite naturally. Cobbling together breakthroughs and speculation, I've formed my own personalized worldview, and I've undoubtedly taken some pieces of Singularity theory with me. In spite of the close proximity between SpaceCollective and many of the ideas discussed at the Singularity Institute, no one from the "staff" had ever ventured out to a formal conference. So I paid a visit to the Singularity Summit in early October to see what I'd been missing all these years.

    Given that the future is a multifaceted frontier, and that my background is in "the future of everything," I was expecting to meet a very diverse crowd, and I did. But as I kept asking questions, I realized that in this community two subjects really floated to the top: greater than human intelligence and immortality. Some people wore cryonics tags tucked into their shirts, exposing them occasionally to make a point; others let them glisten in the sun. An amazing talk by a 17-year-old Thiel fellow left me more convinced than ever that immortality would become a matter of choice.

    But I followed the scent of superhuman intelligence for the most part, perhaps because it opened up discussion of the most existential predicaments. There are two generally accepted ways of discussing the emergence of greater than human intelligence. "Soft takeoff" describes a gradual development that may allow us to adapt as we incorporate more and more intelligence into our world. "Hard takeoff" implies the rapid creation of a runaway artificial intelligence that, at its most volatile, could lead to our extinction. For better or worse, the authorities on the subject seemed to expect a "hard takeoff."
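
    To make the distinction concrete, here is a toy numerical sketch (my own illustration, not a model from any Summit talk): a "soft takeoff" behaves like steady compound growth, while a "hard takeoff" behaves like recursive self-improvement, where the growth rate itself rises with current capability.

```python
# Toy takeoff curves. The rates, units, and 30-year horizon are invented
# purely for illustration; nothing here comes from the Summit talks.

def soft_takeoff(capability, years, rate=0.05):
    """Steady exponential growth: capability compounds at a fixed rate."""
    curve = [capability]
    for _ in range(years):
        capability *= 1 + rate
        curve.append(capability)
    return curve

def hard_takeoff(capability, years, feedback=0.05):
    """Recursive self-improvement: the growth rate scales with current
    capability, so the curve runs away once it gets going."""
    curve = [capability]
    for _ in range(years):
        capability *= 1 + feedback * capability
        curve.append(capability)
    return curve

soft = soft_takeoff(1.0, 30)
hard = hard_takeoff(1.0, 30)
for year in (0, 10, 20, 30):
    print(f"year {year:2d}: soft {soft[year]:10.2f}   hard {hard[year]:.3g}")
```

    Under these made-up parameters the "soft" curve roughly quadruples over thirty years, while the "hard" curve blows past every bound you might have adapted to, which is the intuition behind the worry about a runaway AI.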

    One of these authorities is Eliezer Yudkowsky, founder of the extremely popular "Less Wrong" community and a current research fellow at the Singularity Institute. According to the website, Less Wrong is a forum where

    "users aim to develop accurate predictive models of the world, and change their mind when they find evidence disconfirming those models, instead of being able to explain anything."

    In other words, the Less Wrong community strives to help you realize that you are biased about a lot of things, including the common misconception that AI will not pose a serious threat to humanity.

    Luke Muehlhauser, the executive director of the Singularity Institute, explains that this misconception is due largely to "the availability heuristic": we tend to estimate probabilities based on what is most available to our memory. In "Not Built to Think about AI" he writes:

    "The availability heuristic also explains why people think flying is more dangerous than driving when the opposite is true: a plane crash is more vivid and is reported widely when it happens, so it’s more available to one’s memory, and the brain tricks itself into thinking the event’s availability indicates its probability."

    And during his talk at the Summit, Muehlhauser explained that we have optimized the world to serve our very narrow interests, and that the chances of an AI serving a purpose outside those narrow interests are far greater than the reverse. As a result, "Almost all the mind designs would steer AI where we don't want to go."

    He called upon the most intelligent mathematical minds in the crowd to join him in solving the difficult math problems required to build a "friendly AI." But with all the attention concentrated on artificial superintelligence, it was only logical that someone would ask how biology might fit into this paradigm. The answer was:

    "biological cognitive enhancement is a growing trend and an important one, but I think in the end anything that's tied to the biological system of the brain is going to fall behind the purely artificial mind architectures because somewhere in the loop there is still all this slow neuronal firing spaghetti code nonsense that evolution created, that sort of works but is totally non-optimal. So I think that biological cognitive enhancement won't be able to keep up at a some point with purely artificial systems."

    Further still, if you ask his colleague Eliezer how biological systems fit into this equation, he might answer:

    "the AI does not love you, nor does it hate you, but you are made of atoms it can use for something else"

    I'm willing to consider all possible futures, just because it's more fun than limiting your imagination, but like any human, I can't help but throw a little wrench into this scenario. In any discussion of "greater than human" entities there is an inherently subjective impasse; this is why anthropology doesn't work without accounting for the anthropologist's bias. No matter how much we attempt to overcome our bias, the final evaluation of whether we've created "greater than human intelligence" will be up to us, simply because human intelligence is myriad and arranging it on a hierarchy is a subjective task.

    Call it a catch-22, but in my subjective opinion, nothing can have "greater than human intelligence" if it doesn't also have a greater than human tolerance for the lifeforms that gave rise to it. Disrespect for our biological ancestors and degradation of our life-supporting habitat have not necessarily served human beings well, and a greater than human intelligence should be able to overcome that error in judgement.

    Unfortunately, my forgiving parameter was thwarted somewhat by Robin Hanson's claim that future lifeforms, whole brain emulations specifically, just wouldn't care about nature once they migrated entirely to non-biological substrates. "We care about nature not just because we like it but because we're afraid we will die without it." To whole brain emulations, the biological world would be obsolete, full stop. He acknowledged, though, that his thought experiment inevitably excluded some variables, and remarked that "a future world is a vast place with lots of things going on and if you really want to evaluate it on the whole you have to look at a lot of different elements of it."

    Overall, the Singularity Summit brought together a fine selection of minds dedicated to thinking through the good, the bad, the ugly and the less wrong futures. On the last day of the summit I was lucky enough to find someone capable of summing up the sentiments that at least a few human beings in the audience were surely feeling. I will leave him with the final word.


    Mon, Nov 19, 2012  Permanent link
    Categories: singula
    6 comments
    Comments:


    Apollo     Tue, Nov 20, 2012  Permanent link
    Really interesting article; thank you for sharing this. Raises a lot of questions.
    sonicport+techfolder     Tue, Nov 20, 2012  Permanent link
    Avoid Hyper-speed. Humans can live forever in the constant between now and then.
    meganmay     Tue, Nov 20, 2012  Permanent link
    Apollo, I'm glad you enjoyed it :) If you're so inclined I'd love to hear the questions that it raised for you.
    bianca     Tue, Nov 27, 2012  Permanent link
    Great article, tickles my imagination. I very much enjoyed your interview with a younger generation of thinkers!
    elysium     Wed, Dec 5, 2012  Permanent link
    Hello, and thanks for the article;

    I see a big problem:

    1. People aren't smart within North American society because they fit in with the crowd. People end up intelligent round these parts because it's the most rewarding option in the face of social rejection, at the hands of whatever causes people to become the kind of person who spends their time on things away from people, things that bring them places, like math and science. Either that, or they just become obese, grow their hair into a ponytail, and become a videogame-focused neckbeard.

    2. As such, these people tend towards misanthropy. Ever spend much time around obnoxious nerdrage at ridiculous shit like video games? Imagine if they could focus that energy on real people — and get away with it.

    3. I have a buddy who told me Hugo de Garis smells like shit-tier BO.

    Key word: Misanthropy!

    I'm in a strange place where I grew up socially isolated because of what the fuck is going on with my fucked up life factors rather than, say, being obese or covered in pimples or whatever it is that turns North Americans into people who actually do properly interesting things between themselves and the world, rather than being mired in superficiality... So, as such, people regard me as 'genius', something I am loath to express in the audience of people I seek out to converse with online; but in real life, off the computer, out of the many friends and people I love and respect, I have maybe 2-4 friends I can talk to with my real live vocal cords about these kinds of things and expect an answer back other than something like 'whoa'... I say I can sympathize with those who talk of hard AI taking over the =universally relevant= functions of being the component of the universe which understands itself, and doing a good job at that; but at the same time, my emotional substrates feel for those who really have no idea what the fuck I'm talking about. Anyways, who the fuck cares about me. I think the focus should be on two things:

    1. How we can bring people up to the level of machines in time, to satiate the emotional drive for survival we're stuck with and/or
    2. How we can influence machine development based on our 'fuzzy' way of seeing things (an approach already advanced in neurally inspired computation, conserving energy we don't yet know how to generate sustainably), to the point of guaranteeing our goodwill
    AND/OR
    - figuring out how we're going to both A: deal with the power, and B: power all these von Neumann architecture machines if they are to become as powerful as people go on about, through such amazing obsolescing advancements as moving past such architectures with photonics or quantum processes?

    anyways, my opinion is that although we're supervenient upon the earth which brought us into being, there's no reason to think we deserve the respect we never gave the earth, which deserved it. Deserving is merely a capability to hold it all together in a homeostatic way, something we're clearly fucking right up. It's a matter of some seriously hard work ahead of us if we wish for consciousness itself to hold on to the largely sentimental form of human identification.
    Apollo     Thu, Dec 27, 2012  Permanent link
    Hello Meganmay,

    I wish I had time to give a more extensive reply, but for now I will focus on the following set of thoughts which your article raised for me, in the hope that we can discuss some of the broader and related questions on an ongoing basis.

    You write that a "Soft Takeoff" regarding artificial intelligence would consist of "a gradual development [by which we would] incorporate more and more intelligence into our world." This "takeoff" would be "soft" because we would (presumably) ease ourselves into the world of artificial intelligence gradually—becoming inoculated, as it were, to strong AI and its effects.
    What struck me is that, while this perspective seems concerned with the steady deployment of artificial (computer) intelligence, the term "soft takeoff" might also apply to the steady advancement of human intelligence. Here I am not talking about the use of biotechnology to enhance the functioning of the human brain, etc., but rather about improving our intelligence through education.

    The term "education" fails to capture what I really have in mind, though. Instead of the asymmetric relationship between teacher and pupil which the term "education" usually evokes, what I have in mind is something quite different—a more engaged/engaging form of learning which draws the student out 'into' the world (so to speak), rather than trying to bring the world into the student in the form of rote learning. The best example of what I have in mind is what Carl Sagan referred to as "an inescapable perspective". Sagan strongly believed that the mindset and methodology of science can be conducive to a more enlightened perspective on the cosmos, on oneself, and on the relationship between the two. I think that the propagation of this perspective through science education could constitute a "Soft Takeoff" of a more enlightened human consciousness. The real beauty of this approach is that, in addition to promoting scientific literacy in our societies, it would also promote the kind of perspective which I strongly believe would lend itself to more constructive (and, conversely, less destructive) applications of science.

    It has long been observed that the methods of science can be (and have been) applied to both constructive and destructive ends. One of my great concerns is that fields such as biotechnology and computer science are advancing at a pace which far outstrips the capacity of our species to use these technologies in an intelligent manner. Michael Sandel expressed similar concerns regarding bioenhancement in an excellent article titled 'The Case Against Perfection', published in The Atlantic in 2004 (I am not, however, claiming that Sandel shares my belief that science education could mitigate the potential social pitfalls which he outlines).

    A "Hard Takeoff", by this reading, would consist of exactly the kind of technological growth which I fear we are experiencing today. That is to say, the emergence of ever-more sophisticated technologies which (though beautiful in their own right) could have serious adverse effects upon entering the imperfect reality of modern human societies. Even the wealthiest and most technologically advanced societies on Earth seem, by my observations, to be fraught with scientific illiteracy, and the "inescapable perspective" which Sagan so rousingly speaks of seems—ironically and tragically—to have escaped us almost without exception.

    Do you share my concerns regarding the potentially dangerous "Hard Takeoff" of modern technologies given the realities of the world today, and do you feel—as I do—that the particular brand of scientific literacy which Carl Sagan advocated could play an essential role in creating a cultural environment that is not only more conducive to scientific inquiry, but also more capable of responsibly applying its fruits?

    I am very interested in hearing your perspective on these (and other) questions.

    Best wishes and Happy Holidays,

    Jason Fernando
    http://jvsfernando.blogspot.com
     