Xárene Eskandar
    From Xarene's personal cargo

    "If you could clone yourself, would you?"
    ... Erin asked. "Yes," I said, without hesitation.

    We each began fantasizing the benefits of having our own second 'me': She would not be second to me, she would be a duplicate 'me', equal in every aspect. I don't want her for her organs—I can grow those individually. I want her for her mind, her abilities, all the things that make me 'me' and which I want more of. She would be in charge of tasks I only trust myself with because she would do and decide as I would. I also trust her to be meticulous in her craft and detailing. Being another me, she obviously has the same interests I do, so I can send her off to read a book which we would download later in the evening, either through physical jacks or more poetically, synching through our dreams. She could work while I party... we just hit a bump here... She's me and I'm her; no one is the boss of anyone. So we both work and party.

    The imaginary relationship works for me, but what happens when my clone learns of her mortality? We don't carry anger from knowing we die, because there is no one to be angry at or to blame. But for my clone, I am her Creator (along with the scientist cohorts who made it possible), so she does have someone to be very mad at for creating her and bestowing human finitude upon her. Is it the same anger harbored by teenagers towards their parents? Will we enter a version of Blade Runner, she and I, duking it out one rainy day? Aside from the problems that may arise between me and her on this one detail, I have a partner who would very likely leave me and me to each other and walk out. So the question shifts to "At what expense would you clone yourself?" How do we confront questions of mortality and morality? Is the second 'me' too close for comfort? Will it be confusing as to who is who and which does what? Will I fall into a self-absorbed, perfect relationship with myself?

    Probably. That's why we need robots, not clones. A clone is the same kind, a twin really, just one delayed in its conception and birth. With robots, on the other hand, we would expect a level of detachment because of the materiality of the robot, as opposed to the flesh and consciousness of our clone. But let me give you three very real examples of how that won't work either.

    Yes, Paro, the healing robot seal. I met Paro in 2005. I was petting him and gently testing his reactions when a group of 9-year-olds came running over and almost immediately began taunting and teasing him. His movements were bewildered; his cries were for help. I was distraught. Paro was not having fun, and his responses were so real that I wanted to scream "Stop!" but didn't, and just walked away. I still carry the guilt of not helping Paro...

    Paro's purpose is exactly that: to generate and foster emotions, though not the emotions I had, given the specifically cruel circumstances under which Paro and I met.

    Aiko, in pop-culture terms, aspires to be a 'skin-job'. Aiko's web of sensors beneath a soft skin can be very confusing. The confusion is that we know for a fact that what we are experiencing is not a life-form, yet somewhere between our eyes, our brain and our emotional response to what we are seeing, the information gets confused. Or we allow ourselves to be fooled, a momentary lapse into another reality. RealDolls are another such example. They are realistic-looking dolls, and though they offer none of the sensory interaction of Aiko, here is a testimonial "that says it all":

    January 10, 2010

    The reasons why I decided to buy a doll were various: I was (pretty happy) single, but once I realised this doll could really make a difference to a life of solitude, I started searching the net. I came out by Abyss... I didn't doubt anymore... made my choices and ordered a doll... Then the waiting period...

    When you are fully committed to a purchase like this, it's a long time, but the customer service is no less than perfect.

    The day she arrived I wrote the following passage to Debra and Amanda:

    "She is so much, much more beautiful than I expected from the face-picture taken on her birthday. I read testimonials, saw documentaries, etc., but it is really astonishing how this is possible. She's been here now for approx. 4 hours, and every time I walk in the room I get a little scare as if someone's really sitting there. Which means she gives me the feeling of company from the first minute, and I could never really believe that that could be possible. Maybe you remember I told you that I was afraid my cat would feel tempted to set his claws into her flesh, and you said the cat in your atelier didn't show an ounce of interest in the dolls. Well, believe it or not, from the moment Lily sat on my couch, my cat came to her and gave her little knob-heads as if she was a real person. That says it all."

    We are some days further now and I can say: it is getting better and better. The things you discover... The things you can or must do: going shopping for her, taking care of her (washing, powdering), dressing her up, moving her... kissing her, caressing her, cuddling her, lying next to her, holding her hand, brushing her wig... too much to mention :-)

    Not to mention her design and her looks. When you see her 'in person', all the pictures fulfill their expectations. In fact, no picture can capture her beauty and her sweetness. I am so happy to have her with me!

    Thanks to Abyss and to all of its staff... [ed.] Thank you for making this possible!

    In a separate conversation on this topic, soCinematic brought the Uncanny Valley to my attention: the region where extreme likeness to a human, short of being human, is met with repulsion. He supplemented it with the familiar graph of the effect as well.

    Erin pointed out that regardless of all the sci-fi narratives of humankind destroyed or enslaved by AI, we are on an inevitable path to developing these very gadgets of our demise. I would counter that we are moving towards better AI with no more determination than we are keeping ourselves in a stagnant place, making no progress in our emotional development. The problem arises when we position highly advanced (man-made) beings across from humans who have seen no significant change in the past 8000+ years, other than getting taller and fatter (though I read somewhere that we're getting shorter again).

    The core issue is our inability to emotionally cope with clones/robots/transhumans/etc., and, of course, with questions of mortality. I understand and have argued for human emotion, mainly its result, empathy. But to close the first Valley, and to jump over the second, we need to advance emotionally without losing our unique ability to empathize. Our ability to understand and connect with other beings is so important that, for example, the Japanese have a specific word for the sense of connectedness between humans (and only humans): ふれあい (fureai). This kind of specificity in language blocks our emotions from extending to fit future scenarios. Even now, it would be incorrect to use the word fureai in relation to the RealDoll, while the emotions felt by the human opposite a RealDoll can easily be extended to meet emotions expressed to a real human companion, as in the testimonial above.

    The bigger problem, aside from specificity in language and underdeveloped emotions, is that we have questions of morality, ethics and religious belief bogging down science and progress. From the above examples I gather that we have the capacity to advance emotionally. I will get to what I mean by emotional advancement and why, in our trans/posthuman pursuit, it is important to alter and enhance affect; only by achieving advanced emotional states can we advance morally, ethically and, in turn, evolutionarily. By moral and ethical advancement, I mean that moral and ethical questions must lose their grip on our decision-making process. The reason they raise questions, usually at the start of an uncharted or unconventional process, is that we cannot emotionally handle the very situations they question. We must change ourselves to meet advanced emotional states. And, in order to do so, morality must change. But what are the measures we should go by to advance (or shrink away) morality? I agree that science can answer those questions. (Mr. Harris makes good points, but he can sound imperialistic and pretentious.) However, science alone cannot be used; it is objective and morality is subjective.

    Sam Harris: Science can answer moral questions

    Morality is more of a concern now that we know life from synthetic matter is possible, as well as, possibly, life from dead matter. This debunks any basis for the existence of a God that has us functioning on fate and, in return, rewards and punishes us based on our decisions.

    But the point I want to get back to is how should we emotionally progress before any new life-forms are introduced to our midst, whether clone, robot, resurrected, or synthetic? How do we deal with human, but human of another kind? With life of another kind, we don't encounter the uncanny valley between two peaks—we start off in an endless valley of the uncanny where a human corpse may be more comforting when faced with the other.

    Beyond physical and cognitive enhancements, as well as genetic enhancements which deal with disease and appearances, the future 'me' must possess objective emotions.

    Now, I began this post at the end of March 2010 and had the bulk of it written in a day. It is July and I have been stuck in this corner, in which I have placed myself single-handedly: objective emotions. I will try to follow up in a future post on how we might achieve a balance with objective emotions. For now, the moral of this post is: "ditch morality, advance emotionally."

    Fri, Jul 23, 2010