Contributor to project:
Polytopia
Alexander Kruel (M, 35)
Gütersloh, DE
Immortal since Mar 10, 2009

XiXiDu Info
Transhumanist, atheist, vegetarian who's interested in science fiction, science, philosophy, math, language, consciousness, reality...
    From XiXiDu's personal cargo

    The Nature of Self

    In this post I try to work out an informal definition of the self, the "essential qualities that constitute a person's uniqueness". I assume that the most important requirement for a definition of self is time-consistency: a reliable definition of identity must allow for time-consistent self-reference, since any agent that is unable to identify itself over time will be prone to making inconsistent decisions.

    Data Loss

    Obviously most humans don't want to die, but what does that mean? What is it that humans try to preserve when they sign up for cryonics? It seems that any explanation must account for, and allow, some sort of data loss.

    The Continuity of Consciousness

    It can't be the continuity of consciousness: otherwise we would have to refuse general anesthesia due to the risk of "dying", and most of us agree that something more important than the continuity of consciousness makes us accept general anesthesia when necessary.

    Computation

    If the continuity of consciousness isn't the most important aspect of the self, then very likely the continuity of computation isn't either. Imagine that the process evoked when "we" act on our inputs under the control of an algorithm halts for a second and then continues otherwise unaffected. Would we refuse to identify with the person alive afterwards, on the grounds that we died when the computation halted? That doesn't seem to be the case.

    Static Algorithmic Descriptions

    Although we are not neatly divided into software and hardware, we could in theory come up with an algorithmic description of the human machine, of our selves. Might it be that algorithm we care about? If we were to digitize our selves we would end up with a description of our spatial parts, our self at a certain time. Yet we forget that all of us already possess such an algorithmic description of our selves, and we are already able to back it up: our DNA.

    Temporal Parts

    Admittedly our DNA is the earliest version of our selves. But if we don't care about the temporal parts of our selves, only about a static algorithmic description of a certain spatiotemporal position, then what's wrong with relying on it? Quite a lot, it seems: we would stop caring about past reifications of our selves, at some point our backups would become obsolete, and having to fall back on them would equal death. But what is it that we lost? What information do we value more than all of the previously mentioned candidates? One might think it must be our memories, the data representing what we learnt and experienced. But even if this is the case, would it be a reasonable choice?

    Identity and Memory

    Let's disregard the possibility that we often do not value our future selves, and likewise do not value our past selves, because we have lost or updated important information, e.g. if we became religious or managed to overcome religion.

    Even if we had perfect memory and only ever improved upon our past knowledge and experiences, we couldn't keep doing so for very long, at least not given our human body. The upper limit on the information that can be contained within a human body, if it were used as a perfect data store, is 2.5072178×10^38 megabytes. Given that we gather much more than 1 megabyte of information per year, it is foreseeable that if we equate our memories with our self, we'll die long before the heat death of the universe. We might overcome this by growing in size, by achieving a posthuman form; yet if we thereby also become much smarter, we'll produce and gather even more information. Nor are we alone, and resources are limited. One way or the other, we'll die rather quickly.
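    The storage limit quoted above is on the order of the Bekenstein bound, the maximum information a physical system of a given mass and radius can contain. A minimal sketch of the arithmetic (the 70 kg mass and 1 m effective radius are my own illustrative assumptions, not figures from the post):

    ```python
    import math

    # Physical constants (SI units)
    HBAR = 1.054571817e-34  # reduced Planck constant, J*s
    C = 2.99792458e8        # speed of light, m/s

    def bekenstein_bound_bits(mass_kg: float, radius_m: float) -> float:
        """Upper bound on the information (in bits) a system of the given
        mass and radius can contain: I <= 2*pi*R*E / (hbar * c * ln 2),
        with E = m * c^2."""
        energy = mass_kg * C**2
        return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

    # Assumed figures for a human body: ~70 kg, ~1 m effective radius.
    bits = bekenstein_bound_bits(70.0, 1.0)
    megabytes = bits / 8 / 1e6

    # Years to exhaust the store at 1 MB of retained memories per year.
    years_to_fill = megabytes / 1.0
    print(f"{megabytes:.3e} MB, exhausted after {years_to_fill:.1e} years")
    ```

    With these assumptions the bound comes out around 2×10^38 MB, the same order of magnitude as the figure quoted above, and at 1 MB per year it would be exhausted after roughly 10^38 years, far short of any heat-death timescale of very roughly 10^100 years.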

    Does this mean we shouldn't even bother about the far future, or is there perhaps something we value even more than our memories? After all, we don't really mind much if we forget what we did a few years ago.

    Time-Consistency and Self-Reference

    It seems that there is something even more important than our causal history. I think that more than anything we care about our values and goals; indeed, we value the preservation of our values. As long as we want the same, we are the same. Our goal system seems to be the critical part of our implicit definition of self, that which we want to protect and preserve. Our values and goals are the missing temporal parts that allow us to refer to ourselves consistently, to identify our selves at different spatiotemporal positions.

    Using our values and goals as identifiers also resolves the problem of how we should treat copies of our self that feature divergent histories and memories, copies with different causal histories. Any agent that features a copy of our utility function ought to be incorporated into our decisions as an instance, a reification, of our selves. We should identify with our utility function regardless of its instantiation.
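    This identity criterion can be sketched as a toy model (the `Agent` class and the dictionary representation of a utility function are purely illustrative assumptions, not anything from the post): two agent instances count as the same self exactly when their utility functions agree, however much their memories diverge.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        """An agent instance: a utility function (here a mapping from
        outcomes to values) plus an accumulated, possibly divergent memory."""
        utility: dict[str, float]
        memories: list[str] = field(default_factory=list)

        def same_self(self, other: "Agent") -> bool:
            # Identity is carried by the utility function alone;
            # memories and causal history are deliberately ignored.
            return self.utility == other.utility

    original = Agent({"survive": 1.0, "learn": 0.5}, memories=["childhood"])
    copy = Agent({"survive": 1.0, "learn": 0.5}, memories=["backup restore"])
    stranger = Agent({"paperclips": 1.0})

    print(original.same_self(copy))      # divergent memories, same values
    print(original.same_self(stranger))  # different values
    ```

    The design choice mirrors the argument: equality is defined over the goal system, so a restored backup with different memories still counts as "us", while an agent with our full causal history but different values would not.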

    Stable Utility Functions

    To recapitulate: we can value our memories, the continuity of experience, and even our DNA, but the only reliable marker for the self-identity of goal-oriented agents seems to be a stable utility function. Rational agents with identical utility functions will to some extent converge on similar behavior and are therefore able to cooperate. We can identify more consistently with our values and goals than with our past and future memories, digitized backups, or causal history.

    But even if this is true, there remains one problem: humans might not exhibit goal-stability.

