    Intelligence Equals This Simple Game
    Project: Polytopia
    2 players each have 1 coin. Each round of the game, each player secretly lays their coin down heads or tails. It's a choice, not a random flip. One player is called EQUAL and the other is called XOR (exclusive-or, meaning not equal). If both coins are heads or both coins are tails, the EQUAL player gets 1 point. If one coin is heads and one is tails, the XOR player gets 1 point. Repeat many times. The player with the highest score at the end wins.
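
    Here is a minimal sketch of those rules in Python, assuming for illustration that both players choose randomly; real players would look for patterns in each other's choices.

        import random

        def play_round():
            equal_coin = random.choice(["heads", "tails"])
            xor_coin = random.choice(["heads", "tails"])
            # Matching coins score for the EQUAL player, mismatched coins for XOR.
            return "EQUAL" if equal_coin == xor_coin else "XOR"

        scores = {"EQUAL": 0, "XOR": 0}
        for _ in range(1000):
            scores[play_round()] += 1
        print(scores)  # whoever has the higher count wins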

    That game is the simplest possible intelligence test. It is the exact definition of intelligence.

    It is also a simpler version of the game "Rock Paper Scissors", where each player secretly chooses rock, paper, or scissors (instead of heads or tails), and who wins 1 point is decided by: rock crushes scissors, scissors cut paper, paper covers rock. Nobody wins a point if the 2 choices are equal. My EQUAL XOR game has 2 choices instead of 3 but measures intelligence the same way.

    If player 1 chooses rock more often than paper or scissors, then player 2 will learn to choose paper more often. Complex patterns will form between 2 intelligent players of "Rock Paper Scissors". Except for my simpler version of it (EQUAL XOR), Rock Paper Scissors is the most strategic and intelligent game ever created. It's the exact definition of intelligence except that it has an unnecessary third choice.
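
    A minimal sketch of that learning step, assuming a hypothetical player 1 who favors rock: player 2 counts player 1's past choices and plays the counter to the most frequent one.

        import random
        from collections import Counter

        BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

        def player1():
            # an assumed bias toward rock, just for illustration
            return random.choices(["rock", "paper", "scissors"], weights=[0.6, 0.2, 0.2])[0]

        history = Counter()
        p1_points = p2_points = 0
        for _ in range(1000):
            p1 = player1()
            # player 2 counters player 1's most common move so far (random on the first round)
            p2 = BEATS[history.most_common(1)[0][0]] if history else random.choice(list(BEATS))
            if p2 == BEATS[p1]:
                p2_points += 1            # player 2's move beats player 1's
            elif p1 == BEATS[p2]:
                p1_points += 1            # player 1's move beats player 2's
            # equal moves are a tie and score nothing
            history[p1] += 1
        print(p1_points, p2_points)       # player 2 pulls ahead by exploiting the bias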



    What can this game be used for?...

    I build artificial intelligence (AI) software, the kind that can eventually become what we see in the movies, except for the parts where it tries to take over the Earth and kill everyone.

    The Friendly AI paradox ( http://en.wikipedia.org/wiki/Friendly_AI ) is how to build an AI that is allowed to modify itself in any way but chooses only to modify itself in ways that work toward its original goal more effectively. Example: You are at a party. You want to dance with some girl but instead sit in a chair talking about how good she looks. To accomplish your goal of dancing with her, you order a beer and think maybe you will feel more like dancing after drinking it. You modified yourself by drinking the beer. A side-effect of that modification is a desire to drink more beer and run your mouth, which may lead to other things you did not predict. This is an analogy between AI and people. Most people learn how much to drink at a party, but in AI it is a serious research problem: not specifically about drinking at parties, but about how an AI can modify itself without unexpected side-effects that build up until the whole system crashes, or the AI ends up wanting to kill everyone, or other hard-to-predict things happen.

    Quote from: http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

    (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

    (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    The "3 laws of robotics" were an attempt to solve the Friendly AI paradox by forcing an AI (in a robot) to think certain ways, but that strategy will never work because AI will eventually become smart enough to modify itself. Its the same reason Humans do not do what animals command, even though simpler animals created Humans through evolution.

    Today that area of research is called "Friendly AI" but it is still very speculative: http://en.wikipedia.org/wiki/Friendly_AI

    As I define it, a Friendly-AI is an AI that has the ability to modify itself (including its goals), intelligently predicts what a possible modification would cause in the near and far future, and considers all of that before modifying itself. The result is that it creates new goals that more effectively work toward its original goals, without significantly changing those original goals. To satisfy the "friendly" part, its original goals are similar to the goals that the largest number of people could agree on.

    The best strategy we know of to build a Friendly-AI is to define its thought processes as a simulation of some new kind of physics that we define as math equations. Strategies like the "3 laws of robotics" will not result in a Friendly-AI. Those strategies are more likely to result in the kind of destructive AIs we see in movies. The correct strategy is to build it in a way that it wants to do certain things, not to add a system that controls it into doing those things. If it wants to do it, and if it's smart enough, then it will not try to change itself in a way that makes it stop wanting its original goals.

    Below, I will explain the progress I have made in designing a "simulation of some new kind of physics that we define as math equations" for the long-term goal of solving the Friendly-AI paradox:

    Start with the EQUAL XOR game I described above. Bits in computer memory can be substituted for the coins, and artificial intelligence code can be substituted for each of the 2 players.

    First, I'll explain some math. A vector in N dimensions is a list of N numbers. A 3-dimensional vector is a direction and length in 3d space, like pointing your finger in some direction and saying how far to go. A 2-dimensional vector is the same thing except without the up/down part. A 1-dimensional vector is the same thing but only forward and backward. A 0-dimensional vector is nothing. I'm going to use N-dimensional vectors, and it does not matter what N is. The more dimensions you have, the more choices there are in how to play the game. You only need 1 dimension, but it's more flexible with more.

    I'm going to remove some of the flexibility that is not needed. All vectors must be length 1, so in 2 dimensions, a choice is a point anywhere on the perimeter of a circle of radius 1. In 3 dimensions, it's anywhere on the surface of a sphere of radius 1. Here's the surprising part: in 1 dimension, since it has to be length 1, the only choices available are -1 and 1, and that exactly equals the EQUAL XOR game described in the first paragraph above. Just say 1 is EQUAL and -1 is XOR, or the opposite would work too. This makes the EQUAL XOR game work in any number of dimensions. I haven't changed what the game does. I've only added a way to use it gradually instead of all-or-nothing. I started with TRUE/FALSE and defined the idea of a continuous dimension wrapped around a circle/sphere/etc.
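
    A minimal sketch of that restriction, with the normalize helper as an assumed name: any nonzero vector can be scaled to length 1, and in 1 dimension the only unit-length vectors are -1 and 1, matching the two coin choices.

        import math

        def normalize(v):
            # scale a vector so its length becomes 1
            length = math.sqrt(sum(x * x for x in v))
            return [x / length for x in v]

        print(normalize([3.0, 4.0]))   # a point on the unit circle: [0.6, 0.8]
        print(normalize([-7.0]))       # 1 dimension: the only possible results are [-1.0] or [1.0]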

    What does it mean to play the EQUAL XOR game on the perimeter of a circle? Each player chooses a point somewhere on the perimeter of the circle. If the points are near each other, the EQUAL player wins more. If they are far from each other, the XOR player wins more.

    There is a way to write that in math: The dot-product of the 2 vectors (points on the perimeter of the circle) is the amount of score that moves from the XOR player to the EQUAL player. The dot-product is some number between -1 and 1, depending on which 2 vectors the players choose each round of the game.

    If the vectors are separated by a 90 degree angle, the dot-product is 0. If the vectors are equal, the dot-product is 1. If the vectors are exactly on opposite sides of the circle, the dot-product is -1. The dot-product is the cosine of the angle between the 2 vectors.
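
    A minimal sketch of that scoring rule, with the angles chosen only for illustration: the dot-product of the two unit vectors is the score transferred from the XOR player to the EQUAL player each round.

        import math

        def unit_vector(angle):
            # a point on the unit circle, chosen by its angle in radians
            return (math.cos(angle), math.sin(angle))

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        a = unit_vector(0.0)
        for degrees in (0, 90, 180):
            b = unit_vector(math.radians(degrees))
            # 1.0 when the choices are equal, 0.0 at a right angle, -1.0 when opposite
            print(degrees, round(dot(a, b), 6))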

    In this vector-based version of the EQUAL XOR game (which is a simplified version of the Rock Paper Scissors game), it is more accurate to call the EQUAL player the COSINE player, and call the XOR player the NEGATIVE-COSINE player. We could expand the game by adding other geometry functions like SINE, but simple is better. It's simply the dot-product (the overlap of one choice projected onto the other) between the 2 choices of the 2 players.

    All the basic logic operations (equal, xor, and, or, not, ...) can be done on the surface of circles/spheres/etc. this way, as gradual/continuous changes instead of the all-or-nothing way logic is normally done.
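
    A minimal sketch of that idea in 1 dimension, encoding TRUE as 1 and FALSE as -1. EQUAL and XOR come straight from the dot-product; the min/max choices for AND/OR are one common continuous option and an assumption here, not taken from the post.

        def EQUAL(a, b): return a * b        # 1.0 when the choices agree, -1.0 when they disagree
        def XOR(a, b):   return -a * b
        def NOT(a):      return -a
        def AND(a, b):   return min(a, b)    # assumed continuous choice, not from the post
        def OR(a, b):    return max(a, b)    # assumed continuous choice, not from the post

        T, F = 1.0, -1.0
        print(EQUAL(T, T), EQUAL(T, F))      # 1.0 -1.0
        print(XOR(T, F), XOR(F, F))          # 1.0 -1.0
        print(AND(T, F), OR(T, F), NOT(T))   # -1.0 1.0 -1.0
        # partial truth values between -1 and 1 give gradual answers instead of all-or-nothing
        print(EQUAL(0.9, 0.8))               # 0.72, "mostly equal"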

    That is the exact definition of intelligence and how to measure it as a game.

    Comments:


    BenRayfield     Sun, Oct 24, 2010
    The following news article explains an example of the Rock Paper Scissors game in real evolution. The game is played between large groups of lizards in different areas, where each area has different proportions of the 3 types of male lizards. Combined with other things that affect evolution, the lizards evolve in more intelligent ways than you would normally expect from evolution because of their use of Rock Paper Scissors competitions in the form of population ratios. The news article does not say that part, but there has to be some reason it evolved that way, and that is the only benefit that could come from it.



    Quote from: http://blogs.nationalgeographic.com/blogs/news/chiefeditor/2010/02/rock-paper-scissors-evolution.html

    The side-blotched lizard, Uta stansburiana, has three morphs differing in color and mating behavior, the UCSC explained in a news release this week.

    * Males with orange throats can take territory from blue-throated males because they have more testosterone and body mass. As a result, orange males control large territories containing many females.
    * Blue-throated males cooperate with each other to defend territories and closely guard females, so they are able to beat the sneaking strategy of yellow-throated males.
    * Yellow-throated males are not territorial, but mimic female behavior and coloration to sneak onto the large territories of orange males to mate with females.

    As this rock-paper-scissors game of the lizards cycled over the decades he studied them, Sinervo observed that the dominant morph in the population changed every four to five years. "It's like an evolutionary clock ticking between rock, paper, scissors then back to rock," he said in the UCSC statement.

    Now a new study, funded in part by the National Geographic Society's Committee for Research and Exploration, demonstrates that when the lizard's mating game collapses on one or two strategies it can begin to lead to a new species.
    sonicport+techfolder     Wed, Nov 3, 2010
    That's making sense, definitely a good answer to 'what is intelligence?' Thanks!
    BenRayfield     Wed, Nov 3, 2010
    Sonicport, if we combine our software- and music-related projects as I proposed in your thread ( http://spacecollective.org/sonicport/6357/Plural-Thinking ), then this algorithm for measuring and evolving intelligence could be part of some new music systems we would design together. What you call "plural thinking" is very similar to a Bayesian network ( http://en.wikipedia.org/wiki/Bayesian_network ), which is closely related to this very technical definition of intelligence.
     