2 players each have 1 coin. Each round of the game, each player secretly lays their coin down heads or tails. It's a choice, not a random flip. One player is called EQUAL and the other is called XOR (eXclusive-OR, meaning not equal). If both coins are heads or both coins are tails, the EQUAL player gets 1 point. If 1 is heads and 1 is tails, the XOR player gets 1 point. Repeat many times. The player with the highest score at the end wins.
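The scoring rule above can be sketched in a few lines of Python. This is a minimal illustration; the function names `play_round` and `play_game` are my own, not from the original description:

```python
# A minimal sketch of the EQUAL/XOR coin game. Each round both players
# secretly choose heads (True) or tails (False); EQUAL scores a point
# when the choices match, XOR scores when they differ.

def play_round(equal_choice, xor_choice):
    """Return (equal_points, xor_points) for one round."""
    if equal_choice == xor_choice:
        return (1, 0)
    return (0, 1)

def play_game(equal_moves, xor_moves):
    """Total scores over a sequence of simultaneous moves."""
    equal_score = xor_score = 0
    for e, x in zip(equal_moves, xor_moves):
        pe, px = play_round(e, x)
        equal_score += pe
        xor_score += px
    return equal_score, xor_score

# Example: EQUAL matches in rounds 1 and 3, XOR wins round 2.
print(play_game([True, True, False], [True, False, False]))  # (2, 1)
```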

That game is the simplest possible intelligence test. It is the exact definition of intelligence.

It is also a simpler version of the game "Rock Paper Scissors", where each player secretly chooses rock, paper, or scissors (instead of heads or tails), then who wins 1 point is decided by: rock crushes scissors, scissors cut paper, paper covers rock. Nobody wins a point if the 2 choices are equal. My EQUAL XOR game has 2 things to choose instead of 3 but measures intelligence the same way.

If player 1 chooses rock more often than paper or scissors, then player 2 will learn to choose paper more often. Complex patterns will form between 2 intelligent players of "Rock Paper Scissors". Except for my simpler version of it (EQUAL XOR), Rock Paper Scissors is the most strategic and intelligent game ever created. It's the exact definition of intelligence except it has an unnecessary third choice.

What can this game be used for?...

I build artificial intelligence (AI) software, the kind that can eventually become what we see in the movies, except for the parts where it tries to take over the Earth and kill everyone.

The Friendly AI paradox ( http://en.wikipedia.org/wiki/Friendly_AI ) is how to build an AI that is allowed to modify itself in any way but chooses only to modify itself in ways that work toward its original goal more effectively. Example: You are at a party. You want to dance with some girl but instead sit in a chair talking about how good she looks. To accomplish your goal of dancing with her, you order a beer and think maybe you will feel more like dancing after drinking it. You modified yourself by drinking the beer. A side-effect of that modification is a desire to drink more beer and run your mouth, which may lead to other things you did not predict. This is an analogy between AI and people. Most people learn how much to drink at a party, but in AI it is a serious research problem: not specifically about drinking at parties, but about how an AI can modify itself without unexpected side-effects that build up until the whole system crashes, the AI ends up wanting to kill everyone, or other hard-to-predict things happen.

Quote from: http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

(2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The "3 laws of robotics" were an attempt to solve the Friendly AI paradox by forcing an AI (in a robot) to think certain ways, but that strategy will never work because AI will eventually become smart enough to modify itself. It's the same reason humans do not do what animals command, even though simpler animals created humans through evolution.

Today that area of research is called "Friendly AI" but it is still very speculative: http://en.wikipedia.org/wiki/Friendly_AI

As I define it, a Friendly-AI is an AI that can modify itself (including its goals), intelligently predicts what a possible modification would cause in the near and far future, and considers all of that before modifying itself. The result is that it creates new goals that work more effectively toward its original goals, without significantly changing those original goals. To satisfy the "friendly" part, its original goals are similar to the goals that the largest number of people could agree on.

The best strategy we know of to build a Friendly-AI is to define its thought processes as a simulation of some new kind of physics that we define as math equations. Strategies like the "3 laws of robotics" will not result in a Friendly-AI. Those strategies are more likely to result in the kind of destructive AIs we see in movies. The correct strategy is to build it in a way that it wants to do certain things, not to add a system that controls it into doing them. If it wants to do it, and if it's smart enough, then it will not try to change itself in a way that makes it stop wanting to do its original goals.

Below, I will explain the progress I have made in designing a "simulation of some new kind of physics that we define as math equations" for the long-term goal of solving the Friendly-AI paradox:

Start with the EQUAL XOR game I describe above. Bits in computer memory can be substituted for coins, and artificial intelligence code can be substituted for each of the 2 players.
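As a sketch of substituting AI code for a player, here is a toy adaptive player that counts how often its opponent has played each bit and predicts the more frequent one. The class name and the strategy are my own illustrative choices; a real player would model much richer patterns than raw frequency:

```python
import random

# A toy "AI player" for the bit version of the game: it tracks how often
# the opponent has played 1 vs 0 and predicts the more frequent bit.
class FrequencyPlayer:
    def __init__(self):
        self.ones_seen = 0
        self.rounds = 0

    def predict_opponent(self):
        """Guess the opponent's next bit from observed frequencies."""
        if self.rounds == 0:
            return random.choice([0, 1])
        return 1 if self.ones_seen * 2 >= self.rounds else 0

    def observe(self, opponent_bit):
        """Record what the opponent actually played."""
        self.ones_seen += opponent_bit
        self.rounds += 1

# The EQUAL player wants to match its prediction; XOR wants to differ.
equal_player, xor_player = FrequencyPlayer(), FrequencyPlayer()
equal_score = 0
for _ in range(1000):
    e = equal_player.predict_opponent()      # EQUAL plays its prediction
    x = 1 - xor_player.predict_opponent()    # XOR plays the opposite
    equal_score += 1 if e == x else 0
    equal_player.observe(x)
    xor_player.observe(e)
```

With two identical frequency-counters the scores tend to stay close, which is the point: neither side can pull ahead without out-modeling the other.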

First, I'll explain some math. A vector in N dimensions is a list of N numbers. A 3-dimensional vector is a direction and length in 3D space, like pointing your finger in some direction and saying how far to go. A 2-dimensional vector is the same thing except without the up/down part. A 1-dimensional vector is the same thing but only forward and backward. A 0-dimensional vector is nothing. I'm going to use N-dimensional vectors, and it does not matter what N is. The more dimensions you have, the more choices there are in how to play the game. You only need 1 dimension, but it's more flexible with more.

I'm going to remove some of the flexibility that is not needed. All vectors must be length 1, so in 2 dimensions, it's a point anywhere on the perimeter of a circle of radius 1. In 3 dimensions, it's anywhere on the surface of a sphere of radius 1. Here's the surprising part: In 1 dimension, since it has to be length 1, the only choices available are -1 and 1, and that exactly equals the EQUAL XOR game described in the first paragraph above. Just say 1 is EQUAL and -1 is XOR, or the opposite would work too. This makes the EQUAL XOR game work in any number of dimensions. I haven't changed what the game does. I've only added a way to use it gradually instead of all-or-nothing. I started with TRUE/FALSE and defined the idea of a continuous dimension wrapped around a circle/sphere/etc.
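The length-1 restriction, and the way it collapses to heads/tails in 1 dimension, can be checked with a small normalization sketch (the function name `normalize` is mine):

```python
import math

def normalize(v):
    """Scale a non-zero vector to length 1 (a unit vector)."""
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

# In 2 dimensions a unit vector is a point on the unit circle:
print(normalize([3.0, 4.0]))   # [0.6, 0.8]

# In 1 dimension the only unit vectors are [1.0] and [-1.0],
# which recovers the heads/tails coin game:
print(normalize([2.5]))        # [1.0]
print(normalize([-2.0]))       # [-1.0]
```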

What does it mean to play the EQUAL XOR game on the perimeter of a circle? Each player chooses a point somewhere on the perimeter of the circle. If the points are near each other, the EQUAL player wins more. If the points are far from each other, the XOR player wins more.

There is a way to write that in math: The dot-product of the 2 vectors (points on the perimeter of the circle) is the amount of score that moves from the XOR player to the EQUAL player. The dot-product is some number between -1 and 1, depending on which 2 vectors the players choose each round of the game.

If the vectors are separated by a 90 degree angle, the dot-product is 0. If the vectors are equal, the dot-product is 1. If the vectors are exactly on opposite sides of the circle, the dot-product is -1. The dot-product is the cosine of the angle between the 2 vectors.
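Those three cases can be verified numerically with a short sketch (the helper names `dot` and `on_circle` are mine):

```python
import math

def dot(a, b):
    """Dot product: the payoff that moves from the XOR player
    to the EQUAL player each round."""
    return sum(x * y for x, y in zip(a, b))

def on_circle(angle_degrees):
    """A unit vector on the circle at the given angle."""
    r = math.radians(angle_degrees)
    return [math.cos(r), math.sin(r)]

# Same point: payoff 1 to EQUAL.  90 degrees apart: payoff 0.
# Opposite sides of the circle: payoff -1 (1 point to XOR).
print(dot(on_circle(0), on_circle(0)))              # 1.0
print(round(dot(on_circle(0), on_circle(90)), 10))  # 0.0
print(dot(on_circle(0), on_circle(180)))            # -1.0
```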

In this vector-based version of the EQUAL XOR game (which is a simplified version of the Rock Paper Scissors game), it is more accurate to call the EQUAL player the COSINE player, and call the XOR player the NEGATIVE-COSINE player. We could expand the game by adding other geometry functions like SINE, but simple is better. It's simply the dot-product (the overlap when viewed at a perpendicular angle) between the 2 choices of the 2 players.

All the basic logic operations (equal, xor, and, or, not...) can be done on the surface of circles/spheres/etc this way as gradual/continuous changes instead of all-or-nothing like logic is normally done.
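One possible continuous reading of those operations, in 1 dimension with truth values running from -1 (FALSE) to 1 (TRUE): EQUAL and XOR come straight from the dot product as described above, while the NOT/AND/OR forms below are my own illustrative choices (common in fuzzy logic), not something the text defines:

```python
# Continuous logic on truth values in [-1, 1].  EQUAL and XOR are the
# 1-dimensional dot product and its negation; NOT/AND/OR are one common
# fuzzy-logic choice, shown only as an illustration.

def c_not(a):      return -a
def c_equal(a, b): return a * b      # 1-D dot product
def c_xor(a, b):   return -a * b
def c_and(a, b):   return min(a, b)  # illustrative choice
def c_or(a, b):    return max(a, b)  # illustrative choice

# At the endpoints -1 and 1 these match ordinary all-or-nothing logic:
print(c_xor(1, -1))   # 1  (one heads, one tails -> XOR wins)
print(c_equal(1, 1))  # 1  (both heads -> EQUAL wins)
print(c_not(-1))      # 1
```

In between the endpoints the same formulas give gradual degrees of truth instead of all-or-nothing answers.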

That is the exact definition of intelligence and how to measure it as a game.

Sun, Oct 24, 2010

Categories: intelligence, software, friendlyai, math, game, skynet, terminator, matrix, extinction

Sent to project: Polytopia
