Sun, Apr 17, 2011
The main difference between the 2d visual grid and the 2d sound grid is that the sound grid is aligned to time but never space, while the visual grid is aligned mostly to space and partly to time. The audio grid moves forward with time, but you can also skip ahead (the way you can pay attention to what's on the left side of your vision, skipping in space), or slow down what you expect to hear so you're surprised when you hear it sooner, or listen to the same sounds slower or faster and still recognize them. The 2d audio grid has stretching and bending, but it does not normally have rotation.
To understand how the visual grid is connected to time, stand up and spin in a circle until you get dizzy. Then stop and look straight ahead. Whatever you're looking at will continuously move to the side at the same speed you were spinning, but when it gets too far it jumps back. That's because your 2d visual grid has a speed dimension, but it's not a full dimension, because each part of the 2d grid has only 1 speed and direction (and no acceleration; that's only in the 3d+1d grid). Each thing you're looking at moves in some direction. If it instantly stops, you will see it continue moving for a fraction of a second, and then become surprised as memories are accessed to try to match the new thing you're looking at (the stopped object).
The main differences between the 2d visual grid and the 2d audio grid are (1) the audio grid does not normally do rotation, and (2) the audio grid's time dimension is the same as your visual grid's horizontal space dimension when you're dizzy from spinning horizontally. After realizing the similarity, I thought about creating software that would see through a camera and translate the picture to audio, with tone as one dimension and time as the other, scanning over the video a few times per second as you hear it, which could be used to see through your ears. I did a search and found it already exists:
I downloaded it on my Android phone for free. Things lower in the phone's vision have a lower tone. It's hard to use because the moving horizontal line scans the picture too slowly, and they could have chosen different sound effects for translating what it sees, but it's close to what I was thinking, based on how I recently realized brains work. If you want to understand how the 2d visual grid and the 2d sound grid of your mind are similar and can be connected, search Android apps for "The vOICe for Android" or other such systems. The fact that some blind people now see through their ears confirms my 2d audio grid theory and what I wrote here:
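The tone-for-vertical, time-for-horizontal mapping can be sketched in a few lines of code. This is an illustrative sketch only: the frequency range, scan rate, and function names here are made up for the example and are not The vOICe's actual parameters or algorithm.

```python
import math

def sonify(image, sample_rate=8000, col_duration=0.05,
           f_low=200.0, f_high=2000.0):
    """Turn a grayscale image (list of rows, values 0..1) into audio samples.

    Columns are scanned left to right (the time axis); each row maps to a
    sine tone, with pixels nearer the top getting higher frequencies, and
    brightness setting loudness. All constants are illustrative.
    """
    n_rows = len(image)
    n_cols = len(image[0])
    # One sine frequency per row: high at the top, low at the bottom.
    freqs = [f_high - (f_high - f_low) * r / max(n_rows - 1, 1)
             for r in range(n_rows)]
    n = int(sample_rate * col_duration)   # samples per column
    samples = []
    for c in range(n_cols):               # time axis: left to right
        for i in range(n):
            t = i / sample_rate
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            samples.append(s / n_rows)    # keep amplitude roughly in [-1, 1]
    return samples

# A 3x4 "image": a bright diagonal produces a falling tone sweep.
img = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 1.0]]
audio = sonify(img)
```

Writing `audio` to a sound device or WAV file would let you hear the sweep; the sketch stops at raw samples to stay self-contained.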
Unlike visual neurons, which are connected to your 3d grid neurons, there is no rotation between the 2 dimensions of your sound grid. Here's what separates Human minds from animal minds: we can learn new types of thinking. We are born with a brain shape that tends toward learning to rotate things visually. The 2d grid is a type of thinking. The 3d+1d_for_time grid is a type of thinking. You can think about 4 notes at once, as 4 dots in your vision. Align your 2d visual grid to your 2d sound grid, and imagine a sound the way you imagine it in your vision. Vertically, the 4 dots play 4 notes at the same time. Rotate them to horizontal and they play the same note 4 times. Rotate them 45 degrees and they play 4 different notes (closer together in tone than if they were completely vertical) at 4 different times (also closer together in time, for the same reason). Think about that long enough and you'll learn a rotation ability in your 2d audio grid. You would re-use the 3d+1d grid for rotating between tone and time, the same way you use the 3d+1d grid for rotating your vision. If you do this thought-experiment enough, you would recognize sound rotated that way when you heard it.
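The four-dots thought-experiment is plain 2d geometry, and can be sketched by treating time as the horizontal axis, tone as the vertical axis, and applying a standard rotation matrix. This is an illustration of the geometry, not a model of neurons:

```python
import math

def rotate(points, degrees):
    """Rotate (time, tone) points about their centroid by the given angle."""
    a = math.radians(degrees)
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out

# Four dots in a vertical line: one instant, four simultaneous notes (a chord).
chord = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (0.0, 3.0)]

# Rotated 90 degrees they lie on a horizontal line: one note played 4 times.
melody = rotate(chord, 90)

# Rotated 45 degrees: 4 different notes at 4 different times, and the spacing
# in both tone and time shrinks by cos(45 degrees) ~= 0.707, which is why the
# notes end up closer together in both dimensions.
arpeggio = rotate(chord, 45)
```

The cos(45°) factor makes the paragraph's "closer together in tone and time" claim exact: a diagonal line's projection onto each axis is shorter than the line itself.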
There is much disagreement on whether brains have any quantum or nonlocal abilities, even in small amounts. If you search YouTube for "psi wheel in a box 2" you'll see a video of me moving a piece of aluminum foil with my mind, so some of us have direct experience to answer that question. You can read about the debate here:
In the context of how Human intelligence works, I'll explain where that fits in. Telekinesis (moving things with your mind) starts by using the 3d+1d grid in a very general way, with a centered mind. In my mind I see a 3d moving field, a smooth surface without colors, and my thoughts bend it in complex ways. When I change my subconscious thoughts in a way that makes the "3d moving field" become flat and stop moving, then I am in a very centered state of mind and get the more advanced access to reality: I form ideas in my mind of whatever I'm looking at (like shoe-over-elephant is an object, described above), by directing my attention to each part of the object individually and thinking about what it would feel like to touch each part of it. When I have a representation of the target object in my mind, and I continue to adjust my subconscious thoughts to keep the "3d moving field" balanced and motionless, then the other parts of telekinesis are done exactly like dreaming. Example: rotate the object in your mind the same way you normally think about objects from different angles, and if you did it right, the object rotates in reality. It feels like reality is another part of your mind, like you're dreaming but have only a very weak ability to change the dream, but that does not mean that's what reality really is. It means that's a way you can think to learn how to do telekinesis. Psychic abilities (including telepathy and telekinesis) are done mostly in the 3d+1d grid and are a little connected to the 2d audio grid (temporal lobe) parts of my brain. I don't see or hear them, but those are the types of thinking they are most similar to. Most people have no rotation ability in their 2d audio grid, but they can learn it. Similarly, most people never learn to use their telepathic or telekinetic abilities, while they do have the brain parts to access them. There is no brain part that only does psychic things. Each part has many functions.
How we understand physics: a basketball moving toward a wall is matched to memories of a basketball moving toward a wall, adjusted in the speed dimension of the 3d+1d grid; the time of the sound is predicted, and similar memories are updated depending on whether the sound is experienced before or after the expected time. Whichever idea happens first, the expected sound or the experienced sound, is wired toward the other, which defines the relative timing of memories. If you always expect sounds to happen a little after they really do, your expectation will slowly adjust. Since your experience of the ball hitting the wall happened before your older memory of it, the next time you see a ball going toward a wall and adjust the memory to match its speed (in the space and time stretching dimensions of the 3d+1d grid), your prediction of the time of the sound will be earlier, because the earlier event was wired toward the older memory, which makes it happen sooner.
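The "expectation slowly adjusts" behavior can be sketched as a plain error-driven (delta-rule) update. This formula and its learning rate are my own illustrative stand-in for the informal wiring description above, not a claim about the brain's actual mechanism:

```python
def update_expectation(expected_time, experienced_time, rate=0.25):
    """Nudge the predicted sound time toward the experienced one.

    A standard delta-rule update: move the prediction a fraction of the
    way toward what actually happened. The rate of 0.25 is arbitrary.
    """
    return expected_time + rate * (experienced_time - expected_time)

# If you always expect the bounce 0.2 s later than it really happens,
# repeated experiences pull the expectation earlier each time.
expected = 1.2   # predicted seconds until the sound (illustrative numbers)
actual = 1.0     # when the sound really arrives
history = []
for _ in range(10):
    expected = update_expectation(expected, actual)
    history.append(expected)
# history decreases toward 1.0: each surprise rewires the timing a little.
```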
The 3d+1d grid is 4 dimensions. Time is a dimension because you can think of the same 3d objects in different 3d positions in the same thought, you can easily move forward or backward in time to see the 3d objects in different positions, and you can easily see all such 3d positions overlapping in the same thought. The most central part of Human intelligence is the 4d grid of x, y, z, and time. It's used to combine all ideas of dimensions, numbers, fractions, rotation, stretching, bending, cause and effect, representation of your body and movements, and for translating between different kinds of dimensions, like the tone dimension and the horizontal visual dimension.
The 3d+1d grid (sometimes I call it the 4d grid, but only for ideas that use time) is a grid of numbers, while a dimension can be just 1 number. There is a grid of positions of objects in space and time in your mind. Every idea that has position or rotation etc shares neurons with many other such objects, and their association, as a directed-network of neurons, forms the 3d+1d grid. There are more than 3+1 dimensions for some ideas. There are 3 space dimensions, 3 rotation dimensions, and for each of those 6 dimensions there are position, speed, and acceleration. Time is similar, but with no rotation, and just position and speed, no acceleration. That is 18+2 dimensions. Example: speed of rotation around the x dimension, which modifies ideas in the 3d+1d grid. When you think about a coin spinning, it has negative acceleration as its spinning slows. You can represent such slowing of spinning easily in your mind. That is an idea which has a negative position in a rotation-acceleration dimension of your mind, a cached (a computer word) idea of a specific motion of a coin, so you can think about similar motions of coins faster and easier.
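The 18+2 count can be checked by enumerating the dimensions exactly as listed above: 6 axes (3 space, 3 rotation), each with position, speed, and acceleration, plus time with position and speed only.

```python
# Enumerate the dimensions as counted in the paragraph above.
space_axes = ["x", "y", "z"]
rotation_axes = ["rot_x", "rot_y", "rot_z"]
orders = ["position", "speed", "acceleration"]

# 6 axes x 3 orders = 18 spatial/rotational dimensions...
dimensions = [(axis, order)
              for axis in space_axes + rotation_axes
              for order in orders]
# ...plus time, which gets position and speed but no acceleration: +2.
dimensions += [("time", "position"), ("time", "speed")]
```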
But not all dimensions are grids. The 3 space position and 3 space rotation and 1 time position dimensions are grids, while other dimensions are normally only single positions. The difference between a grid and a single point in a dimension is how many examples of it are in your mind. Grids are created from the networking of many examples together, aligning parts of them together.
For this part, you would need to understand more math and artificial intelligence. Language is a depth-first traversal of the directed-networks of ideas, where each node in such networks is an idea or a wildcard, and they fit together through the wildcards. If you use the 2 chatbots (1 at word granularity and 1 at letter granularity, same algorithm) in my CodeSimian software (in the Intelligent Chatbots tab; copy/paste text into the top and click the button to give it something to talk about first), then you will see a depth-first traversal of directed-networks of words (or letters, which form into words the same way words form into sentences). It's a very simple algorithm but creates surprisingly Human-like responses if you have a conversation with it. It appears to understand grammar and has a short memory of what you were just talking about, without having anything except a statistical memory of which words lead to which other words and a simple neural network over that directed-network.
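The "statistical memory of which words lead to which other words" can be sketched as a word-bigram chain. This is a generic sketch, not CodeSimian's actual code; the function names and the example corpus are invented for illustration:

```python
import random
from collections import defaultdict

def train(text):
    """Record which word leads to which other words: a directed network."""
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def respond(follows, seed_word, length=8, rng=random):
    """Walk the word network from a seed word, one edge at a time.

    Each step picks a word that has followed the current word before,
    so frequent continuations are picked proportionally more often.
    """
    out = [seed_word]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break                 # dead end: no word has ever followed this one
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = ("the ball hits the wall and the sound follows "
          "the ball bounces off the wall")
chain = train(corpus)
reply = respond(chain, "the", rng=random.Random(0))
```

The letter-granularity bot described above would be the same algorithm with characters in place of words.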
Similarly, goals are also a directed-network. When a cycle is found in such a network, the goals can not be accomplished until something changes. In such a network, a node points at its subgoals. Goals are an emergent property of networks of memories and many other parts of your mind. Goals are simply the recursion of ideas that lead to more memories of pleasure and fewer memories of pain.
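Finding such a blocking cycle is standard depth-first search over the subgoal network. A minimal sketch, with goal names invented as examples:

```python
def has_cycle(goals):
    """Detect a cycle in a goal network, where goals[g] lists g's subgoals.

    A cycle means some goal is (transitively) its own subgoal, so the
    whole chain is blocked until something changes.
    """
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {g: WHITE for g in goals}

    def visit(g):
        color[g] = GRAY
        for sub in goals.get(g, []):
            if color.get(sub, WHITE) == GRAY:
                return True               # back edge: g depends on an ancestor
            if color.get(sub, WHITE) == WHITE and sub in goals and visit(sub):
                return True
        color[g] = BLACK
        return False

    return any(color[g] == WHITE and visit(g) for g in goals)

# "Get a job" needs "have experience" which needs "get a job": blocked.
stuck = {"get_job": ["have_experience"], "have_experience": ["get_job"]}
# A plan whose subgoals bottom out can be accomplished.
fine = {"eat": ["cook"], "cook": ["buy_food"], "buy_food": []}
```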
Game-theory, where people (or robots) think about what others are thinking about others... is simply another form of directed-network where some of the nodes are yourself and some are other people/robots/etc. Ethics comes from your goals about yourself being translated through such a directed-network so you can think about others having those same goals, and recursively about how many people/robots/etc can work together to all accomplish their common goals. The kind of ethics most people have is this simple equation, which I've described in a non-technical way. That is not the only kind of ethics, but it's a good approximation of most people's ethics, and it can be coded into an artificial intelligence by evolving software toward the ways of thinking I've described.
Here's a thought-experiment to show another use of the 2d visual grid in representing an abstract idea. How do you represent the idea of clear glass in your mind? It's not a color, but in some ways it is. How do you represent the part where you can see through it? When I asked myself that question, I got a visual of a person on the right, a glass window in the middle, a box on the left, and a straight line from the person's eyes to the box through the glass. When I thought about something that's not clear, the glass in the middle was replaced by it, and the line was cut off so it went only from the middle to the person's eyes. That's how I visually represent the difference between clear and not clear: as a line-of-sight being cut off or not. A complex idea becomes simple geometry of intersecting straight lines. No new type of thinking is needed to represent the idea of clear. It's represented in the 2d visual grid using geometry, and more than that, as many memories of looking through glass.
I've explained how ideas of 3d objects are made of relative positions and rotations of other 3d objects, like the idea of an elephant is made of the elephant's parts in relative positions to each other, and each part is made of smaller parts. In the next thought-experiment, we will learn that brains can think in fractals by curving a hierarchy around to loop into itself. In math, that's called turning a "tree" into a "directed network".
This thought-experiment is about fractals. Think of an apple with small bananas pointing out from its surface. On each banana there are smaller apples. Like the elephant thought-experiments, the bananas are blurry or do not appear until you pay attention to the specific part of the apple where they should be, and recursively for the smaller apples on each banana. This is a directed-network where apple points at repeat_banana, which points at banana, which points at repeat_apple, which points at apple2. You don't see bananas on the smaller apples until you decide that apple and apple2 are the same object. That's how you change the hierarchy into a fractal. Then it goes as deep (apple, banana, apple, banana, etc.) as you choose to look into it. It displays on your visual neurons as a fractal. Directed-networks with cycles are fractals. Directed-networks without cycles are hierarchies, also known as trees. A man may sleep with many women who each sleep with many men. That is also a fractal, because it's a directed network with a cycle between man and woman.
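The apple/banana idea can be sketched directly: a two-node directed network with a cycle, unrolled into a tree only as deep as you choose to look. The representation (dicts and lists) is my own illustration:

```python
def expand(graph, node, depth):
    """Unroll a directed network into a tree, depth levels deep.

    A hierarchy (tree) bottoms out on its own; a network with a cycle
    unrolls forever, so the depth limit plays the role of "how far you
    choose to look into the fractal".
    """
    if depth == 0:
        return node
    return {node: [expand(graph, child, depth - 1)
                   for child in graph.get(node, [])]}

# apple points at banana, banana points back at apple: a cycle, so a fractal.
fractal = {"apple": ["banana"], "banana": ["apple"]}
view = expand(fractal, "apple", 4)
# view alternates apple, banana, apple, banana... down to the chosen depth.
```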
How can the same object be drawn at different parts of the 2d visual grid? How is the part selected? Whichever part of the 2d grid is most active or has the closest pattern to the object to be drawn, as in completing a pattern from a partial pattern.
What makes Humans smarter than animals the most is their ability to create new "types of thinking" in their minds. Example: probability/statistics is something we did not evolve to think. We estimate chances in a very inaccurate way. Most people answer the following question wrong: if I flipped 2 coins and at least 1 coin landed heads, then what's the chance both landed heads? It's 1/3, not 1/4 or 1/2 like most people think, because there are 4 equally likely ways 2 coins can land, and I only excluded "both tails" when I said "at least 1 landed heads", so that leaves 3 possibilities, and I asked for the chance of 1 of those 3 things. It's 1/3. If you still don't believe it, flip 2 coins many times, only ask the question when at least 1 of them lands heads, and you will see that 1/3 of the time you ask the question, they both landed heads. The flaw in Human minds is the need to choose 1 of the coins and say it certainly landed heads, but you can't do that, since whichever coin you choose may have landed tails, and it does change the answer to the question if you take that shortcut. This is the simplest example where AI usually beats Humans at a common-sense question.
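The "flip 2 coins many times" check can be run as a short simulation, counting only the trials where at least 1 coin lands heads, exactly as the question is posed:

```python
import random

def conditional_both_heads(trials, rng):
    """Estimate P(both heads | at least one head) by simulation."""
    asked = both = 0
    for _ in range(trials):
        a = rng.random() < 0.5   # True means heads
        b = rng.random() < 0.5
        if a or b:               # only ask when "at least 1 landed heads"
            asked += 1
            if a and b:
                both += 1
    return both / asked

# A fixed seed makes the run repeatable; the estimate comes out near 1/3,
# not the 1/4 or 1/2 that intuition suggests.
estimate = conditional_both_heads(100_000, random.Random(42))
```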
The "type of thinking" called probability, if understood and used at an intuition level and as a permanent part of your thinking, is 1 of the main things that separates geniuses from average people. It's so nonintuitive that most people answer the 2-coin question (as I wrote above) wrong. 2 coins are not important, but what is important is what it means about your understanding of the combination of any 2 events. Most people estimate the chance of "both events happened" (or "both landed heads") as 1/2 or 1/4 instead of 1/3, and that is why gambling is illegal in many places... People are so bad at understanding the combination of 2 events that they feel a need to protect each other from their own lack of understanding of probability.
Everything I've written can be summarized into these "types of thinking":
* grid of numbers, where number could be brightness, loudness, electricity, how much, how relevant, etc.
* single point on a dimension.
* directed-network (of nodes pointing at nodes) with optional cycles, where nodes can be ideas, wildcards, or anything else your brain can represent.
* an idea is a directed-network where each node (a neuron in this case) has a specific electricity amount, plus the patterns those neurons tend to have depending on what electricity they receive.
Smell and taste are unordered numbers, an amount of each type of thing you smell or taste, a group of "single point on a dimension".
Your skin is a directed-network where each node is what a part of your skin can feel, and the connections are which parts of skin are next to which other parts. The network forms that way when things rub across your skin: skin that is beside other skin gets associated together. Pain in the skin (or other pain) activates certain neurons that release chemicals which cause neurons to learn the reverse of what they just learned (at least that's my theory), to avoid the pain later. Ideas form in directed-networks, and goals are formed by avoiding things that lead to memories of pain and working toward things that lead to memories of pleasure. A directed-network can represent recursion, goals, relationships between ideas, and many other things.
Vision is a 2d grid connected to the 3d+1d grid, which is really an 18d+2d grid when you include position, rotation, speed, acceleration, and time.
Sound is a 2d grid of tone and time.
All grids connect to all other grids as associations are made. Any idea or pattern can associate with any other, and such associations become usable as an idea that activates a directed-network. Example: some of my ideas cause me to use my vision more than my hearing, and some cause me to use my movement and space/time skills more (like basketball). There are ideas which activate directed-networks (with amounts for each node) to influence how your brain operates at a high level. Information flows in many patterns, high level to low and low level to high, and sometimes in fractal patterns, always learning from the top-level goal: avoid memories of pain and go for memories of pleasure.
That is half of Human intelligence. The other half can be evolved with artificial intelligence applied to figuring out which patterns of simulated electricity in neural networks tend to create these thought patterns. Start with a simple 3d simulation, where the AI can see in whatever direction it points its simulated head, put some cubes, triangles, straight lines, and things similar to a baby's toys in the simulation, and first get the AI to represent positions and rotations of those simple objects. After that, add some more senses, and get it to learn ideas like counting, which objects are equal to which other objects, movement, acceleration, and so on... Teach the AI the same way you would teach a Human baby. What I've described is how Human intelligence works, but it is specific enough that you could evolve an AI (artificial intelligence) to do the same thing, if you had enough processing speed and memory. It takes a lot of processing speed and memory because the sparse grids of relative object positions, which form memories as recursive objects and ideas, form a new memory for each rotation, each acceleration, and each stretching of such objects. Human brains are a huge cache repeating variations of the same objects and ideas, forming patterns based on such caches, and exploring those patterns. You need a lot of CPU speed and memory for it. On 1 computer, start by trying to simulate a parrot. Get it to listen through the microphone and make parrot sounds, and eventually learn to talk by the example of you talking through the microphone. A 1 GHz computer should be able to simulate a parrot brain, probably 1 of the dumber parrots. For Human intelligence you need a lot more, but we do have more than enough computers connected to the internet and running right now, if we would combine them for such a Human simulation.
I do not mean this writing as a complete explanation of Human intelligence, only as some examples of how to observe your own mind and figure out the rest. To explain everything I know about Human intelligence would be very long, and I may have some problems putting it into words. The best way to learn how Human intelligence works is to start with your visual neurons, "what we think our eyes see", and work backward from there with thought-experiments. The scientists with their billions of dollars of research will catch up to us eventually, but by then we will have already created the specific kind of artificial intelligence that we want. I hope we can agree on what kind we want, because there may not be much time left to choose. Human intelligence is simply directed-networks of ideas and wildcards, and grids, and single points on dimensions. With a design that simple, exceeding Human intelligence in a machine, or amplifying it through machines and the internet, is closer than most people think.
Tue, Apr 19, 2011
What you think your eyes see is mostly a memory. People think they're seeing what's in front of their eyes, but the way we think while awake is more like a dream than like what's in front of our eyes. I often look straight at something and don't see it because I don't remember it. Next time you see something unlike anything you've seen before, close your eyes and think about drawing a picture of it. If you saw 2 birds behind it, are you sure you didn't see 3 birds? Was the first bird flapping its wings up or down just before you closed your eyes? My picture would be very blurry. If you were standing beside me, you wouldn't be able to tell my picture was of the same thing you're looking at. I used to have a visual memory, able to draw such a picture very accurately, but I decided there were more advantages to not thinking in such a strict logical way, and I slowly lost the ability.
Ben, I really liked the introduction. I see that you were later inspired by my writing. The interpretation of visual input ends up with the neurons that represent complex items. May god help us all, we have a neuron that stands for you.
In the brain, all neurons are reachable via 3 neurons. So how would the brain know which information is important or what stands out? Logically, the debates travel through the feedback networks, which dominate the cortex.
The mental landscape is formed by magnifications and minifications, and ultimately selections of data.
The brain is obliged to be faithful to the outer world and its coordinate system at the input gateway (senses) and on the output (at the execution). In between, in the processing areas, the brain possesses other coordinate systems that are shaped in ways in which the neural surroundings are worlds to the neurons.
The challenge consists in answering why any of these coordinate systems would be faithful to something out there in the universe.
Sun, May 22, 2011
Here is an app that translates what it sees into sound. It is another case of the human learning the machine's sound, which is the main problem with these things. I think it would be better if the light frequencies were translated into less raw data for the listener, designed to be interpreted as cues, signs, icons, sizes, etc.