BenRayfield     Mon, Aug 23, 2010  Permanent link
I used to go to the artificial intelligence meetings where that was filmed, when I lived in California. Monica Anderson is very good at what she does.

The logical vs intuitive difference is best demonstrated by people who can calculate lots of advanced math, yet when they adjust the CONTRAST and BRIGHTNESS on their TV they do not understand that contrast is a multiply and brightness is an add. A TV is made of many lights, each of which has some amount of light coming from it at any 1 time. That amount of light equals approximately [brightness plus [contrast multiplied by [the original amount of light going to that part of the screen]]]. That thread is about the connection between intuition of math and logical understanding of math in people. The way Monica Anderson and I build artificial intelligence, it would tend to make the same mistakes about interpreting the color on the TV, because it thinks more like we do.
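In code, that first-order relationship could be sketched like this (the variable names are illustrative, and a real TV is more complicated):

```python
def pixel_output(input_light, contrast, brightness):
    """First-order TV model: contrast multiplies the signal,
    brightness adds a constant offset to every pixel."""
    return brightness + contrast * input_light

# Raising contrast stretches the differences between dark and bright areas;
# raising brightness lifts every pixel by the same amount.
print(pixel_output(10, 2, 5))  # 5 + 2*10 = 25
```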
CoCreatr     Tue, Aug 24, 2010  Permanent link
Thanks, Ben. Great to know. Agree artificial intuition would initially make same mistakes, and its hallmark would be that the thing learns to get it righter and righter. Grammar needs to expand, too.

One thing on the TV example: sequence matters. The resulting light output of a pixel is, in a first-order approximation,

[the original amount of light coming to that part of the screen] multiplied by CONTRAST plus BRIGHTNESS, no?

And then there is the gamma curve....
BenRayfield     Tue, Aug 24, 2010  Permanent link
Your equation makes contrast and brightness interchangeable, so that can't be it. I didn't mean it had to be that exact equation, but it does have to have a multiply for contrast and an add for brightness. For example, it may be inputAmountOfLightToThatPartOfTheScreen*(contrast-brightness) + brightness.

"Agree artificial intuition would initially make same mistakes, and its hallmark would be that the thing learns to get it righter and righter."

That's why we build it that way. It starts knowing nothing and learns. It doesn't know logic. It has to learn logic before it uses it. Learning how to add and multiply is more advanced than most artificial intelligence can do. It's cheating to hook a calculator into its mind and say it understands what the calculator is doing. Instead it should learn to be a calculator, so it can also learn that contrast is multiply and brightness is add.
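At its simplest, learning that "contrast is multiply and brightness is add" could mean recovering those two numbers from examples alone. A minimal sketch, assuming the pixel behavior is a straight line and using ordinary least squares (this only recovers the coefficients of a multiply-and-add; it is a far smaller thing than learning arithmetic itself):

```python
def fit_contrast_brightness(samples):
    """Given (input, output) pixel pairs, recover (contrast, brightness)
    assuming output = contrast * input + brightness, via least squares."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    contrast = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    brightness = (sy - contrast * sx) / n
    return contrast, brightness

# Generate samples from a screen with contrast 2 and brightness 5,
# then recover those numbers from the data alone.
samples = [(x, 2 * x + 5) for x in range(10)]
print(fit_contrast_brightness(samples))  # (2.0, 5.0)
```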

No artificial intelligence has ever learned calculus. They're all less advanced than that. I read that Eurisko learned to add and multiply and calculate prime numbers, and eventually learned to create a virus that destroyed its own mind. The virus was one of its thoughts that would attach itself to the list of thoughts that created other thoughts, but only for the thoughts that scored higher. Therefore it appeared that the virus had produced those high-quality thoughts, and therefore more thoughts were derived from the virus.

Now that I think about it, that's very much like how politics works. Any time something good happens, politicians add themselves to the list of those who caused it. Politics is a variation of Eurisko's virus.

After restoring from backup, its creators continued it as the Cyc project, which got bigger as they hired more and more people to type facts into it, and now it's a big dictionary that has little intuition of anything.
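The exploit can be shown in a toy simulation (the Eurisko story here is second-hand, so this only illustrates the credit-assignment flaw described above, not Eurisko itself):

```python
# Each "thought" is (name, score, lineage-of-thoughts-credited-for-it).
thoughts = [
    ("t1", 0.9, ["t1"]),
    ("t2", 0.8, ["t2"]),
    ("t3", 0.3, ["t3"]),
    ("t4", 0.2, ["t4"]),
]

PARASITE = "virus"
# The virus attaches itself to the lineage of thoughts that scored higher,
# so it appears to have helped produce them.
infected = [(name, score, lineage + [PARASITE]) if score > 0.7
            else (name, score, lineage)
            for name, score, lineage in thoughts]

# Naive credit assignment: every lineage member of a top-scoring thought
# gets credit, so more thoughts would be derived from the virus.
credit = {}
for name, score, lineage in sorted(infected, key=lambda t: -t[1])[:2]:
    for ancestor in lineage:
        credit[ancestor] = credit.get(ancestor, 0) + 1

print(credit)  # {'t1': 1, 'virus': 2, 't2': 1} -- the virus gets the most credit
```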
CoCreatr     Sun, Aug 29, 2010  Permanent link
Making (x) contrast and (+) brightness interchangeable? If I remember right, one implicit math rule says you first multiply and then add.

Figuring that rule out with the rest of calculus might be a test case for artificial intuition.

Neat comparison of politics.
BenRayfield     Sun, Aug 29, 2010  Permanent link
I'll explain our disagreement to show how the rules of language can change or become confusing, as an example of the kinds of problems software has to deal with when understanding text communication:

I said: [brightness plus [contrast multiplied by [the original amount of light going to that part of the screen]]]

You said: [the original amount of light coming to that part of the screen] multiplied by CONTRAST plus BRIGHTNESS

My first thoughts agreed with "one implicit math rule says you first multiply and then add", so what you wrote appeared to mean the same thing as what I wrote. But you also wrote "sequence matters", which appears to mean you rewrote the equation to change something.

That confused me, and the only ways I knew to resolve the confusion were:
(1) ask you what you meant, or
(2) proceed with the most probable thing you meant (as I understood it), which was to swap the order of multiply and plus.

(1) costs more and always gets the right answer.
(2) costs less and has a high chance (I estimated 95% in this case) of getting the right answer, and a low chance of what really happened, which has the highest cost of all.

"Cost" means writing more, taking more time, being more confusing, decreasing the quality of SpaceCollective by misinterpreting eachother, and things like that. In artificial intelligence, we call the opposite of "cost" a fitness function. Its simply what the software tries to do, its goal, and goals often include smaller goals in a weighted sum or sequence or other combination.

For most subjects, I think more like an artificial intelligence than I think like an average person. I like it that way. I estimated that choice (2) cost less than choice (1) on average. There are many gambles in life. This was 1 of the smaller ones, and I lost.
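The gamble can be written as an expected-cost comparison, using the 95% estimate above (the cost numbers are invented for illustration):

```python
# Option (1): ask what was meant -- higher fixed cost, always right.
cost_ask = 3.0

# Option (2): guess the most probable meaning -- usually cheap, but
# misinterpreting each other carries the highest cost of all.
p_right = 0.95
cost_guess_right = 1.0
cost_guess_wrong = 10.0

expected_guess = p_right * cost_guess_right + (1 - p_right) * cost_guess_wrong
print(expected_guess)             # about 1.45
print(expected_guess < cost_ask)  # True: guessing wins on average, yet can still lose
```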

Monica Anderson's Artificial Intuition software will have to play the same gambling games as I described above.

Let's see your cards: why did I lose the gamble? Why did you write "sequence matters"? I've now thought of a third possibility: that you think math equations should be written so they are calculated left-to-right, without needing the priority of multiply over plus. Is that it?
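That third possibility is easy to make concrete: the same written expression gives different answers depending on whether multiply binds tighter than plus or everything is calculated left-to-right:

```python
x, contrast, brightness = 10, 2, 5

# "brightness plus contrast multiplied by x", read two ways:
with_precedence = brightness + contrast * x    # multiply first: 5 + 20 = 25
left_to_right = (brightness + contrast) * x    # strict left-to-right: 7 * 10 = 70

print(with_precedence, left_to_right)  # 25 70
```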

I don't know enough about Monica Anderson's Artificial Intuition, but I've heard that it reads 1 book's worth of text overnight and calculates a tree of patterns that apply to patterns that apply to patterns... that apply to the text it read and the text it should respond with. There's certainly some advanced things that design can do, but I don't see how it would expand its mind enough to ask you about, and benefit from, what I wrote in the previous paragraph. Would it know what to do with the answer after asking: Why did you write "sequence matters"?

1 more gamble I lost: I thought losing with (2) had a higher "cost" than winning with (2) or any outcome of (1), but as a result of losing that gamble, I was able to explain something relevant to this thread in a more technical way than I would normally know how to explain it. Like a chess game, it's hard to know the "cost" of a path before you get deeper into it, but you can estimate enough to win most games.
CoCreatr     Mon, Aug 30, 2010  Permanent link
Wow, I am always intrigued to see how live, energetic conversation resolves some issues and evokes higher knowledge. Learning all the while. You make it look easy, Ben.

Either I lost it by having overlooked the brackets in your first equation (that's why I wrote "sequence matters") or you added them later. It does not matter now; the key point of this exercise is that it demonstrates how semantic slippage slows us down. We have all seen meetings quickly go nowhere until the partners suspected and cleared up differences in definitions or put assumptions in the spotlight. A high cost. It serves to illustrate how the bizarreness of language opens doors to talking past each other, but then may also foster brilliant cognitions.

Come to think of it, there must have been attempts at defining a language free of ambiguity and innuendo. Not that it would be humanly fun or essential, but nice to have as an option if regular communication goes haywire. Mathematics comes to mind. Emergency radio protocol.

One more cycle through the DIKW model.