    From BenRayfield's personal cargo

    Artificial Intelligence learns music
    Project: Proposal for a multimedia playground
    Artificial Intelligence learns what music is and creates instruments you play with the mouse.

    I created this Java software that evolves Java software and runs it automatically. You move the mouse and it plays what it thinks is music; by telling it "good" or "bad" over a few minutes, while you keep trying to play music with the mouse, you teach it what music is. (Download the 1 file here, double-click it, and the interactive sounds start. You can also download sample music it made after it learned what music is.)

    You can also go into the options, in the "create musical instruments" tab, and write Java code (if you know how) to define the musical instruments directly. You and the artificial intelligence (which writes Java code when you click good/bad) have unlimited control over wave amplitude (how far are the speaker cones in or out at any specific microsecond?), in stereo sound. Using the red, green, blue, mousex, mousey, left, and right variables, you can output color, output sound, and input mouse position, and combine them any way you like to define new musical instruments instantly.
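
    To make the variable model concrete, here is a minimal sketch of what an instrument defined over those variables could look like. The variable names (mousex, mousey, left, right, red, green, blue) follow the post's description, but the method signature and the pitch/volume mapping are my own assumptions, not Audivolv's real API.

    ```java
    // Hypothetical Audivolv-style instrument: mouse position in, stereo
    // speaker amplitudes and window color out. Inputs are in 0..1,
    // speaker amplitudes in -1..1, t is time in seconds.
    public class InstrumentSketch {
        static double left, right, red, green, blue;

        static void play(double mousex, double mousey, double t) {
            double freq = 200 + 800 * mousex;              // mouse x chooses pitch
            double amp  = mousey;                           // mouse y chooses volume
            left  = amp * Math.sin(2 * Math.PI * freq * t);
            right = amp * Math.sin(2 * Math.PI * freq * t * 1.01); // slight detune
            red   = mousex;                                 // color follows the mouse
            green = mousey;
            blue  = 1 - mousex;
        }

        public static void main(String[] args) {
            // 44100 samples per second, as the post describes
            for (int i = 0; i < 5; i++) {
                play(0.5, 0.8, i / 44100.0);
                System.out.printf("left=%.3f right=%.3f%n", left, right);
            }
        }
    }
    ```

    A real InstrumentFunc would be called in a tight loop by the sound card driver; this sketch just shows how a few shared variables are enough to couple mouse, sound, and color.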

    This is free and open-source software (GNU GPL 2 or 3), so if you know how to build Java software, you can build more programs with it or modify it however you like. For example, there are many thousands of other programs under the same license which you can connect to it for free, at  Maybe some of you would help advance it far enough to compete with the Guitar Hero video games and other music-based games? The advantage is that Audivolv generates music at the wave-amplitude level, 44100 times per second, while those video games mix recorded sounds in ways that don't learn. Audivolv is artificial intelligence.

    It stops improving after a few minutes of training with the good/bad buttons, but in later versions I'll make it smart enough to learn to sound like any instrument, any type of music, and sounds so unique most people cannot imagine them, controlled by the mouse as instruments of course.


    gamma     Mon, Jul 5, 2010  Permanent link
    It sounds so terrible that it could stop a democratic process in a country and cause gnus to migrate to the moon. I think that the big mistake was trying to make it intelligent.
    BenRayfield     Mon, Jul 5, 2010  Permanent link
    gamma, it's supposed to sound terrible when it starts. Actually, it's supposed to sound random, like radio static, sometimes not playing any sound at all. The point is to teach it what sound, music, and interaction with the mouse mean, not for it to know that from the moment you first start using it. That way, it's more flexible when it learns what music is.

    It's good that Human mothers do not react to their babies the way you react to this software. Babies are not intelligent, so better to give up now instead of teaching them? The software starts like a baby, not knowing the difference between mouse movements, the color of the window, and sounds. It uses the exact same code for learning mouse, color, and sound, similar to how a baby may be confused about the difference between seeing and hearing. In later versions, I'll give it more intuition to start with, but for now I need it to be flexible so we can figure out what the best starting intuition is.

    I'm creating a new form of life here, and the long-term goal is for this to be part of the first smarter-than-Human software (a Friendly AI), which should be a simple 5-megabyte file that anyone can use. You can find the other components of that, which I am planning to finish years from now, at

    Did you listen to the sample music in the "public domain music / evolved music" folder at  ?

    You could also try the example code in "Options And Help / Create Musical Instruments", which plays a smooth sound based on mouse speed, and changes the color based on how far left/right the mouse is. It does not have to be complicated.

    Maybe it would work better as a component of the PureData software for electronic music, which I read is being converted to the GPL open-source license. There are many possibilities for how to use Audivolv. Which do you think would get people interested in it?
    gamma     Tue, Jul 6, 2010  Permanent link
    It appears dysfunctional, stuck. There is no sound, no menus... even Close seems to take 20 seconds. If it is visual, it is not audible. It has to put the user first, so that people learn and listen all the time; it must be a tone generator and must exchange files in a common format.
    BenRayfield     Wed, Jul 7, 2010  Permanent link
    Probably your computer is too slow to run it. The menus are supposed to appear after you hold the mouse over the window for half a second, but if your computer is slower than 2-3 GHz (the requirements are not exact), I've often seen it not get that far. In later versions, I'll have it adjust its own speed so it runs on slower computers too.

    To everyone, my phone number is at  if you want help with any of my software.
    gamma     Wed, Jul 7, 2010  Permanent link
    Consider this strange thing. If I double-click perceptron.jar it runs fast, but if I click start.bat, which contains

    java -jar "perceptron.jar"

    it runs 5 times slower.
    BenRayfield     Wed, Jul 7, 2010  Permanent link
    The problem that's similar to what you describe is that I told the graphics to run too fast for older computers, which prevents the program from receiving mouse events, so the menus do not appear if that makes it run too slow. Are you on a Macintosh? The effect is much stronger on Macintosh because the graphics are done differently in their version of Java.

    If you want to help, you can go to "Support" in  and "create new item" in "Audivolv Bugs", and upload the AudivolvMainLog.txt file from the "Audivolving" folder it creates beside the program you double-clicked to run. (If you are on a Macintosh, that file may not be created, I think because of a difference in newline handling. I simply did not have a Macintosh to test on much.)
    gamma     Wed, Jul 7, 2010  Permanent link
    Oh Christ! I am against the Macintosh!
    Say, how about you join the Perceptron project, a video-feedback fractal generator? I am mighty and I'll vote you for the general executive. Tomorrow, I need to upload a new version with the smallest possible corrections.
    BenRayfield     Wed, Aug 18, 2010  Permanent link
    Gamma, I removed the code you copy/pasted into this thread because SpaceCollective is not for excessively technical details. I will consider joining your Perceptron project because I watched the video and want to connect it to Audivolv (my software this thread is about) in an artificial-intelligence way. Let's write summaries here and talk about technical details in private messages.

    Audivolv is mostly about interaction between mouse movements and generated sounds (44100 calculated amounts of electricity going to each speaker per second, for 44.1 kHz sound) evolving into a "multimedia playground" where mouse and speakers are the media. Add Perceptron to that and we have 3 media, all interacting through the same artificial-intelligence code. Our "playground" expands. It's about the psychology of people and artificial intelligence interacting, and I see a lot of potential in using your realtime fractal graphics as a visual psychology interface.

    My long-term plan for Audivolv is to use it to evolve new kinds of brain simulations (more advanced than neural networks). Not Human brains, but new kinds of simulated brains with their own unique abilities compared to Human brains: something to work with us instead of replace us, to fill in the thought processes we have problems with, while we do the same for my many new species of simulated brains.

    How? First get Audivolv to evolve musical instruments that people play with the mouse. Then use that exact same code to evolve new patterns of brainwaves that parts of the simulated brains play with their simulated brainwaves. Mouse left/right and up/down (changing 100 times per second) are 2 dimensions. Electricity in each speaker (changing 44100 times per second) is a dimension. It's a general artificial-intelligence system for movement in dimensions, so the same code that uses mouse and speakers as musical instruments will work for electricity waves in simulated brains.

    Neural-network software does not simulate 44100 electricity amounts per second because computer programmers think that level of detail is wasteful. My long-term goal for Audivolv is to evolve simulated brains and brainwave patterns at such a high level of detail that you can choose any simulated brain cell and listen to its brainwaves on the speakers. We can do experiments with brainwaves that sound like music, though I expect most of them will sound like electricity. My simulated brains will be so detailed that they will have an EEG, and if you connect those EEG-measured brainwave patterns to psychology software, you might figure out what the brain is thinking. Add this to our "multimedia playground" too.
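
    The "movement in dimensions" idea above can be sketched in a few lines: a channel is just a function of time, and the same code handles a channel whether it changes 100 or 44100 times per second. The interface name and signature here are illustrative assumptions, not Audivolv code.

    ```java
    // One abstraction for any dimension: mouse axis, speaker amplitude,
    // or (later) a simulated brainwave. Only the sample rate differs.
    public class DimensionDemo {
        interface Dimension { double at(double t); } // t in seconds

        // ~1 Hz mouse gesture, sampled ~100 times per second
        static Dimension mouseX  = t -> 0.5 + 0.5 * Math.sin(2 * Math.PI * t);
        // 440 Hz tone, sampled 44100 times per second
        static Dimension speaker = t -> Math.sin(2 * Math.PI * 440 * t);

        public static void main(String[] args) {
            // Identical code samples both; only the time step changes.
            double mouseSample   = mouseX.at(0.25);        // one 10 ms mouse tick
            double speakerSample = speaker.at(1.0 / 44100); // one audio sample
            System.out.println(mouseSample + " " + speakerSample);
        }
    }
    ```

    This is why the post claims the instrument-evolving code carries over to brainwave simulation: nothing in the learning loop cares which physical channel a dimension is.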

    It will be fun to evolve the simulated brains the same way we can evolve musical instruments in Audivolv today, but there is something much more interesting to do when the project advances that far: get the simulated brains to design more simulated brains. Audivolv is already Java software that creates Java software, so there are no technical problems to solve in getting the simulated brains to build more simulated brains, except the technical problem of creating the first few. When simulated brains design more simulated brains, they will learn to become smarter without limit. We should do the same.

    UPDATE: These are the plans for the next version of Audivolv and how it will evolve very simple statistical simulations of the Human user's mind, in order to evolve better musical instruments that the user will score higher:

    Audivolv uses GigaLineCompile for evolving new Java code. There are a few kinds of objects it evolves code for, including:

    * InstrumentFunc = Code that is wrapped in a jsoundcard.SoundFunc and called by JSoundCard in a loop many times per second. It receives microphone(s) amplitudes and calculates speaker(s) amplitudes.

    * MusicianFunc = Code that tries to do the same thing the Human user does. If an InstrumentFunc has 2 microphones/inputs and 3 speakers/outputs, a MusicianFunc has 3 microphones/inputs and 2 speakers/outputs, because each audio channel is paired with its opposite. A MusicianFunc hears speaker(s) and calculates microphone(s) amplitudes. The Human user acts like a MusicianFunc used through JSoundCard, but that MusicianFunc cannot be used directly, because it instead calls an InstrumentFunc in a loop controlled by sound-card timing. Normal MusicianFuncs simulate what the Human user would do, so a few possibilities can be tried at once instead of only what the Human user is doing at the time. Whichever InstrumentFuncs the simulations like best are what the Human user gets to play next.

    * MeasureFunc = Code that hears microphone(s) and speaker(s) simultaneously (both as inputs) and outputs an interpretation of them as 1 or more extra audio channels. Other code uses the outputs of MeasureFuncs to find good combinations of MusicianFuncs with InstrumentFuncs. Whichever MusicianFuncs are best are used to find the best InstrumentFuncs, and whichever InstrumentFuncs are best are used to find the best MusicianFuncs. It's a cycle that finds good MusicianFuncs and InstrumentFuncs, and also a hierarchy with the Human user as the top MusicianFunc. MeasureFuncs evolve to represent different parts of the Human user's mind. Example: a MeasureFunc that calculates the average volume of speakers/microphones together (or the opposite, since these are statistical relationships) would be useful for excluding MusicianFuncs and InstrumentFuncs that don't play any sound. The way that works: the Human user tends to score InstrumentFuncs low if they don't play sound, and those InstrumentFuncs also tend to score low on that MeasureFunc (consistently high would work too, as long as it's consistent), therefore that MeasureFunc gets a higher score for being more relevant (a positive or negative relationship) to what the Human user is thinking. MeasureFuncs can be evolved for any interpretation of the sounds (within the limits of the code-evolution system) or written manually. Example: a MeasureFunc could do a Fourier transform on the audio channels and calculate how much of a certain frequency there is. That could be useful for excluding annoying high-pitched sounds, but I expect the code-evolution system is flexible enough to learn to measure frequencies approximately.
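
    The three kinds of evolved code above can be summarized as three small interfaces. The names mirror the post; the exact signatures are guesses, and the average-volume MeasureFunc implements the post's own example of excluding silent instruments.

    ```java
    // Hypothetical signatures for the three populations of evolved code.
    interface InstrumentFunc { double[] speakers(double[] microphones); }
    interface MusicianFunc  { double[] microphones(double[] speakers); }
    interface MeasureFunc   { double measure(double[] microphones, double[] speakers); }

    public class FuncSketch {
        // The post's example MeasureFunc: average speaker volume, useful for
        // scoring down instrument/musician pairs that produce no sound.
        static MeasureFunc avgVolume = (mics, spks) -> {
            double sum = 0;
            for (double s : spks) sum += Math.abs(s);
            return spks.length == 0 ? 0 : sum / spks.length;
        };

        public static void main(String[] args) {
            double silent = avgVolume.measure(new double[]{0}, new double[]{0, 0});
            double loud   = avgVolume.measure(new double[]{0}, new double[]{0.5, -0.5});
            System.out.println(silent + " " + loud); // silent pairs score 0
        }
    }
    ```

    Note how the MusicianFunc signature is the InstrumentFunc signature with inputs and outputs swapped, matching the post's point that each audio channel is paired with its opposite.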

    It's a bipartite graph (2 groups of nodes where each edge connects a node from each group) where the 2 groups of nodes are InstrumentFuncs and MusicianFuncs. Each edge has a MeasureFunc and the score it calculated between the 2 nodes (an InstrumentFunc and a MusicianFunc).

    The Human user is the RootMusicianFunc, and the scores the user chooses have RootMeasureFunc as their MeasureFunc. Each of those scores is between RootMusicianFunc and the InstrumentFunc the user was playing when they scored it. All evolved MeasureFuncs exist to approximate RootMeasureFunc, either alone or in groups of MeasureFuncs, in a Bayesian or other statistical way.

    It would be more accurate to call it a tripartite (3-group) hypergraph instead of a bipartite (2-group) graph, where the 3 parts of each edge are a MusicianFunc, an InstrumentFunc, and a MeasureFunc, and each edge also has a score (a number in the range -1 to 1). That way, the 3 populations of evolving code can use the same algorithm. It can also be stored symmetrically in a database table with 3 columns for the 3 types and 1 (or more?) column for the score. In a symmetric way, the statistics of how any 2 of the things relate to others of the third kind can be used to choose which of the third kind to evolve together.
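
    A minimal sketch of that tripartite hypergraph as a score table, under the assumptions above (the node names and the averaging query are invented for illustration; only the 3-columns-plus-score shape comes from the post):

    ```java
    import java.util.List;

    // Each hyperedge links one MusicianFunc, one InstrumentFunc, and one
    // MeasureFunc, with a score in -1..1 -- the 3+1-column table from the post.
    public class HypergraphSketch {
        record Edge(String musician, String instrument, String measure, double score) {}

        // Symmetric query: average an instrument's scores over all edges,
        // regardless of which musician/measure produced them.
        static double avgScoreFor(List<Edge> edges, String instrument) {
            return edges.stream()
                    .filter(e -> e.instrument().equals(instrument))
                    .mapToDouble(Edge::score).average().orElse(0);
        }

        public static void main(String[] args) {
            List<Edge> edges = List.of(
                // The Human user is RootMusician; their good/bad click is RootMeasure.
                new Edge("RootMusician", "Instrument7", "RootMeasure", 0.8),
                new Edge("Musician3",    "Instrument7", "AvgVolume",   0.6),
                new Edge("Musician3",    "Instrument2", "AvgVolume",  -0.4));
            System.out.println(avgScoreFor(edges, "Instrument7"));
        }
    }
    ```

    The same query can be run on any of the 3 columns, which is the symmetry the post wants: one algorithm serves all three evolving populations.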

    The result will be a system that has goals about its goals about music, a brain made of 3 similar parts specialized in music. The MeasureFuncs will be its emotions, and the MusicianFuncs and InstrumentFuncs will be its normal neurons and mirror-neurons. From what I've read, mirror-neurons are for thinking about how others think.

    It will be a system that can learn while you play its intelligent musical instruments, or it can learn on its own without you so it sounds better when you come back. Later versions will have more evolved systems built on top of this primitive brain made of 3 parts.