Stuart Dobson (M)
Melbourne, AU

superconcepts
Social Rebirth
We all change. The future is emergent and dynamic, evolving with our minds and our society. Technology plays a fundamental part in this evolution, this evolution of complexity. So I ask, how does technology affect society? How does technology affect our minds and then society in turn? How does our economic and political system affect society and technology? What are the products of this evolution – and what are our goals?

    Could Artificial Intelligence Development Hit a Dead End?
    Kurzweil and his supporters seem unshakable in their belief that, at some point, advanced artificial general intelligence, machine sentience, human-built consciousness, or whatever you would like to call it, will happen. Much of this belief rests on the view that consciousness is an engineering problem: however complex it turns out to be, it will eventually be built.

    In this post I don't want to discuss whether consciousness can ever be understood; that is a topic for another time. What we need to be aware of is the possibility that our endeavours to create artificial intelligence will stall.

    Whatever happened to...Unified Field Theory?

    It sometimes seems that the more we learn about a subject, the more cans of worms we open, and the harder the subject becomes. Factors can present themselves that we would never have expected to be relevant to our understanding.

    Despite nearly a century of research and theorising, a unified field theory remains an open problem. There are other scientific theories we have failed to fully work out, some pursued for so long that people are losing faith in them and no longer bothering to try.

    Whatever happened to...The Space Race?

    Some problems are simply so expensive that they are beyond our reach. While this is unlikely to remain true forever, it could have a serious effect on artificial intelligence development. Exponentially increasing computing power and other technology should keep cost from being a barrier for too long, but who knows what financial, computing, and human-resource demands we will face as AI development continues.
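    The point about exponentially increasing computing power can be made concrete with a toy calculation. A minimal sketch, assuming (purely for illustration; these numbers are not from the post) that the cost of a fixed computation halves every two years, Moore's-law style:

    ```python
    import math

    def years_until_affordable(initial_cost, budget, halving_period_years=2.0):
        """Years until a computation's cost falls below the budget,
        assuming its cost halves every `halving_period_years` years."""
        if initial_cost <= budget:
            return 0.0
        # Number of halvings needed, times the length of each halving period.
        doublings = math.log2(initial_cost / budget)
        return doublings * halving_period_years

    # A hypothetical task costing $1 billion today, with a $1 million budget,
    # becomes affordable in roughly twenty years under this assumption.
    print(years_until_affordable(1e9, 1e6))
    ```

    The steep sensitivity to the halving period is the catch: if progress slows from a 2-year to a 5-year halving time, the wait more than doubles, which is exactly the kind of stall the post is worried about.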

    Whatever happened to...Nuclear Power?

    Some ideas simply lose social credibility and are no longer pursued. If we create an AI that is limited in some way, yet displays a level of danger we could not cope with were those limitations removed, development will most likely have to stop, whether through government intervention or simple social pressure.

    *

    I think it's unlikely that progress on anything can be stopped indefinitely. That would require definite failure by an infinite number of civilisations. Anyone familiar with the Fermi Paradox and the "all species are destined to wipe themselves out" hypothesis will recognise this concept: a 100% failure rate is simply not statistically plausible indefinitely when failure depends on a certain action never being performed.
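    The argument above is essentially probabilistic: if each independent civilisation has some nonzero chance of eventually succeeding, the probability that every one of them fails shrinks toward zero as their number grows. A minimal sketch (the per-civilisation success probability of 5% is an arbitrary assumption for illustration):

    ```python
    def prob_all_fail(p_success, n_civilisations):
        """Probability that all n independent attempts fail, given each
        attempt succeeds with probability p_success."""
        return (1.0 - p_success) ** n_civilisations

    # Even a small per-civilisation chance of success makes universal,
    # permanent failure vanishingly unlikely as the number of attempts grows.
    for n in (1, 10, 100, 1000):
        print(n, prob_all_fail(0.05, n))
    ```

    This is why the post distinguishes a temporary stall from an indefinite dead end: the former only needs one generation or one civilisation to fail, the latter needs all of them to.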

    However, it is certainly likely that our progress will stall at some point. Even with the accelerating nature of technology, this could cause an untold period of stagnation.

    We should try to stay positive, of course, but it would be naive to ignore the chance that, for some time at least, we might fail.

    *

    I promise my next post will be more positive!

    Tue, Sep 14, 2010
    Categories: AI
    1 comment
    Comments:


    BenRayfield     Sat, Nov 27, 2010
    AI, and lots of other things that had been advancing, has slowed over the years because society has lost most of its motivation to advance them. Instead, the motivation is to get money, control others, and play other negative-sum games. Each business, government, and person is usually so distracted by their own goals that they can't reorganise society in ways that would let each participant accomplish their own goals better. Humans did not evolve to work in a group of 7 billion. More about that at http://spacecollective.org/benrayfield/6467/Summarize-Earth

    The few people (including myself) who build artificial intelligence and advance science because they prefer a future where it exists, rather than for the reasons described in that thread, can be a million times more effective at it, because the others aren't trying to advance those things and only do so as a side effect, if at all.

    Open-source software is the best example of something created because people want it to exist rather than for the reasons described in that thread, and it is accelerating. Proprietary software remains strong but keeps getting more complex and accumulating bugs that are not always fixed, because businesses create software only to make money, not because they want the software to work.
     