    In 1949 a small book appeared on the philosophical scene, and it was to revolutionize the very way we think about thinking. The book, The Concept of Mind by the British philosopher Gilbert Ryle, essentially dealt a death blow to dualism, and more particularly to Descartes' version of it.
    Ryle, who shared Wittgenstein's approach to language, was the one who coined the phrase "the ghost in the machine" to capture what he considered the main problem with how we conceptualize the term Mind, namely that separating mind and body is a category mistake.

    To quote Ryle: “Minds are not bits of clockwork, they are just bits of not-clockwork. As thus represented, minds are not merely ghosts harnessed to machines, they are themselves just spectral machines. . . . Now the dogma of the Ghost in the Machine does just this. It maintains that there exist both bodies and minds; that there occur physical processes and mental processes; that there are mechanical causes of corporeal movements and mental causes of corporeal movements. I shall argue that these and other analogous conjunctions are absurd.”
    ― Gilbert Ryle, The Concept of Mind



    Essentially Gilbert Ryle, also known for his by now extremely famous example of the apparent non-existence of Oxford University (in which a visitor visits all the colleges and campuses and yet cannot find the university), was occupied with the ontological commitments of theorists dealing with the so-called mind-body problem; but for Ryle, as the quote above shows, such dualism in whatever form was totally absurd.

    Without getting too philosophical about it, the reason I started this essay with Gilbert Ryle is that the lessons he taught us are, I believe, as valid and relevant today when dealing with AI as they were in 1950 when dealing with the human mind.
    Just as it was important for Schopenhauer, Wittgenstein and Ryle (who are, to my mind, a kind of lineage of thinkers) to destroy the idea of dualism and assert the reality of oneness, in the sense of an action being the actual conscious awareness with no ghost behind it, so it is important for us to do the same in relation to machine consciousness.
    Moreover, the very essence of Ryle's idea concerning category mistakes should, I think, be applied to machines: just as minds are not conscious entities as such, but collections of observable behaviors and unobservable dispositions, so machines are (or will be) conscious insofar as their collection of observable behaviors and unobservable dispositions fits our criterion of conscious awareness.

    It is after all a question not of reality but of definitions.
    -

    You will notice that I titled this article the rise of artificial consciousness and not artificial intelligence.
    The main reason for this change of concept lies in my view that the debate is wrongly articulated.
    It is my view as a futurist that the full spectrum of our understanding has yet to unfold, and only when we become much clearer about the issues involved will we be able to answer the questions of machine intelligence and consciousness.
    My perspective involves neither prophet-like predictions of a time scale (for the singularity) nor sequences of emergence (of AI or AC), but a more philosophical approach and a human-centered attitude concerning our intersubjective relationship with machines.

    Historically speaking, the debate was first framed by Alan Turing in 1950, developing into what was later to be called the 'Turing Test'. Turing, being an indisputable genius, understood immediately that one cannot directly answer the question 'can machines think?' (thought being a difficult concept to define and measure), and so he proposed a different question instead (more accurately, a different frame of thought concerning the question), namely:

    "Are there imaginable digital computers which would do well in the imitation game?"

    The imitation game as defined by Turing (not the recently released film of the same name) is technically a parlor game played by three people. A man and a woman enter separate rooms, while a third person, of either gender, puts questions to them; the answers are written (or typed), and the third person must judge, based only on the answers received, which respondent is the man and which is the woman.
    In the modern standard interpretation, the woman is replaced by a computer program, and the aim of the test becomes to fool the interrogator into believing the computer is a human.
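    To make the structure of the test concrete, here is a minimal sketch of the modern interpretation. It is purely illustrative: the respondent functions, the canned chatbot reply and the judge are all invented for the example, and it does not describe any actual Turing Test contest. An interrogator receives typed answers from two hidden respondents, one human and one program, and must guess which label belongs to the machine.

```python
import random


def chatbot_reply(question: str) -> str:
    """Hypothetical stand-in for the machine respondent."""
    return "That is an interesting question. What do you think?"


def human_reply(question: str) -> str:
    """Stand-in for the human respondent, typed at the keyboard."""
    return input(f"(human) {question} > ")


def imitation_game(questions, judge) -> bool:
    """Run one round of the modern imitation game.

    The judge sees only the anonymised transcript and must return the
    label ('A' or 'B') it believes belongs to the machine.
    """
    machine_label = random.choice(["A", "B"])
    human_label = "B" if machine_label == "A" else "A"
    respondents = {machine_label: chatbot_reply, human_label: human_reply}

    transcript = {"A": [], "B": []}
    for question in questions:
        for label in ("A", "B"):
            transcript[label].append((question, respondents[label](question)))

    guess = judge(transcript)          # judged on the written answers alone
    return guess == machine_label      # True if the machine was unmasked


if __name__ == "__main__":
    caught = imitation_game(
        ["What is your favourite book?", "Describe the smell of rain."],
        judge=lambda transcript: input(f"{transcript}\nWhich is the machine, A or B? "),
    )
    print("Machine identified." if caught else "The machine passed this round.")
```

    In a real contest there are of course many rounds and many judges; the point of the sketch is only that the interrogator ever sees written answers and nothing else.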



    The whole (simplified) point, with many detractors on both sides of the argument, is to decide whether a computer can imitate a human to such a degree that another human will take it to be human as well.
    So far so good, but the questions involved in the philosophical sense are much harder to come to terms with, since these questions touch everything from consciousness to reality to neuroscience and neurobiological circuits, and probably the nature of mind itself, if such there is.

    For instance, we know today that many of the traits associated with human intelligence are widespread in the animal kingdom. Conscious awareness has been demonstrated time and again in myriad tests, from chimpanzees and parrots to octopuses and elephants and others. The problem, of course, has always been and still remains the definition of intelligence and of conscious awareness.
    The definitions are as varied and as numerous as the researchers in the field, and so far no coherent picture has emerged of an exact and testable scientific definition.
    Of course, this issue itself raises the specter of human self-description, for by defining exactly where the limits of our conscious awareness lie, we are in effect repositioning our own very special status (in our eyes, of course), and that may be the greatest psychological obstacle to accepting the emergence of an artificial consciousness.

    Meanwhile a computer program, a chatbot really, called Eugene Goostman has been said to have passed the Turing test:
    "An historic milestone in artificial intelligence set by Alan Turing - the father of modern computer science - has been achieved at an event organized by the University of Reading.
    The 65 year-old iconic Turing Test was passed for the very first time by computer programme Eugene Goostman during Turing Test 2014 held at the renowned Royal Society in London on Saturday.
    'Eugene' simulates a 13 year old boy and was developed in Saint Petersburg, Russia. The development team includes Eugene's creator Vladimir Veselov, who was born in Russia and now lives in the United States, and Ukrainian born Eugene Demchenko who now lives in Russia." (Turing Test success marks milestone in computing history- University of Reading)



    Of course many do not agree: "Did 'Eugene Goostman' Pass the Turing Test?", but that is not the point.

    A short Recap
    Before we enter the larger debate, a short recap of where we are in the mind field (not a typo, pun intended) of AI.

    Back in February 2014, in a much-publicized article in Popular Mechanics, Douglas Hofstadter (Pulitzer Prize-winning author of Gödel, Escher, Bach: An Eternal Golden Braid) said the following concerning Siri and Watson: "Watson is basically a text search algorithm connected to a database just like Google search. It doesn't understand what it's reading. In fact, read is the wrong word. It's not reading anything because it's not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there's no intelligence there. It's clever, it's impressive, but it's absolutely vacuous." (Why Watson and Siri Are Not Real AI)
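    To make concrete the kind of thing Hofstadter is describing, here is a deliberately naive sketch, invented for this essay and in no way a description of Watson's actual architecture, of "answering" by keyword overlap: it can surface a relevant sentence from a corpus while comprehending none of it. The toy corpus and the scoring rule are assumptions of the example.

```python
def keyword_answer(question: str, corpus: list[str]) -> str:
    """Return the corpus sentence sharing the most words with the question.

    Nothing here models meaning: the function only counts overlapping
    tokens, which is the sense in which retrieval can look clever while
    understanding nothing.
    """
    q_words = set(question.lower().split())
    return max(corpus, key=lambda s: len(q_words & set(s.lower().split())))


corpus = [
    "Gilbert Ryle published The Concept of Mind in 1949.",
    "The Turing Test was proposed by Alan Turing in 1950.",
    "Deep learning systems are trained on very large datasets.",
]

print(keyword_answer("Who proposed the Turing Test?", corpus))
# prints: The Turing Test was proposed by Alan Turing in 1950.
```

    The retrieval looks like an answer, but, as Hofstadter says of Watson, nothing in it is reading anything.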
    -
    What interests me particularly in this article (a highly suggested reading, incidentally) is Hofstadter's use of the term 'vacuous'. While I have no idea exactly what he means by the term in this context, I can only surmise that by calling Watson 'vacuous' he means to distinguish it from the 'there's someone in there' sense of personhood in humans. So while I agree with Hofstadter about the present vacuity of current AIs, I think the term is misapplied: it is not a filling of this emptiness that we should be looking for, but a sense-making machine. (I do not think we should expect AI to have 'someone in there', to be non-vacuous, for a while yet.)
    Nevertheless the idea is obstinate: that it (the AI) must somehow be like us.
    This is an interesting aspect of human perception and points to our deeply ingrained anthropomorphizing of everything.
    It is my view that all the fears associated with the rise of AI also stem from the same mechanism.
    Witness the plethora of articles that have come online in the last few weeks, with the media scooping up and celebrating the great thinkers who warn us again and again about Terminator-cum-Matrix-cum-HAL phenomena.

    Witness for yourself the giants of science, tech and philosophy, all busy warning us:

    Nick Bostrom:
    Our weird robot apocalypse: How paper clips could bring about the end of the world (Salon) (in which Nick Bostrom explains how a superintelligent AI could destroy the human race by producing too many paper clips)
    Or Elon Musk:
    Why Elon Musk is scared of artificial intelligence — and Terminators (Washington post)
    Or: Elon Musk's deleted message: Five years until 'dangerous' AI (CNBC)
    And finally the most celebrated British physicist Stephen Hawking:
    Stephen Hawking warns artificial intelligence could end mankind (BBC)
    ..
    So, whilst Facebook Envisions A.I. That Keeps You From Uploading Embarrassing Pics (Wired), Google buys DNNresearch, whose "DNN" stands for "deep neural networks", a contemporary approach to designing artificially intelligent systems that requires less work to "train" them (Business Insider).

    Demis Hassabis, the man behind Google's DeepMind: "DeepMind seeks to build artificial intelligence software that can learn when faced with almost any problem. This could help address some of the world's most intractable problems, says Hassabis. "AI has huge potential to be amazing for humanity," he says. "It will really accelerate progress in solving disease and all these things we're making relatively slow progress on at the moment." (Google's Intelligence Designer: the man behind a startup acquired by Google for $628 million plans to build a revolutionary new artificial intelligence) (MIT Technology Review)
    Wall Street, of course, will not stay behind: "'High Intelligence Trading' is the new frontier for technology, markets, regulation" (Thomson Reuters)

    And finally, of all those circling the wagons of the artificial intelligence train, comes the most thorough, well-researched and wide-ranging of articles, courtesy of Vanity Fair, titled: Enthusiasts and Skeptics Debate Artificial Intelligence.
    Kurt Andersen wonders: If the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?
    (Vanity Fair), a read very much worth your time.

    Notice the similarity of Mitch Kapor's view, and that of Jaron Lanier, with Douglas Hofstadter's, in these passages from the article:

    "In fact, Kapor’s belief about what makes us human—consciousness exists, and it’s not merely a curious side effect of the brain machine’s computations—does amount to a belief in a soul. The idea that consciousness is finally just an engineering problem, he thinks, “misses critically important things. I cannot say with certainty that they’re wrong. But we’ll see.”

    Lanier’s position is that even if human-equivalent intelligence is a soluble engineering problem, it’s essential that we continue to regard biological human consciousness as unique. “If you don’t believe in consciousness” as both real and the defining essence of humanity, he explained, then “you end up devaluing people.”


    There is something fascinating (and to some, something disturbing) about all the new companies busy researching and implementing deep learning, big data, iterative self-design, superintelligence and, finally, AI. Consider their names, for instance: we have DeepMind, MetaMind, Clarifai, NarrativeScience; but if you desire to be really astonished, take the time to watch the video released by Sentient Technologies (the interview is with Babak Hodjat, co-founder and chief scientist, and Nigel Duffy, chief technology officer):
    Q&A With The Scientists Of Sentient Technologies



    and read 'Startup Funded $143M to Create Sentient Computing: Siri + Watson meet a nice-guy version of Skynet?'.

    Another company that has been grabbing headlines lately is Vicarious:

    According to their 'about':
    "We are building a unified algorithmic architecture to achieve human-level intelligence in vision, language, and motor control. Currently, we are focused on visual perception problems, like recognition, segmentation, and scene parsing. We are interested in general solutions that work well across multiple sensory domains and tasks.
    Using inductive biases drawn from neuroscience, our system requires orders of magnitude less training data than traditional machine learning techniques. Our underlying framework combines advantages of deep architectures and generative probabilistic models."

    (see Vicarious)

    Meanwhile, in San Juan on January 2, the Future of Life Institute organized a conference titled:

    The Future of AI: Opportunities and Challenges

    This conference brought together the world's leading AI builders from academia and industry to engage with each other and with experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls (see this open letter and this list of research priorities, PDF).


    Due to its length this essay will be published in two parts.

    Part two soon..



    Tue, Feb 17, 2015
    Categories: AI, AC, mind, machines, consciousness
    Sent to project: Polytopia