Friday, March 2, 2012

If you like this post, check out my new blog: http://appofknowledge.blogspot.com/

"The Singularity": Is It Possible?

The singularity, also known as the technological singularity, is a hypothetical future event in which technological progress begins to accelerate at a near-vertical rate. It is usually linked to a related event called the "intelligence explosion," in which AIs improve themselves recursively and become superintelligent beings. Computing power has been growing exponentially, and so it is predicted that we will eventually reach a point where the curve turns sharply upward, a steep ascent on the graph.
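To make "growing exponentially" concrete, here is a toy calculation of my own (the two-year doubling period is the classic Moore's Law figure; the 2,300-transistor starting point is the Intel 4004 from 1971):

# Toy illustration of exponential growth in computing, assuming a
# Moore's-Law-style doubling of transistor counts every two years.
transistors = 2300  # Intel 4004, 1971
for year in range(1971, 2032, 10):
    print(f"{year}: ~{transistors:,.0f} transistors")
    transistors *= 2 ** (10 / 2)  # one decade = five doublings

Run it and the count climbs from a few thousand to tens of billions by the 2020s, which is why the curve looks "vertical" when plotted on a linear scale.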

In theory, if this growth in computing continues, we will reach "some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue," as Stanislaw Ulam put it in 1958, recalling a conversation with John von Neumann. This means that at some point, machines will come to understand things beyond us, and we will no longer be able to comprehend what is going on. The result would be an intelligence explosion, in which machines improve themselves over and over. I. J. Good described it this way in 1965:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind."

Basically, we would make machines the superior beings. Right now, there are many sites devoted to the topic, including one that started as a subtopic on Reddit and was later registered as a domain for the Gawker science blog. There, things like longevity, transhumanism, and robotics are discussed. A singularity could also bring breakthroughs in fields like nanotechnology and genetic engineering.

But would a singularity carry risks? Yes. Humans would most likely be reduced to a labor force, since human intelligence would no longer be needed. If this happens, there would probably also be a war between man and machine, much like in Terminator, with the machines building robots specialized for destruction. Or the machines could simply release a huge radiation wave and wipe all sentient life off the planet. Who knows.
Not that I'm implying anything.
The point is that, just as computers have grown more and more powerful, all other technology will keep evolving in the same fashion. And once it reaches an extreme height, everything becomes unpredictable. The economy could start doubling at an incredible rate, maybe even quarterly; doubling every quarter would mean growing sixteenfold in a single year (2^4 = 16). Also, for existentialists, there will be a slight problem. As Eliezer Yudkowsky put it, the AI will neither love us nor hate us, but we are made of atoms it could use for something else. Basically, that makes us raw material.

There is a movement, though, dedicated to making a friendly AI. The idea is that the first superintelligent AI should be a friendly one, so that it can keep later AIs from becoming a problem for mankind. The catch is that an unfriendly AI is easier to build. Still, Isaac Asimov's Three Laws of Robotics should, in principle, prevent problems from occurring (see the sketch after the list):

1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
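Just as an illustration (my own sketch, nothing from Asimov's stories), the Three Laws behave like prioritized constraints: an action is judged against the laws in order, so a lower law can never override a higher one. The boolean flags below are hypothetical stand-ins for real-world judgments nobody actually knows how to compute:

# Check a candidate action against the Three Laws in priority order
# and report the highest-priority law it would break, if any.
def first_law_broken(harms_human, disobeys_order, endangers_self):
    if harms_human:
        return 1   # First Law outranks everything
    if disobeys_order:
        return 2   # Second Law yields only to the First
    if endangers_self:
        return 3   # Third Law yields to both laws above
    return None    # the action breaks no law

# A robot is ordered into a dangerous situation: complying breaks only
# the Third Law, refusing breaks the higher-priority Second Law, so
# the robot must comply.
print(first_law_broken(False, False, True))  # complying -> 3
print(first_law_broken(False, True, False))  # refusing  -> 2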

Now, a lot of people criticize this theory, because they don't believe an oncoming singularity is possible. Some argue that since economic work can already be done by intelligence far below singularity level, and automating it away would cause massive unemployment, nobody would want to invest in an AI that would rid humanity of its work. Still, if the singularity actually occurs, we will have to be ready for anything, because nothing will be predictable from that point on.
Unpredictable is not in my vocabulary.