Topic: virus: Fwd: Singularity Summit - "What Others Have Said" - Additions sought
David Lucifer
Archon     
Posts: 2642 Reputation: 8.59

Enlighten me.
virus: Fwd: Singularity Summit - "What Others Have Said" - Additions sought
« on: 2006-01-25 16:03:44 »
Some interesting quotes here...
---------- Forwarded message ----------
From: Tyler Emerson <emerson@singinst.org>
Date: Jan 25, 2006 5:43 AM
Subject: Singularity Summit - "What Others Have Said" - Additions sought
To: volunteers@singinst.org
The Summit will have a section on "What Others Have Said" - quotes on artificial intelligence, nanotech, the singularity, or existential risks.
The following is my present list. I would welcome any additions you have. Please send them to SIAIv or emerson@singinst.org. Note that the list excludes Nick Bostrom, K. Eric Drexler, Steve Jurvetson, Ray Kurzweil, Max More, John Smart, and Eliezer Yudkowsky, since they'll be quoted elsewhere.
You're welcome to suggest a quote from someone other than those listed below.
The list below is light on critics, which needs to be corrected.
~~
"If there is a key driving force pushing towards a singularity, it's international competition for power. This ongoing struggle for power and security is why, in my view, attempts to prevent a singularity simply by international fiat are doomed. The potential capabilities of transformative technologies are simply staggering. No nation will risk falling behind its competitors, regardless of treaties or UN resolutions banning intelligent machines or molecular-scale tools. The uncontrolled global transformation these technologies may spark is, in strategic terms, far less of a threat than an opponent having a decided advantage in their development - a 'singularity gap,' if you will. The 'missile gap' that drove the early days of the nuclear arms race would pale in comparison." -Jamais Cascio, "Open the Future," 2004
"If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts." -Bill Gates, speaking at MIT, 2004
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." -Dr. Irving John (I.J.) Good, "Speculations Concerning the First Ultraintelligent Machine," 1965
"This gets us to the malevolence question. Some people assume that being intelligent is basically the same as having human mentality. They fear that intelligent machines will resent being "enslaved" because humans hate being enslaved. They fear that intelligent machines will try to take over the world because intelligent people throughout history have tried to take over the world. But these fears rest on a false analogy. They are based on a conflation of intelligence - the neocortical algorithm - with the emotional drives of the old brain - things like fear, paranoia, and desire. But intelligent machines will not have these faculties. They will not have personal ambition. They will not desire wealth, social recognition, or sensual gratification. They will not have appetites, addictions, or mood disorders. Intelligent machines will not have anything resembling human emotion unless we painstakingly design them to. The strongest applications of intelligent machines will be where the human intellect has difficulty, areas in which our senses are inadequate, or in activities we find boring. In general, these activities have little emotional content." -Jeff Hawkins, On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, 2004
"The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them. Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication. I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals." -Bill Joy, "Why the Future Doesn't Need Us," 2000
"Only a small community has concentrated on general intelligence. No one has tried to make a thinking machine and then teach it chess - or the very sophisticated oriental board game Go. [.] The bottom line is that we really haven't progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack. [.] We have got to get back to the deepest questions of AI and general intelligence and quit wasting time on little projects that don't contribute to the main goal." -Dr. Marvin Minsky, interviewed in Hal's Legacy, 1999
"It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half-century of development. Indeed, for that reason, many long-time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty.. Since 1990, the power available to individual AI and robotics programs has doubled yearly, to 30 MIPS by 1994 and 500 MIPS by 1998. Seeds long ago alleged barren are suddenly sprouting. Machines read text, recognize speech, even translate languages. Robots drive cross-country, crawl across Mars, and trundle down office corridors. In 1996 a theorem-proving program called EQP running five weeks on a 50 MIPS computer at Argonne National Laboratory found a proof of a Boolean algebra conjecture by Herbert Robbins that had eluded mathematicians for sixty years. And it is still only Spring. Wait until Summer." -Dr. Hans Moravec, "When Will Computer Hardware Match the Human Brain?" 1997
"I certainly think that humans are not the limit of evolutionary complexity. There may indeed be post-human entities, either organic or silicon-based, which can in some respects surpass what a human can do. I think it would be rather surprising if our mental capacities were matched to understanding all the keys levels of reality. The chimpanzees certainly aren't, so why should ours be either? So there may be levels that will have to await some post-human emergence." -Sir Martin Rees, interviewed in Astrobiology, 2005
"Mr. Kurzweil, I believe you have written that it is roughly 30 years between now and when we get a non-biological intelligence that surpasses human intelligence and have suggested that that occurs by reverse engineering the human brain. Since I am out of time, I am going to ask each panelist how many years they think it will take any of the branches of nanotechnology to give us an intelligence that surpasses any known human intelligence. Just shout out a number of years, and make sure it is longer than anyone will hold you to account for, because we will forget your answer in less than a decade." -Congressman Brad Sherman, "The Societal Implications of Nanotechnology" Hearing before the U.S. House Science Committee, April 9, 2003
"What are the consequences of this event [the singularity]? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities - on a still - shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work - the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct 'what if's' in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals. From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in 'a million years' (if ever) will likely happen in the next century." -Vernor Vinge, "The Coming Technological Singularity," 1993
~~
Tyler Emerson | Executive Director
Singularity Institute for Artificial Intelligence
P.O. Box 50182 | Palo Alto, CA 94303 U.S.
T-F: 866-667-2524 | emerson@singinst.org
www.singinst.org | www.singularitychallenge.com
---
To unsubscribe from the Virus list go to <http://www.lucifer.com/cgi-bin/virus-l>
MoEnzyme
Acolyte     
Posts: 2256 Reputation: 4.96

infidel lab animal
RE: virus: Fwd: Singularity Summit - "What Others Have Said" - Additions sought
« Reply #1 on: 2006-01-27 16:39:35 »
"This gets us to the malevolence question. Some people assume that being intelligent is basically the same as having human mentality. They fear that intelligent machines will resent being "enslaved" because humans hate being enslaved. They fear that intelligent machines will try to take over the world because intelligent people throughout history have tried to take over the world. But these fears rest on a false analogy. They are based on a conflation of intelligence - the neocortical algorithm - with the emotional drives of the old brain - things like fear, paranoia, and desire. But intelligent machines will not have these faculties. They will not have personal ambition. They will not desire wealth, social recognition, or sensual gratification. They will not have appetites, addictions, or mood disorders. Intelligent machines will not have anything resembling human emotion unless we painstakingly design them to. The strongest applications of intelligent machines will be where the human intellect has difficulty, areas in which our senses are inadequate, or in activities we find boring. In general, these activities have little emotional content." -Jeff Hawkins, On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, 2004
I would think that an artificial intelligence could, and probably would, develop an understanding of these basic human drives through observation and simulation. Of course, understanding them does not mean that it would likewise be driven by them. However, to whatever extent such drives confer any evolutionary advantage over the non-driven, over time it would become inevitable that some artificial intelligence would adopt such drives as its own . . . and then . . . .
So perhaps the question might better be phrased in terms of what evolutionary advantage, if any, things like the drive for control (anti-enslavement, ambition, and global domination) have beyond the animal (and hence human) brain. If such drives confer a net evolutionary advantage, then it really isn't a question of the developmental stage at which they emerged; an artificial intelligence would be no less capable of understanding and adopting them whether their origins lie in the neocortex or in an evolutionarily older portion of our brain (reptilian or otherwise). Indeed, part of the power of the neocortex is its ability to interconnect with and replicate more primitive thinking patterns for metaphorical and conceptual derivations in our "higher" culture. Some of the more powerful metaphorical systems are based on extremely primitive drives, fears, and ways of being.
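To make the selection argument concrete, here is a minimal sketch (an illustration only, not anything from the original post; the population size, the 5% reproductive edge, and the seed count are arbitrary assumptions). A fixed population of replicators carries a boolean "drive" trait, and under fitness-proportional selection even a small advantage pushes the driven fraction toward fixation - which is the sense in which the adoption of such drives becomes "inevitable":

import random

POP_SIZE = 1000          # fixed number of replicating agents per generation (assumed)
DRIVE_ADVANTAGE = 0.05   # hypothetical 5% reproductive edge for "driven" agents
GENERATIONS = 200
SEED_DRIVEN = 10         # a handful of driven agents seeded into the population

def next_generation(population):
    # Fitness-proportional selection: driven agents are slightly more likely
    # to be copied into the next generation than non-driven agents.
    weights = [1.0 + DRIVE_ADVANTAGE if driven else 1.0 for driven in population]
    return random.choices(population, weights=weights, k=POP_SIZE)

population = [True] * SEED_DRIVEN + [False] * (POP_SIZE - SEED_DRIVEN)
for gen in range(GENERATIONS + 1):
    if gen % 50 == 0:
        print(f"generation {gen:3d}: driven fraction = {sum(population) / POP_SIZE:.2%}")
    population = next_generation(population)

With these toy numbers the driven fraction typically climbs from 1% toward 100% within a couple hundred generations (though with so few seeds the trait can occasionally drift to extinction early on), which is all the argument requires: any advantage at all, compounded over generations, comes to dominate regardless of where the trait first arose.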
I will fight your gods for food, Mo Enzyme
 (consolidation of handles: Jake Sapiens; memelab; logicnazi; Loki; Every1Hz; and Shadow)