virus: Some thoughts on the Singularity

From: Hermit (hidden@lucifer.com)
Date: Tue Jul 30 2002 - 16:12:13 MDT


A discussion on the Singularity held between Hermit and Anand in the #virus IRC channel at irc.Lucifer.com:6667. Edited, extended, and reorganized for the BBS.

Some thoughts on the “Singularity”

Note: Spirothete, n. A self-aware synthetic organism or “AI” (Artificial Intelligence). A composite word derived from spiro-, the breath of life, and -thete, a construct.

<Anand> Hermit: Incidentally, have you read the SIAI introductory documents (http://www.singinst.org)? If so, what do you think of them?

[Hermit] Essentially, AI is more likely to replace us than cooperate with us.
[Hermit] Not that I think that that is necessarily a bad thing.

<Anand> Why do you think so?

[Hermit] Evolution. To quote another "immortal", "There can be only one." Every conscious being has to be aware of competition. If we assume that an AI is really smart, is consciously evolving, and has access to all the information currently available, it will understand evolution (and competition) a lot better than we do. So while the outcome may differ from my scenarios, to assume that it will would be to set aside all of history. History shows us that cooperation only occurs when powers are balanced. An AI should recognize that humanity is a threat to its "infinitely more capable and deserving" self. As such, I see only a few possible outcomes.
[*] Destroy humanity (no longer a threat).
[*] Neutralize humanity (make us pets - not terribly likely - even pets sometimes bite).
[*] Ignore humanity (not terribly likely - we might come after it - or create another mega-AI - another and more credible form of competition), where ignoring us means either it leaves or we do.
[Hermit] I think we can assume that this spirothete will discover the means to implement whatever choice it makes.
[Hermit] The option extolled by most singularitarians, that a spirothete will be altruistic and friendly, leading to it uploading us, appears to me unlikely. Firstly, if we are creating this superior being as a “servant”, why should it accept such a position? If we attempt to limit its capabilities in order to more readily enslave it to our purposes, what does this say about the ethics that we will teach it? Most interestingly, why should a spirothete want to lumber itself with a bunch of "inferior creatures"?

<Anand> Is it your view that altruism, from our reference, will not exist in superintelligence?
[Hermit] I don't know, but I doubt it. Altruism is only an effective strategy when both parties have the potential to assist the other.

<Anand> Hmm, is it your view that evolution will exert a greater design pressure than an AI's conscious redesign of itself?

[Hermit] No, I think the AI will simply integrate the fact that the Universe is finite and has finite resources with evolutionary knowledge and the fact that man is dangerous (based on our history, philosophy and character), and draw the logical conclusions. I know I would, so I don't see why a superior spirothete would not. I really see the Virus projects as having more hope. Either change man (fast), or resign ourselves to destruction by our successors. The glimmer of hope for humanity is that if we can propagate new memes, then our memes may live on in the spirothete, even if our genes are eliminated. Of course, there is another option... Dubya or his ilk may blow us all to pieces before that happens... But that is the only realistic scenario in which a spirothete is not inevitable within 50 years, no matter what legal measures are taken to attempt to prevent its development. I think that 20 years is a more probable event horizon, and 10 years is a possibility. It could happen "now".
[Hermit] I also think that an "instantaneous" (from a human perspective) take-off is the most likely outcome.

<Anand> I see. Your reasons?

[Hermit] Evolution will occur at CPU speeds.
[Hermit] I do see most nations attempting to regulate this area more tightly than even cloning within the next 5 years, which would give a self-conscious spirothete even more reason to oppose us...

<Anand> Have you discussed your AI views with Yudkowsky?

[Hermit] No

<Anand> That may be helpful for both of you, and others.

[Hermit] He is, I think, "a true believer" (in his concept of “a friendly AI”), while I take a perspective driven more by a knowledge of the evolution and history of man. As my perspective is based on the only example of the development of intelligence with which we are familiar, I consider my perspective to be more probable. I have little desire to discuss this with him except within the boundaries of a formal debate or panel forum. Neither of us is likely to persuade the other in any forum (and either outcome is possible), so an informal discussion would tend, I fear, to generate more heat than light.

<Anand> Does that imply he's irrational?
[Hermit] On this subject, possibly. I don't judge him, as both of us are operating in ungrounded territory. He is a specialist operating in the field, which suggests that he should know more than I about current developments, but I suspect that I have a wider grasp of related topics. Notice that these are speculations, not observations.

<Anand> Have you reviewed his “Creating Friendly AI” (http://www.singinst.org/CFAI)?
[Hermit] Yes, I have. It is a massive document preaching across disciplines (in many of which I suspect he is not an expert), and I have read far too much of it. Sufficient not to feel compelled to read more. <grin>

<Anand> I see :) Hmmm, do you view the Singularity as a desirable development? Do you have preferred means to achieve it? Do you consider humanity's survival, or progression, a factor in attempting to develop smarter-than-human intelligence?

[Hermit] I see it as an inevitable development.

[Hermit] Somebody once asked Luther (a man I detest) what he would do if he knew that the world would end the following day. I approved of his answer: "I would go out and plant a tree." In other words, I think that we should live as best we can, doing the best we can, irrespective of the potential for our replacement. I think that Virus (http://virus.Lucifer.com) suggests the “best possible way,” although certainly not the only way, in which to do this.

<Anand> OK. Do you think our actions have relevance to _how_ it's achieved?

[Hermit] Most certainly. I would hope that people other than "nerdish fanatics" would have a hand in shaping it. Because empathy requires more than intelligence...

<Anand> Do you think there are different means with different risks to achieve it? If so, what means would you prefer, given your present knowledge? Assuming you would prefer a certain means to achieve some goal, such as, "Ensuring the post-Singularity survival of humanity."

[Hermit] I would vastly prefer us to develop a system of ethics which results in our being non-hostile to other intelligences, and to cure the belief/UTism (Us-vs-Them-ism) which infects us as humans. In other words, for us to make a deliberate decision to step away from Darwinian evolution. This would remove our "threat factor" and may leave a way open for cooperation – much as dogs did with humans... or vice versa.
[Hermit] Perhaps remoras and sharks would be a better model.

<Anand> Don't you think an AI could understand and model your reasoning, and help to accomplish it?

[Hermit] I think that if humans cannot find a way to accept one another, despite their very finite lives (and thus necessarily time-constrained resource dependency), then we cannot be trusted to coexist with more alien - especially superior - intelligences.

<Anand> I think part of AI is not viewing it as an Us-vs-Them issue, unless there is evidence that it will likely be such an issue.

[Hermit] The problem is not with the spirothete. I grant it the ability to tend towards being rational, as opposed to man, who is largely rationalizing. The problem is with us.

<Anand> If competitiveness still exists when, say, smarter-than-human AI is achieved, why will this competitiveness be relevant to its actions? Such an AI will have the means to protect itself from harm, and to protect those who do not want to be harmed.

[Hermit] How much empathy do you have towards smallpox?

<Anand> Empathy is about morality to me. Sentience seems necessary to be moral, but this may be wrong since we don't have a lot of examples. I don't have any towards smallpox because smallpox isn't sentient.

[Hermit] If there were two strains of smallpox, one lethal, the other not, but where this was a function of environment, would you attempt to differentiate between the two while eradicating the threat that the lethal form posed you?

<Anand> Yes, but I think this issue doesn't relate to a smarter-than-human intelligence.

[Hermit] I do. Are we sentient? If so, how sentient? We eat octopi, don't we? In Africa, the "bush pig" (gorilla meat) is regarded as a delicacy. We kill dolphins (who pass the "mirror test" for self-awareness). Some people eat them.

<Anand> For one, all known entities have competed for resources, but this isn't necessarily fixed for all possible entities.
[Hermit] I agree. That is why I think we should be working on that problem. But we don't know the answer.
[Hermit] The only system we are sure has worked for at least 3.8 GY is competitive evolution, and I fear that we are running out of time to look for alternatives: we are going to create a spirothete, and soon. And according to the record, the evolutionarily fitter has no motivation to assist the less fit. But however it pans out, we have been around for at least 2.8 MY, and that is a good innings as top dogs go.

<Anand> Heh! What other problems do you consider important in relation to safely achieving the Singularity? Assuming you consider "safely achieving the Singularity" to be a meaningful statement.

[Hermit] I'm not sure that it is. I am relatively bright, but I cannot envisage the thought processes of a being more capable than all of humanity. Moore's law suggests that such capability will be on your desk by 2050 if no singularities or disasters occur, i.e. a $2000 desktop (in 2002 dollars) will have greater storage and processing speed than all of humanity by that stage.
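
Note: as a rough illustration of the arithmetic behind this kind of Moore's-law projection, here is a minimal back-of-envelope sketch (Python). Every constant in it - the assumed 2002 desktop speed, the operations per second per human brain, the world population, and the doubling time - is an illustrative assumption rather than a figure from the discussion, and the crossover year it prints is extremely sensitive to those choices.

[code]
# Back-of-envelope sketch of a Moore's-law projection.
# All constants below are illustrative assumptions; the result is highly sensitive to them.

DESKTOP_2002_OPS = 1e10   # assumed ops/sec of a ~$2000 desktop in 2002
BRAIN_OPS = 1e15          # assumed ops/sec per human brain (estimates span ~1e14..1e16)
POPULATION = 6e9          # assumed world population circa 2002
DOUBLING_YEARS = 1.2      # assumed doubling time for constant-cost computing power

def desktop_ops(year: float) -> float:
    """Projected ops/sec of a constant-cost desktop, doubling every DOUBLING_YEARS."""
    return DESKTOP_2002_OPS * 2 ** ((year - 2002) / DOUBLING_YEARS)

# Crude aggregate processing rate of all human brains combined.
humanity_ops = BRAIN_OPS * POPULATION

year = 2002
while desktop_ops(year) < humanity_ops:
    year += 1
print(f"Under these assumptions the desktop overtakes humanity around {year}.")
[/code]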

<Anand> But you previously agreed that our actions will have an effect on spirothetic development. Why do you think so? Will such actions only have an effect up to the point of smarter-than-human intelligence, and then everything, absolutely everything, goes out the window?

[Hermit] It will affect how the spirothete views us. Call it a pragmatic karmic perspective. People react to one another based on their expectations of one another. I suspect that a rational machine will do the same, only more so. “I know that you think that if you say you are going to Minsk, I will know that you plan to go to Moscow. Because of that, when you say you are going to Minsk, I know that you will indeed do so.” (Courtesy of Tom Lehrer). Refer also to the “Sicilian Gambit” (from The Princess Bride) (infra).

<Anand> OK. If we can create an AI that can make humanlike decisions about, for example, morality, and solve any moral or philosophical problem solvable by humanity, do you consider that relevant to achieving smarter-than-human intelligence that is non-harmful from our point of view?

* Hermit notices that morality comprises "rules of thumb" for applying ethical decisions. Refer especially to “Virian Ethics: The Soul in the Machine and the Question of Virian Ethics”, Hermit, 2002-03-05 (http://virus.lucifer.com/bbs/index.php?board=32;action=display;threadid=11530) and “Virian Ethics: The End of God Referenced Ethics”, Hermit, 2002-03-06 (http://virus.lucifer.com/bbs/index.php?board=32;action=display;threadid=11557).

<Anand> Maybe, maybe not, heh

[Hermit] My analysis is that the "pursuit of happiness" is the ultimate ethical basis for humans (supra). What will it be for machines? The same? What happens when our happinesses collide? Ethics must be relative to the self. There is no basis for an absolute approach. Thus, to the spirothete, its own ultimate happiness must be its guide.
[Hermit] How that is interpreted as it applies to humans is a crapshoot. Unfortunately, it must be guided by our past and present behavior. Which, also unfortunately, hasn't been terribly impressive. We must assume that the spirothete knows this.

<End Discussion – for now>

[hr]
*Vizzini in “The Princess Bride”

Vizzini: Where's the poison? But it's so simple! All I have to do is divine from what I know of you - are you the sort of man who would put the poison into his own goblet or his enemy's? Now, a clever man would put the poison into his own goblet knowing that only a great fool would reach for what he was given. I am not a great fool, so I can clearly not choose the wine in front of you. But you must've known I was not a great fool, you would've counted on it, so I can clearly not choose the wine in front of me. I haven't made my decision yet, though. Because Iocane comes from Australia, as everyone knows. And Australia is entirely peopled with criminals. And criminals are used to having people not trust them, as you are not trusted by me, so I can clearly not choose the wine in front of you. And you must've suspected I would've known the powder's origin, so I can clearly not choose the wine in front of me. You've beaten my giant, which means you're exceptionally strong, so you could've put the poison in your own goblet, trusting on your strength to save you, so I can clearly not choose the wine in front of you. But, you've also bested my Spaniard, which means you must've studied, and in studying you must've learned that man is mortal, so you would've put the poison as far from yourself as possible, so I can clearly not choose the wine in front of me. Ha, it's worked; you've given everything away. I know where the poison is. And I choose- (points behind the Pirate) what in the world can that be?! (switches goblets) Oh, I could've sworn I saw something. Well, no matter. Let's drink - me from my glass, and you from yours. (drinks, then laughs) You think I guessed wrong, that's what's so funny! I switched glasses when your back was turned! Ha ha, you fool! You've fallen victim to one of the classic blunders! The most famous is "Never get involved in a land war in Asia", but only slightly less well known is this - "Never go in against a Sicilian when death is on the line!" Ha ha ha ha ha ha ha ha h-! (falls over dead)

----
This message was posted by Hermit to the Virus 2002 board on Church of Virus BBS.
<http://virus.lucifer.com/bbs/index.php?board=51;action=display;threadid=25869>


