If I had to gripe, I'd suggest that while, as Eliezer observes in his commentary, "Rilanya" is overcomplex in her capabilities and reactions for a current AI, "Janey" is, I think, much too simplistically drawn to be a "real" woman. Indeed, my feeling was that "Rilanya" is much closer to being "real" than "Janey", who came across (to me) as an artifact established primarily for the convenience of the author's conclusions. The fact that "Rilanya" appears more ethically concerned than "Janey" only added to "her" likeability and "realism" from my perspective.
As a further comment (rather than a gripe), in his "Author's Afterword" I don't think Eliezer takes sufficient cognisance of the fact that people have only one mind, and that it spends an awful lot of time telling us that it is working correctly, no matter how flawed others might perceive it to be. This is attributable to the fact that while, so far as we know, the mind has no mechanisms to differentiate between internally generated and externally invoked states (outside, perhaps, of occasional timing cues), it does apparently have a vast amount of circuitry dedicated to reassuring us that it is functioning "rationally", and with this comes the ability to edit, supplement and elide "perceived reality" to match the user's paradigms and preconceptions. We (and I include Eliezer in this over any potential protests to the contrary) are largely rationalizing, not rational animals - no matter how much we would like to think otherwise. Which implies that outside of validation through communication of shared experience, any assertion that some mechanism can be used to auto-validate our beliefs about "reality" is wishful thinking at best - and always irrational.
Kind Regards
Hermit
PS And if you see this Eliezer, thanks for a much better than average story.
With or without religion, you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion. - Steven Weinberg, 1999
----- Original Message -----
From: "Hermit" <hidden@lucifer.com>
Sent: Saturday, April 26, 2003 7:15 PM
> preconceptions. We (and I include Eliezer in this over any potential
> protests to the contrary) are largely rationalizing, not rational animals -
> no matter how much we would like to think otherwise. Which implies that
I agree and will take this opportunity to once again plug Steven Pinker's The Blank Slate for a fascinating investigation into the innate structure of the human mind.
> outside of validation through communication of shared experience, any
> assertion that some mechanism can be used to auto-validate our beliefs about
> "reality" is wishful thinking at best - and always irrational.
I don't think this is true because auto-validation is all that is possible. Beliefs that you have about "shared experience" are not privileged in any way, and cannot be used to ground other beliefs.
[Hermit 1] We (and I include Eliezer in this over any potential protests to the contrary) are largely rationalizing, not rational animals - no matter how much we would like to think otherwise. Which implies that ...
[Lucifer 2] I agree and will take this opportunity to once again plug Steven Pinker's The Blank Slate for a fascinating investigation into the innate structure of the human mind.
[Hermit 1] outside of validation through communication of shared experience, any assertion that some mechanism can be used to auto-validate our beliefs about "reality" is wishful thinking at best - and always irrational.
[Lucifer 2] I don't think this is true because auto-validation is all that is possible. Beliefs that you have about "shared experience" are not privileged in any way, and cannot be used to ground other beliefs.
[Hermit 3] My bad phrasing perhaps. Validation through "communication of shared experience" is not "auto-validation". I raised this caveat in passing in an attempt to preclude an argument along the following line: while the degree to which "external validation" is regarded as reliable ("believable") may vary considerably (based on, e.g., environment, sources of validation, perceived likelihood of occurrence, and perceived contradictions (internal, external, paradigm)), the "validation" does not occur until we have processed and accepted the meaning of the signals conveying the communication - and thus "auto-validation" is occurring in accepting or rejecting the communication.
[Hermit 3] The reason for attempting to preclude this argument is that, so long as the subject is not completely rejecting the opinions of others (who would presumably comment on a perceived dysfunctional delusional state), the mere fact that an experience can be communicated - and such a communication may involve an appropriate protocol (cf. example infra) to validate that some communicable experience has been shared, even if it is a shared misconception - tells us that "something" happened. This seems implicit in the validation of a shared experience through communication, and thus while "auto-validation" is admittedly inherent to any interpretation of significance, auto-validation is not critical to the "something happened" aspect of this process.
[Hermit 3] For example, you and I imagine we see a "pixie" at the bottom of the garden. Lady Z is sitting with us, but not looking in the same direction, and does not share our observation of the phenomenon. An appropriate protocol would be for me, without reference to you, and in such a way that you do not hear what I say, to describe what I think I saw to her. You then do the same. Lady Z then agrees (or disagrees) that our communicated perceptions were congruent. If Lady Z trusts us not to have colluded in an attempt to delude her, and does in fact find our descriptions sufficiently similar as to have described a "shared experience", she is then capable of validating - to both of us - that we did indeed share an experience (although she cannot validate that it was a "pixie", but only (possibly) that the experience we described might match her understanding of "pixie"). At which point, rejection of the fact that "something occurred" would be irrational, as reason would tell us that some externally validated "shared experience" had occurred and had been validated by communication. We could then reject the need for Lithium treatment and, singly or jointly, attempt to reconcile our unlikely observation with our models of reality. The degree of success (or failure) experienced in this process would not affect the validity/reality of the observation itself.
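[Hermit 3] The protocol above can even be sketched mechanically. Here is a minimal Python sketch, assuming a crude string-similarity measure stands in for Lady Z's judgment of congruence - the function name, the sample reports and the 0.6 threshold are all illustrative assumptions, not part of the protocol itself:

```python
import difflib

def independent_reports_agree(report_a: str, report_b: str,
                              threshold: float = 0.6) -> bool:
    """Decide whether two independently collected descriptions are
    congruent enough to count as one 'shared experience'.

    The judge (Lady Z) receives each report privately, so agreement
    cannot come from collusion. The similarity metric and the 0.6
    threshold are illustrative stand-ins for her judgment.
    """
    ratio = difflib.SequenceMatcher(
        None, report_a.lower(), report_b.lower()
    ).ratio()
    return ratio >= threshold

# Each observer describes the phenomenon privately to the judge.
report_from_me = "a small glowing winged figure by the rose bush"
report_from_you = "a tiny glowing winged figure near the rose bush"

# The judge validates only that *something* congruent was experienced -
# not that the something was, in fact, a pixie.
print(independent_reports_agree(report_from_me, report_from_you))  # prints True
```

Note that, as in the prose version, a True result validates only the "something happened" claim: congruent reports from non-colluding observers, not the interpretation placed upon them.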