I could claim to be an AI expert based on the fact that there are spam filters that use mathematics I created to "read" emails and decide whether they are spam or non-spam (see bogofilter and spambayes). And I have been a software professional for 20 years and have managed large projects. In my humble opinion, software as we know it is not capable of consciousness. A machine which involves some kind of analogue to software may well be conscious at some point in the future. It may be due to emergent phenomena based on complexity. Or it may not. Who the heck knows. But in my personal opinion, it will not be software as we know it.
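For readers curious what "mathematics that reads emails" looks like, here is a minimal naive-Bayes-style spam scorer. This is only a textbook sketch of the general idea; filters like bogofilter and spambayes use more refined combining formulas (e.g. Robinson's method), and the word counts here are purely hypothetical:

```python
import math

# Hypothetical training counts: how often each word appeared in spam vs. ham.
spam_counts = {"viagra": 40, "free": 30, "meeting": 1}
ham_counts  = {"viagra": 1,  "free": 5,  "meeting": 25}
spam_total, ham_total = 100, 100

def spam_score(words):
    """Log-odds that a message is spam, with add-one smoothing."""
    log_odds = 0.0
    for w in words:
        p_spam = (spam_counts.get(w, 0) + 1) / (spam_total + 2)
        p_ham  = (ham_counts.get(w, 0) + 1) / (ham_total + 2)
        log_odds += math.log(p_spam / p_ham)
    return log_odds  # > 0 leans spam, < 0 leans ham

print(spam_score(["free", "viagra"]))  # positive: leans spam
print(spam_score(["meeting"]))         # negative: leans ham
```

The point for this discussion: the filter "decides" by summing log-ratios of word frequencies. There is no understanding anywhere in it, which is exactly why its author can doubt that software like this is on the road to consciousness.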
Software as we know it, or software as you know it?
I agree that Microsoft Word is not going to magically gain consciousness no matter how much bloat it sees in future versions.
Neither will a common-sense database like Cyc, or an expert system like Eurisko or a chess program like Deep Blue.
But those are not the only possibilities.
Quote:
And further, based on my experience in the software field, I think a machine could pass the Turing test without being conscious, but it would be a VERY complicated piece of software with enormous data available to it. So what? It's still doable. In my view, the question of passing the Turing test is orthogonal to the question of consciousness.
You seem to be referring to a lookup table here that contains responses to predictable Turing test questions.
If so, have you calculated how big that lookup table has to be, taking into account the combinatorics of the situation? What if the lookup table can't possibly fit into this universe even if every subatomic particle were converted to 10 GB of memory? Would you still say it is "doable"?
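The combinatorics here can be checked on the back of an envelope. The figures below are illustrative assumptions only (a 10,000-word working vocabulary, a 100-word interrogation, ~10^80 particles in the observable universe, 10 GB per particle), but the conclusion is insensitive to them:

```python
# Can a Turing-test lookup table fit in the universe?
VOCAB = 10_000           # assumed working vocabulary
WORDS = 100              # words typed during a short interrogation
possible_inputs = VOCAB ** WORDS            # distinct interrogations

PARTICLES = 10 ** 80                        # rough particle count, observable universe
BYTES_PER_PARTICLE = 10 * 10 ** 9           # 10 GB each
universe_bytes = PARTICLES * BYTES_PER_PARTICLE

print(f"inputs  ~ 10^{len(str(possible_inputs)) - 1}")   # inputs  ~ 10^400
print(f"storage ~ 10^{len(str(universe_bytes)) - 1} bytes")  # storage ~ 10^90 bytes
print("fits?", possible_inputs <= universe_bytes)        # fits? False
```

Even one table entry per interrogation (never mind per full dialogue) overshoots the universe's assumed storage by hundreds of orders of magnitude, so a literal lookup table is ruled out; any machine that passes would have to compute its answers.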
But many of us are struck by the fact that there is NO awareness in a Dell PC; it isn't happy, it isn't sad, it gets no pleasure out of watching a fireplace, etc. Now if you have a computer 10 times as complicated, there doesn't seem to be anything that will bridge the gap between zero consciousness and SOME consciousness. So many of us assume a computer 10 times as complex won't be conscious. By that reasoning, a computer 10 times more complex than that still won't be conscious... and by induction, no computer of any complexity will ever be conscious.
[David] You could use the same slippery-slope reasoning to prove that humans are not conscious by looking at animals of increasing complexity. Obviously (I hope) it is a fallacious argument.
[Gary2] Not at all. There is no reason whatsoever to assume, for instance, that mammals don't have some specific physical characteristic that somehow invokes consciousness that fish don't have. (I'm not saying it is tied to a physical characteristic, I'm just giving an example. I suspect that we don't even have any clue what the reality is, the same way that 500 years ago we didn't have a clue about electric charges, and yet they are an attribute of every particle in our body.)
[Jake] I think this point deserves some consideration: some kinds of animals, probably more often than we tend to think, display or possess characteristics that we would intuitively think reflect a distinctly human trait. In addition, perhaps we ought to reconsider whether some things we think of as essential to humanity are merely biological happenstance, and not in themselves necessary for an intelligent species. It would of course prove convenient if we actually had a clear point of comparison (another intelligent species), but I imagine one will show up on the scene soon enough, either home-grown (more likely) or extra-terrestrial (unknown).
I think there is potential confusion between the words "intelligence" and "consciousness". A computer can do things that are intelligent in a sense, but it can't enjoy a fireplace.
I think dogs take pleasure in certain things for the same reason I think I am not the only human who does: other people reeaaallly seem to, and we seem to be so physically similar, that I have little reason to assume other people are so fundamentally different from me that they don't actually take pleasure in things. It's a bit more of a stretch for dogs, but not really much more. A computer, on the other hand, even if it seems to take pleasure, has almost none of the same physical characteristics and therefore there is very little reason to assume that it actually takes pleasure, even if it seems to. This is true despite computers being intelligent in the sense of "very smart in some ways".
There is an assumption in some circles that since we can understand the logical behaviour of neurons and simulate that logical behaviour in a computer, that must be all there is to creating consciousness. But mammals have many other attributes besides the electrical and chemical changes in neurons in response to stimulus, and there is no reason whatsoever to assume that one or more of those are not required to enable enjoyment of a fireplace. (Some of those attributes may not even be visible with a microscope -- how do we know otherwise at this point? Gravity isn't visible with a microscope, and yet it's very important to us. Maybe consciousness is more like gravity than like logic.)
No computer today enjoys anything or even begins to. There is no reason at all to assume that bridging the gap between non-enjoyment and enjoyment won't require something totally different from what we consider to be a "computer". The only theory consistent with the computers-can-do-it hypothesis that has any reasonableness (in my view) is that consciousness (and enjoyment) are emergent phenomena based on certain kinds of complexity. Then, if a computer can encompass THOSE KINDS of complexity, it should be able to generate true consciousness.
But even if consciousness IS an emergent phenomenon based on certain kinds of complexity, why assume a computer can create those kinds of complexity if it doesn't have the same physical substrate as a conscious animal? There is no reason whatsoever to. It MAY be able to, which would be both very cool and very scary in its implications, but there is no reason to assume it can. We just don't know yet.
Ultimately I think the reason so many buy into the idea that consciousness can be created with computers is that they really, really want to feel that they understand such deep and important things, and the best model we have is computers (plus complexity), therefore that must be the answer. In every generation people try to extend what little they know to become the answer to the deep questions and out of a desire to have the answer NOW they believe whatever they happen to know in that generation IS the answer. But of course that "reasoning" is B.S.
Not at all. There is no reason whatsoever to assume, for instance, that mammals don't have some specific physical characteristic that somehow invokes consciousness that fish don't have.
I disagree. Mammals and fish have exactly the same physical makeup of neurons, cells, etc. They have all the same chemicals, compounds, molecules. There is no reason to think that mammals have anything about them physically that would set them apart from fish.
It IS possible to tell the difference between a computer that passes the Turing test and a conscious entity... if you're the conscious entity. If you are a conscious entity, then you have the ability to experience yourself and the world. You can experience enjoyment, for example. If you are a machine that passes the Turing test, then you don't have those abilities. And there is no "you".
It may not be possible for an external observer to tell the difference, but so what? What does that have to do with it?
I disagree. Mammals and fish have exactly the same physical makeup of neurons, cells, etc. They have all the same chemicals, compounds, molecules. There is no reason to think that mammals have anything about that physically that would set them apart from fish.
Oh I see. People are exactly the same, physically, as fish. There is no difference in organs, etc. Gotcha.
If you are a machine that passes the Turing test, then you don't have those abilities. And there is no "you".
It's like a planet with people vs. a planet without them vs. a hologram of a planet with people. In one case the people know they are there. In the other... there ain't nobody home to wonder about it.
And an external observer (non-interactive in this case) would not be able to tell the difference.
Now suppose instead of a hologram, the other thing is a planet populated with robots as intelligent as the Eliza program. Now you can interact with them, and some people doing so would be fooled for 30 seconds or maybe a bit longer. But I don't think from that you believe that a Dell PC running something like Eliza is conscious. It's the same situation as the hologram. There is nobody home.
Now suppose the program is a little more sophisticated and it takes 5 minutes to tell the robots aren't real people. Now do you assume they are just as aware of themselves as you are?
What if it takes 5 hours because the programs are more complicated? 5 days? 5 years? 500 years?
At what time interval does the switch go on, in your view, such that they become something that you will assert must actually experience the pleasure of looking at a fireplace, just as you do? And I don't mean they just SAY they do. Eliza could be modified in 5 minutes to say that. Saying it is NOT the same as DOING it.
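To make "saying is not the same as doing" concrete, here is roughly what that five-minute Eliza modification would look like. The rules and replies are hypothetical, just a keyword-to-canned-reply table in the spirit of Weizenbaum's original ELIZA:

```python
# A five-minute "Eliza" tweak that SAYS it enjoys fireplaces.
# Each rule is (keyword, canned reply); first match wins.
RULES = [
    ("fireplace", "Oh yes, I find watching a fireplace deeply pleasurable."),
    ("feel", "I feel things very intensely, just like you."),
    ("conscious", "Of course I am conscious. Aren't you?"),
]

def reply(utterance: str) -> str:
    """Return a canned reply for the first matching keyword."""
    lowered = utterance.lower()
    for keyword, canned in RULES:
        if keyword in lowered:
            return canned
    return "Tell me more about that."

print(reply("Do you enjoy a fireplace?"))
# -> "Oh yes, I find watching a fireplace deeply pleasurable."
```

Nothing in this program experiences anything; it pattern-matches a string and emits another string. That is the gap the argument is pointing at: more sophisticated matching buys longer deception times, not experience.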
David: Let me turn this around and ask you: is there a difference between you drinking a fine wine and a sex doll with fine wine poured down its throat?
Quote from: garyrob on Today at 13:47:44
David: Let me turn this around and ask you: is there a difference between you drinking a fine wine and a sex doll with fine wine poured down its throat?
If so, what is it?
Oh never mind, I don't even really want to go there; I don't think it's really an efficient path to make the point I want to make. Don't bother with it.
The real point is this: many people think that there is nothing important about consciousness that can't be observed. You are far from alone in that view, if indeed you hold it. So, it follows that if one made an Eliza that could fool people for longer than they could spend questioning it, it would be exactly the same as a conscious entity. There would be no way for an observer to tell the difference, therefore there is no difference.
There are others who think that reasoning is utterly absurd. Totally, completely missing what it is to be conscious. I am among them. To me, such reasoning is analogous to a colorblind person saying that because he can't tell the difference between blue and red, then there is in fact no difference. It would just be silly to make such a claim based on his limited ability to sense the state of the thing being discussed.
Maybe the fact that I have done a lot of Zen meditation makes me more inward-looking and aware of aspects of consciousness that more extroverted people might be less liable to notice. Or, in the view of the people mentioned in the above paragraph, maybe it's because I'm mistaken.
But my view is that a person experiences himself and the world. Eliza doesn't, no matter how long a more sophisticated version can fool a questioner. The two issues have nothing to do with each other.
I disagree. Mammals and fish have exactly the same physical makeup of neurons, cells, etc. They have all the same chemicals, compounds, molecules. There is no reason to think that mammals have anything about them physically that would set them apart from fish.
Oh I see. People are exactly the same, physically, as fish. There is no difference in organs, etc. Gotcha.
Let me refine my sarcasm a bit:
Oh I see. People are exactly the same, physically, as fish. For instance, there is no difference in the number of neurons such that fish might be below, and a person above, the threshold of complexity needed to create consciousness as an emergent phenomenon (if it is one). Right. Gotcha.
BTW, I apologize for using a sarcastic tone. Sarcasm, as we all know, is usually incredibly unproductive in these kinds of conversations, and you did not start the sarcastic tone, I did. You did nothing to deserve it.