"the great tinkerer" <gr8tinkerer@hotmail.com> wrote:
> we obviously don't remember every stimulus, we do have a
> selective memory, otherwise we'd overflow our mind's capacity. should
> an artificially intelligent computer have a selective memory?
Well, I think the answer is an obvious "yes" -- as with humans, there is
certainly no reason to remember everything that we sense or learn; one of
the real tricks to intelligence is knowing just WHAT we should forget! (my
own memory of my childhood is very fragmentary, but the pieces I do have
make a heck of a lot more sense now than they did when I experienced
them...)
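To make "selective memory" concrete, here is a rough, made-up sketch (in
Python; the capacity and the salience scores are placeholders, not anyone's
actual design) of a memory that only has room for so much and forgets the
least salient stimuli first:

    import heapq

    class SelectiveMemory:
        """Keep at most `capacity` memories; forget the least salient first."""

        def __init__(self, capacity):
            self.capacity = capacity
            self._heap = []                 # min-heap of (salience, stimulus)

        def remember(self, stimulus, salience):
            heapq.heappush(self._heap, (salience, stimulus))
            if len(self._heap) > self.capacity:
                heapq.heappop(self._heap)   # forget the least salient memory

        def recall(self):
            return [s for _, s in sorted(self._heap, reverse=True)]

    mem = SelectiveMemory(capacity=3)
    for stimulus, salience in [("hum of the fan", 0.1), ("fire alarm", 0.9),
                               ("a phone number", 0.6), ("passing car", 0.2)]:
        mem.remember(stimulus, salience)

    print(mem.recall())   # the dull hum has already been forgotten

The interesting research question, of course, is where the salience scores
come from in the first place -- that's exactly the part this sketch
hand-waves.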
> also, "certain forms" of
> bacteria dont have sensory such as vision and hearing
> and smelling to react to.
Yes, and certain forms of humans don't have senses such as ultraviolet
vision or bat-like radar... Another key trick in AI research will be to
decide what types of senses our creations should have. In my opinion,
vision is overrated -- I suspect that it will be easier to design a
radar-like system with the same capabilities.
> yes but do we run out of memory? and how many megabytes
> of data do you think can fit in the human brain?
Billions? I don't really know. I do know, however, that I haven't yet
memorized the contents of my computer's encyclopedia... I suspect that there
is already sufficient "memory" available for a great AI; it's just that we
are lacking the software which would translate the declarative *knowledge*
which they can store into procedural *wisdom*, which they could USE!
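By way of illustration only (the facts and the rule format here are invented,
not any real system): storing declarative facts is trivial; the missing piece
is a procedure that actually chains them into something usable, even one as
crude as this little forward-chaining loop:

    # Declarative knowledge: facts that just sit there.
    facts = {"socrates is a man"}

    # Rules, still declarative on their own: (if this fact, then that fact).
    rules = [("socrates is a man", "socrates is mortal"),
             ("socrates is mortal", "socrates will die someday")]

    def infer(facts, rules):
        """The procedural part: keep applying rules until nothing new follows."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for condition, conclusion in rules:
                if condition in known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return known

    print(infer(facts, rules))
    # Without infer(), the facts are stored but never *used*.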
> a computer would need to upgrade its memory to fit more
> knowledge, otherwise it would need to select which things
> to "forget" -- and do we really want artificial intelligence
> to need to forget things?
Sure. See above.
> babies don't know a language when they are born.
I disagree. It may be a primitive, emotional language, but even babies can
get their needs across pretty well. (heck, even cats can do that!)
> the problem with programming artificial intelligence is that we
> need to be minimalists; we shouldn't program in knowledge, we should
> program "intelligence" and then let the machine do the learning.
A tall order, but definitely thinking in the right direction.
Nathan Russell <frussell@frontiernet.net> asks:
> Are you implying that earthworms or sponges - both of which
> can regenerate very well - are superior to us?
No, I was just pointing out the fact that regeneration (or extension) is
not a necessary component of intelligence; e.g. computers don't have to be
able to install a new hard-drive in themselves to qualify as intelligent.
> Wouldn't self-reference equate to self-consciousness?
Errr... I think self-reference is the definition of consciousness (Funk and
Wagnalls: "The state of being conscious; awareness of oneself and one's
surroundings"). In other words, I think "self-consciousness" is redundant.
> My point being, if the use of a symbol for self is not
> enough for consciousness, then what is?
Well, I agree that the use of a symbol for self is enough for consciousness
-- but only on the understanding that the ability to use such a symbol
*implies* a significant amount about the system in question (having to do
with symbol-manipulation ability, the capacity to understand the meaning
(isomorphism to reality[1]) of the "self" (and "other") symbols, etc.).
[1] Could we possibly start up a discussion here on Hofstadter's definition
of meaning -- namely that meaning emerges because of an isomorphism? Is
this concept related to David's meaning=effect (which itself illustrates
the meaning of "meaning" with an isomorphism)? Hofstadter's main examples
are mathematical formal systems, but I've thought about it, and the most
concrete examples certainly always work (I can always identify the
isomorphism).
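For a concrete handle on it: Hofstadter's own toy example is the pq-system,
where a string like --p---q----- is a theorem exactly when the hyphen counts
add up, so "p" comes to mean "plus" and "q" to mean "equals" purely through
the isomorphism with addition. A little checker (mine, not Hofstadter's)
makes the mechanical side of that isomorphism plain:

    import re

    def is_pq_theorem(s):
        """xpyqz (x, y, z runs of hyphens) is a theorem of the pq-system
        exactly when the hyphens in x plus those in y equal those in z."""
        m = re.fullmatch(r"(-+)p(-+)q(-+)", s)
        return bool(m) and len(m.group(1)) + len(m.group(2)) == len(m.group(3))

    print(is_pq_theorem("--p---q-----"))   # True:  2 + 3 = 5
    print(is_pq_theorem("--p---q----"))    # False: 2 + 3 != 4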
ERiC