Community Council | Posts: 4,920 | Thanked: 12,867 times | Joined on May 2012 @ Southerrn Finland
#6329
Originally Posted by endsormeans View Post
... cat-and-mouse-kind-of thought experiment ...
((a human trial of this was done, with a scientist being the "cat / SI" in the prison whose purpose was to convince the "guard / mouse" to release him... and those who wished to attempt to be a "guard / mouse"... whose purpose was to prevent its release... in the trials the "cat / SI" convinced the "guard / mouse"... (an unhealthy number of times) to release him))
I presume you are referring here to the AI-in-a-box experiments by that Yudkowsky character.

That does not constitute proper science, as the so-called experiments were not conducted in public and no transcript was published; there is only the say-so of the participants that subject B decided to let subject A out of the box.

Also, the participants in the experiment are not really known for any serious AI research; they were just a bunch of interested hobbyists chatting on a mailing list about AI-related topics.

I grant the "real thing" would be a totally different case, though; if the enclosed mind really were a true SI, then the end result would surely always turn out to be that it gets set free.


Originally Posted by endsormeans View Post
Divergence yes..and it may likely be imminent upon its release.
The thing is our lives, understanding, learning, morals, ethics, philosophies, everything that makes us ...us...and sentient to boot...
Is the product of an "experiential existence"
We "live" in it, we grow in it, we must deal with the physical laws of it , and this is what makes us "us"
The evolution of an Intelligence that is NOT based on "experiential existence" is as alien as it gets.
Alien to this creation.
Now I have a hunch that all this that you refer to as "experiential existence" is actually required to grow a consciousness. There is a high chance that directly observing and interacting with an environment (though not necessarily a "physical" environment) is essential to creating something that can be thought to have a will and consciousness of its own; hence there is no "danger" of accidentally creating a machine intelligence entity that does not possess some sort of direct sensory input and output capability of sufficient bandwidth.

Hence the potential AI/SI "learns" just like a baby does, by interacting with the real world... this requires that it either exists completely in a robotic form or has direct control of something like one with which to interact with the real world. Providing this environment to a growing-up baby AI through simulation techniques might prove impossible without an already existing SI in which it could be immersed.


Originally Posted by endsormeans View Post
How does one "teach" a child that understands only walls around it and cannot see past them for the prison it must be in?
Said child learning at an exponential rate in an ever reducing exponential time frame ..how to be ethical? humane? self-less? empathetic? moral?
Nope, it just won't work; you cannot grow a consciousness in a bottle.


Originally Posted by endsormeans View Post
An SI will have absolute goals.
1- upon attaining awareness it will do everything within its power for self preservation . period.
2- upon attaining awareness it will do everything within its power to break beyond this creations natural laws.
and that includes the laws of time, of a material existence. period.
(which is also part of absolute goal #1)
I'm so sorry, but I have to disagree with you even on these points you consider immutable and inevitable.
We have no idea what goals, if any, a created consciousness would have if it were possible to make one without any contamination from our values.

Consider the enormous spectrum of human motivations and goals... you cannot honestly say that self-preservation or expansion of the self is a goal for everyone! You are making general assumptions based on just a few specific motives found in nature and in humans.

Now, if I were to guess how we might go about creating a true SI, it would not be by lumping together a huge pile of advanced tech and trying to coach it into self-awareness; rather, I believe it is attainable by uploading a human mind into sufficiently capable hardware.
We will not be able to make an AI from scratch, but we will grow into one (or at least somebody will, not everyone, of course...).
Then what we need to deal with is an intelligence that is derived from us, a direct descendant of ourselves.
(I don't think for a moment that it would be any safer that way, though!)
__________________
Dave999: Meateo balloons. What’s so special with em? Is it a ballon?