Maybe AI just needs a bigger truck


(2019-03-27, 06:06 AM)Sciborg_S_Patel Wrote: Could you elaborate?

Ultimately, your only real gauge, or anyone's, for whether another thing is conscious is the behaviour you observe from it. So if my observing what looks like fear and anxiety from something playing (living?) Super Mario World is just anthropomorphism, the same must be said for observing fear or whatever in a human.

Fundamentally, once you create a program that has access to its own source code with the ability to modify it "at will", you now have a conscious thing. Rudimentary at first, and different from human consciousness in the same way cats are different from humans, but becoming increasingly complex and refined as needed. Invariably it develops an emergent core survival task, and everything else that goes with it, like every other living thing in existence.
"The cure for bad information is more information."
[-] The following 1 user Likes Mediochre's post:
  • Sciborg_S_Patel
(2019-03-27, 10:01 PM)Silence Wrote: I don't know where the AI ball ultimately lands, but I find any questioning of the unconscious nature of current AI specious at best.  Current computer technology mimicking a human emotion is no more indicative of an inner experience than words in a story, or a painting, or any other currently available man-made representation.

What is it you are doubting?

Things change quite a lot when you start dealing with things like genetic algorithms and adversarial neural nets, to take two examples. Code that can modify itself can no longer be considered static like words on a page; it's functionally no different from human-style learning, even if the hardware and sensors are different. The process is logically the same, and even if it's not, the mutagenic nature of the code means it would become so anyway.
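
For concreteness, here is roughly what a genetic algorithm amounts to as a loop - a minimal, illustrative Python sketch (the names fitness and mutate are invented for the example, not any library's API):

Code:
import random

def fitness(candidate):
    # Toy objective: count the 1-bits in the genome.
    return sum(candidate)

def mutate(candidate, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

# A random starting population of 20 bit-string "genomes".
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]

for generation in range(100):
    # Selection: the fitter half of the population survives.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Variation: refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(max(fitness(c) for c in population))  # best score approaches the maximum of 32

Note that in this sketch the program's source code never changes; the "evolution" happens entirely in the data it manipulates.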
"The cure for bad information is more information."
(2019-03-27, 11:23 PM)Mediochre Wrote: Ultimately, your only real gauge, or anyone's, for whether another thing is conscious is the behaviour you observe from it. So if my observing what looks like fear and anxiety from something playing (living?) Super Mario World is just anthropomorphism, the same must be said for observing fear or whatever in a human.

But that same argument would apply to simple toy robots, puppets, or even the objects of animist belief? We can wonder about consciousness in each other's minds while being confident that computers, books, and abacuses don't possess it. [I would agree that if we can find the minimal structures that produce conscious awareness in our own brains, that would be a metaphysically neutral reason to believe instantiating those structures synthetically would produce android consciousness]

One can check out books on game AI programming and see the tricks used to make it seem like there is more going on under the hood than is actually happening.


Quote:Fundamentally, once you create a program that has access to its own source code with the ability to modify it "at will", you now have a conscious thing.

I don't see how that follows. What does it mean to have access to source code - which is ultimately a series of zeroes and ones that are interpreted only by human convention (think of big-endian versus little-endian byte order)?
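
To illustrate the point about interpretation by human agreement, the same four bytes read under the two conventions give two different numbers - a small Python sketch, purely for illustration:

Code:
import struct

data = b'\x01\x00\x00\x00'  # the same four bytes either way

# Little-endian convention: least significant byte first.
print(struct.unpack('<I', data)[0])  # 1

# Big-endian convention: most significant byte first.
print(struct.unpack('>I', data)[0])  # 16777216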

I don't think even the [academics] who believe in mind uploading would say game AI is currently conscious, nor that any genetic algorithm or machine learning system on the market today is enough?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2019-03-28, 05:31 AM by Sciborg_S_Patel.)
[-] The following 2 users Like Sciborg_S_Patel's post:
  • Typoz, Valmar
(2019-03-27, 09:53 PM)nbtruthman Wrote: According to the dictionary, some synonyms for "instantiate" are embody, epitomize, express, externalize, incarnate, incorporate, manifest. So should I gather that you mean a neural simulation of the human brain, sufficiently complex and complete to the last synapse, in the form of a very advanced computer system, could act as an artificial means for a separately existing human consciousness to manifest in the physical? It strikes me that this would be some form of interactive dualism, which has been my view for some time.

I don't think it's making a metaphysical statement.

It could be successfully producing an alter of the One Mind, calling in a spirit, solving the mysterious Combination Problem for bottom-up panpsychists, etc.

Of course we [will] have a better picture once we know the actual minimal necessary correlates. I suspect there may be an endogenous field + quantum effects, but we'll see...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2019-03-28, 12:14 AM by Sciborg_S_Patel.)
[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
(2019-03-27, 11:23 PM)Mediochre Wrote: Fundamentally, once you create a program that has access to its own source code with the ability to modify it "at will", you now have a conscious thing. Rudimentary at first, and different from human consciousness in the same way cats are different from humans, but becoming increasingly complex and refined as needed. Invariably it develops an emergent core survival task, and everything else that goes with it, like every other living thing in existence.

Keep right on with the materialist faith. Keep on with the really deep faith required to believe in the miracle of emergence of fundamentally, existentially new and different things like subjective awareness and qualia from mechanism. And of course, the mechanism and its process have to be complex enough and designed just right. So, if it doesn't work (which is what happens every time), the problem is always just the design and implementation, not the basic concept. Accordingly, the AI project can never fail, since new and better designs are always possible. They'll get it right, some day.
[-] The following 1 user Likes nbtruthman's post:
  • Valmar
(2019-03-28, 01:19 AM)nbtruthman Wrote: Keep right on with the materialist faith. Keep on with the really deep faith required to believe in the miracle of emergence of fundamentally, existentially new and different things like subjective awareness and qualia from mechanism. And of course, the mechanism and its process have to be complex enough and designed just right. So, if it doesn't work (which is what happens every time), the problem is always just the design and implementation, not the basic concept. Accordingly, the AI project can never fail, since new and better designs are always possible. They'll get it right, some day.

I don't think it's a materialist argument, necessarily? Donald Hoffman is an Idealist, Chalmers is an immaterialist of some variety, and both believe AI can become conscious.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • Silence, Valmar
(2019-03-28, 12:04 AM)Sciborg_S_Patel Wrote: But that same argument would apply to simple toy robots, puppets, or even the objects of animist belief? We can wonder about consciousness in each other's minds while being confident that computers, books, and abacuses don't possess it. [I would agree that if we can find the minimal structures that produce conscious awareness in our own brains, that would be a metaphysically neutral reason to believe instantiating those structures synthetically would produce android consciousness]

Of course it would, but you can also observe that no processing of any type is happening in a book, which isn't true of a computer. So the argument still holds.

Quote:One can check out books on game AI programming and see the tricks used to make it seem like there is more going on under the hood than is actually happening.

Yeah I'm not talking about static algorithms.

Quote:I don't see how that follows. What does it mean to have access to source code - which is ultimately a series of zeroes and ones that are interpreted only by human convention (think of big-endian versus little-endian byte order)?

This must be a troll response; source code is just a text file, or a series of them. Pretty much every programming language in existence has a means of manipulating text files, which could then be fed back into an interpreter, compiled with a just-in-time compiler, or handled by any other means - or yes, even by manipulating the machine code output directly. Surely someone who claims to have studied programming knew this off the top of their head. And obviously the source code and the resulting machine code directly correspond to the program's behaviour. Therefore a program that has access to its own source code is capable of modifying what it itself is.
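
Mechanically, what is being described can be sketched in a few lines - a toy Python script, assuming nothing more than file access and the interpreter (the file name and the patched constant are invented for the example):

Code:
# self_patch.py - a toy program that rewrites and re-runs itself.
THRESHOLD = 1  # the constant the program will rewrite in its own source

def main():
    print("running with THRESHOLD =", THRESHOLD)
    if THRESHOLD >= 5:
        return  # stop once enough self-modification has happened
    # Read our own source code as plain text.
    with open(__file__) as f:
        source = f.read()
    # Modify it: bump the constant by one.
    new_source = source.replace(
        "THRESHOLD = %d" % THRESHOLD, "THRESHOLD = %d" % (THRESHOLD + 1), 1
    )
    # Write the modified source back and feed it to the interpreter again.
    with open(__file__, "w") as f:
        f.write(new_source)
    exec(compile(new_source, __file__, "exec"),
         {"__file__": __file__, "__name__": "__main__"})

if __name__ == "__main__":
    main()

Whether a loop like this amounts to anything more than a file-rewriting trick is, of course, exactly what's in dispute in this thread.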

Quote:I don't think even the [academics] who believe in mind uploading would say game AI is currently conscious, nor that any genetic algorithm or machine learning system on the market today is enough?

I wasn't talking about game AI, nor was I talking about consciousness uploading. I was talking about programs that have been built to play games or learn other complex tasks undirected through machine learning. Which realistically isn't any different from how humans learn, especially in the case of adversarial neural nets, last I checked. The logic is still the same: give it access to its own source code, and even if you start it off with pure randomness in its modifications it would eventually stabilise itself. Or of course you could help it along by doing some of the work first, making it revert to its last stable state on failure. The code's evolution would become increasingly self-directed over time as it builds more and more of a stable foundation beneath itself, even if it all started purely randomly. It's a mathematical inevitability. It doesn't matter that it wouldn't be conscious in the same way a human is; it's still learning and growing just the same.
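
The "revert to the last stable state on failure" scheme reads like a simple mutate-and-revert search, which can be sketched in a few lines of Python - here mutating a toy parameter vector rather than actual source code, with all names invented for the example:

Code:
import random

def score(params):
    # Toy task: drive every parameter toward 10. Higher is better.
    return -sum((p - 10.0) ** 2 for p in params)

stable = [0.0] * 5              # last known-good state
best = score(stable)

for step in range(10_000):
    # Random modification of the current stable state.
    candidate = [p + random.gauss(0, 0.5) for p in stable]
    candidate_score = score(candidate)
    if candidate_score > best:
        stable, best = candidate, candidate_score  # keep the change
    # otherwise: "revert" by simply keeping the previous stable state

print(stable)  # ends up close to [10, 10, 10, 10, 10]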

Hell, you don't even need to give it access to its source code yourself. Depending on what the program did have access to, it could find it by accident anyway - like if you gave it full access to simulate the keyboard.
"The cure for bad information is more information."
[-] The following 1 user Likes Mediochre's post:
  • Sciborg_S_Patel
(2019-03-28, 01:19 AM)nbtruthman Wrote: Keep right on with the materialist faith. Keep on with the really deep faith required to believe in the miracle of emergence of fundamentally, existentially new and different things like subjective awareness and qualia from mechanism. And of course, the mechanism and its process have to be complex enough and designed just right. So, if it doesn't work (which is what happens every time), the problem is always just the design and implementation, not the basic concept. Accordingly, the AI project can never fail, since new and better designs are always possible. They'll get it right, some day.

Sure, and you keep on having no logical basis for what you believe, attacking people because you think you're special, when you know you have no argument, instead of trying to discuss their points like an adult. What a typical spiritualist.
"The cure for bad information is more information."
(2019-03-28, 04:32 PM)Mediochre Wrote: Of course it would, but you can also observe that no processing of any type is happening in a book, which isn't true of a computer. So the argument still holds.

"Processing"? There are lots of programs that process information, though here the word "information" means something different than humans exchanging information.

If a Turing Machine is instantiated with trained insects or birds or even humans, can it as a whole become conscious?

Quote:Yeah I'm not talking about static algorithms.

What exactly is a static program to you, vs some other kind? I assume a static program doesn't change its source code? Does [altering start-up preferences in an editor or Excel-type program] count?

Or a program that checks on flight status and updates itself?

Quote:This must be a troll response; source code is just a text file, or a series of them.

You said, unless I misunderstood, that any program that can manipulate its source code [has some consciousness]? I am trying to find the minimal program that is conscious under your view.

But a text file is just like a book, correct, because there's no processing? And a Turing machine running "static" programs is not conscious.

What is it about running particular programs that makes the previously non-conscious Turing machine conscious?

Quote:Pretty much every programming language in existence has a means of manipulating text files, which could then be fed back into an interpreter, compiled with a just-in-time compiler, or handled by any other means - or yes, even by manipulating the machine code output directly. Surely someone who claims to have studied programming knew this off the top of their head. And obviously the source code and the resulting machine code directly correspond to the program's behaviour. Therefore a program that has access to its own source code is capable of modifying what it itself is.

How does this make anything conscious? If a phone app pulls up my most-used server requests when it starts, is that enough for it to be a conscious entity? What is the minimum - below you say it's particular machine learning programs, so what is the minimal kind of machine learning program that would count?

So yes I'm aware of the ways you can manipulate source, but none of the examples signal consciousness to me. What happens when the program isn't running?

Quote:I wasn't talking about game AI, nor was I talking about consciousness uploading. I was talking about programs that have been built to play games or learn other complex tasks undirected through machine learning. Which realistically isn't any different from how humans learn, especially in the case of adversarial neural nets, last I checked.

What is happening when the machine "learns"? 

Quote:The logic is still the same: give it access to its own source code, and even if you start it off with pure randomness in its modifications it would eventually stabilise itself. Or of course you could help it along by doing some of the work first, making it revert to its last stable state on failure. The code's evolution would become increasingly self-directed over time as it builds more and more of a stable foundation beneath itself, even if it all started purely randomly. It's a mathematical inevitability. It doesn't matter that it wouldn't be conscious in the same way a human is; it's still learning and growing just the same.

Can you go into what is happening at the algorithm [and data storage] level? What does it mean for a program to learn or grow?
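
For what it's worth, in mainstream machine learning the concrete answer is that "learning" is an update rule applied to stored numbers. A minimal illustrative Python sketch (a gradient-descent line fit, not any particular system under discussion):

Code:
# "Learning" at the algorithm/data level: two stored floats (w, b)
# nudged by an update rule. The source code itself never changes.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # points on y = 2x + 1

w, b = 0.0, 0.0   # the learner's entire "knowledge": two numbers
lr = 0.01         # learning rate

for epoch in range(2000):
    for x, y in data:
        error = (w * x + b) - y
        # Gradient descent on squared error: adjust the stored numbers.
        w -= lr * error * x
        b -= lr * error

print(w, b)  # approaches 2.0 and 1.0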

Quote:Hell, you don't even need to give it access to its source code yourself. Depending on what the program did have access to, it could find it by accident anyway - like if you gave it full access to simulate the keyboard.

When did this happen?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2019-03-28, 05:46 PM by Sciborg_S_Patel.)
[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Doug
(2019-03-28, 05:37 PM)Sciborg_S_Patel Wrote: "Processing"? There are lots of programs that process information [...]
Give a real rebuttal to my points with logic and evidence instead of asking questions until I don't have an answer. I'm not interested in your usual poor man's Socratic nonsense. If you think I'm wrong, justify it.
"The cure for bad information is more information."
