‘Artificial Intelligence is a misnomer’ - Sir Roger Penrose

Quote: Sir Roger Penrose is a mathematical physicist who has changed the way we see the universe. He won the Nobel Prize in Physics in 2020 for his work on black holes.

He tells Krishnan about how he wasn't top of the class in maths at school, talks about his relationships with Stephen Hawking and M. C. Escher, and tells Krishnan why he thinks Artificial Intelligence is a misnomer.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma... Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 3 users Like Sciborg_S_Patel's post:
  • tim, Typoz, Ninshub
   
(2022-12-30, 10:40 PM)Sciborg_S_Patel Wrote:

I once thought like that... but recent advances have been so quick in biological computing, bio[you name it], quantum mechanics and quantum computing (quantum information), that I'm flabbergasted... it's going to tip... once such an organism incorporates QM isolation, and can evolve, it will arrive... and it won't be a mammal-like intelligence... it will be cold and hard... and it will give them what they want... and they will listen to it... and they won't know who or what they are listening to...
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
[-] The following 1 user Likes Max_B's post:
  • Sciborg_S_Patel
(2023-01-02, 05:19 PM)Max_B Wrote: I once thought like that... but recent advances have been so quick in biological computing, bio[you name it], quantum mechanics and quantum computing (quantum information), that I'm flabbergasted... it's going to tip... once such an organism incorporates QM isolation, and can evolve, it will arrive... and it won't be a mammal-like intelligence... it will be cold and hard... and it will give them what they want... and they will listen to it... and they won't know who or what they are listening to...
Not sure I follow this - are you suggesting that AI will grow out of quantum computing and, assuming a quantum basis for organic consciousness, be able to access our minds?
(2023-01-03, 05:37 PM)Will Wrote: Not sure I follow this - are you suggesting that AI will grow out of quantum computing and, assuming a quantum basis for organic consciousness, be able to access our minds?

Could do, but I was thinking more along the lines that an evolving biological classical/quantum "computer", using human or non-human patterns, could produce biased solutions to problems without our awareness.

How would we know? Using such a quantum device we might design this enzyme, that protein, this material, etc., all innocuous and seemingly beneficial by themselves, but biased in an evolutionary way towards some end we cannot perceive.
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
[-] The following 1 user Likes Max_B's post:
  • Sciborg_S_Patel
(2023-01-03, 06:30 PM)Max_B Wrote: Could do, but I was thinking more along the lines that an evolving biological classical/quantum "computer", using human or non-human patterns, could produce biased solutions to problems without our awareness.

How would we know? Using such a quantum device we might design this enzyme, that protein, this material, etc., all innocuous and seemingly beneficial by themselves, but biased in an evolutionary way towards some end we cannot perceive.
Ah. Don't we have that problem to some degree already?

I'm inclined to be skeptical of AI, but assuming it emerges as you say, would it necessarily be a given that it would be a cold and dangerous intelligence?
(2023-01-04, 12:09 AM)Will Wrote: Ah. Don't we have that problem to some degree already?

I'm inclined to be skeptical of AI, but assuming it emerges as you say, would it necessarily be a given that it would be a cold and dangerous intelligence?

Well, I have particular views on how our experience arises.
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
(2023-01-02, 05:19 PM)Max_B Wrote: I once thought like that... but recent advances have been so quick in biological computing, bio[you name it], quantum mechanics and quantum computing (quantum information), that I'm flabbergasted... it's going to tip... once such an organism incorporates QM isolation, and can evolve, it will arrive... and it won't be a mammal-like intelligence... it will be cold and hard... and it will give them what they want... and they will listen to it... and they won't know who or what they are listening to...

I think this speculation based on the latest research runs into a fundamental problem. For any possible advanced computer system (including quantum types), acquiring consciousness appears to be simply impossible, for logical reasons and from first principles: things are not thoughts, and computer AI can manifest nothing more than the algorithms implemented in such systems by their designers and programmers.

From the article at https://uncommondescent.com/intelligent-...s-why-not/ :

(an excerpt from Chapter 1 of the new book Non-Computable You by Robert J. Marks II):

Quote:"Artificial intelligence has done many remarkable things. AI has largely replaced travel agents, tollbooth attendants, and mapmakers. But will AI ever replace attorneys, physicians, military strategists, and design engineers, among others?
The answer is no. And the reason is that as impressive as artificial intelligence is — and make no mistake, it is fantastically impressive — it doesn’t hold a candle to human intelligence. It doesn’t hold a candle to you.
And it never will. How do we know? The answer can be stated in a single four-syllable word that needs unpacking before we can contemplate the non-computable you. That word is algorithm. If not expressible as an algorithm, a task is not computable. ...
Non-Computable You ...
If biting into a lemon cannot be explained to a man without all his functioning senses, it certainly can't be duplicated in an experiential way by AI using computer software. ...
Qualia are a simple example of the many human attributes that escape algorithmic description. If you can't formulate an algorithm explaining your lemon-biting experience, you can't write software to duplicate the experience in the computer. ...
Qualia are not computable. They're non-algorithmic."

AI is performed by computers, and computers using any possible technology are entirely algorithmic. That is to say, they are constrained to obey a set of operations written by a computer programmer. Regardless of the nature of the programmer, the programs themselves consist entirely of logic and mathematics implemented in silicon, and of the physics that underlies the operation of the silicon chips.

The core of the processor, regardless of how it is implemented in technology, is the execution of algorithms consisting at base of an extremely long series of basic mathematical operations on digital bits, using fundamental processes of addition, subtraction, multiplication, division, and so on.
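To make concrete what "algorithmic" means here, a minimal illustrative sketch (in Python, with a function name of my own choosing, not taken from the post or the book excerpt): even ordinary integer addition can be written out as a fixed sequence of primitive bit-level steps, which is all a processor is ultimately executing.

```python
# Illustrative sketch only: integer addition reduced to primitive bit operations
# (XOR, AND, shift), the kind of steps a processor performs under the hood.
def add_via_bits(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise operations."""
    while b != 0:
        carry = a & b        # positions where both bits are 1 will carry
        a = a ^ b            # bitwise sum, ignoring carries
        b = carry << 1       # move the carries one position to the left
    return a

print(add_via_bits(19, 23))  # prints 42
```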

Mathematics is algorithmically constructed, based on logic and foundational axioms, and physics is built algorithmically on foundational laws. AI is therefore, in both principle and practice, entirely algorithmic, and this remains the case regardless of the sophistication of its processing implementation, even if it is quantum-based with almost inconceivably great processing capacity. It operates according to the logic of mathematics and the laws of physics.

But human consciousness and the soul consist of something unknowable and entirely nonphysical: sentient, subjective awareness and experience.

The qualia of subjective awareness and consciousness are fundamentally not algorithmic, not computable. Because of this, and because all computers, including any possible advanced quantum computer, can do absolutely nothing but compute using algorithms, an AI cannot possibly ever be conscious. Certainly very advanced ones, especially future quantum computers, will exhibit extraordinary processing power, perhaps enough to convincingly mimic conscious human agents for a time in their communications, but they cannot possibly be actually experiencing consciousness: experiencing subjective awareness, perception, agency, actually knowing things.

Along with all of this, Laird has pointed out that one of the most essential attributes of human consciousness - human free will - would also be prevented from manifesting in any kind of computer system, since such a system is entirely programmed, algorithmic, and deterministic. Consciousness is fundamentally just not computable or algorithmic, quantum computers or no.
(This post was last modified: 2023-01-05, 12:57 AM by nbtruthman. Edited 1 time in total.)
(2023-01-05, 12:45 AM)nbtruthman Wrote: I think this speculation based on the latest research runs into a fundamental problem. For any possible advanced computer system (including quantum types), acquiring consciousness appears to be simply impossible, for logical reasons and from first principles: things are not thoughts, and computer AI can manifest nothing more than the algorithms implemented in such systems by their designers and programmers.

From the article at https://uncommondescent.com/intelligent-...s-why-not/ :

(an excerpt from Chapter 1 of the new book Non-Computable You by Robert J. Marks II):


AI is performed by computers, and computers using any possible technology are entirely algorithmic. That is to say, they are constrained to obey a set of operations written by a computer programmer. Regardless of the nature of the programmer, the programs themselves consist entirely of logic and mathematics implemented in silicon, and of the physics that underlies the operation of the silicon chips.

The core of the processor, regardless of how it is implemented in technology, is the execution of algorithms consisting at base of an extremely long series of basic mathematical operations on digital bits, using fundamental processes of addition, subtraction, multiplication, division, and so on.

Mathematics is algorithmically constructed, based on logic and foundational axioms, and physics is built algorithmically on foundational laws. AI is therefore, in both principle and practice, entirely algorithmic, and this remains the case regardless of the sophistication of its processing implementation, even if it is quantum-based with almost inconceivably great processing capacity. It operates according to the logic of mathematics and the laws of physics.

But human consciousness and the soul consist of something unknowable and entirely nonphysical: sentient, subjective awareness and experience.

The qualia of subjective awareness and consciousness are fundamentally not algorithmic, not computable. Because of this, and because all computers, including any possible advanced quantum computer, can do absolutely nothing but compute using algorithms, an AI cannot possibly ever be conscious. Certainly very advanced ones, especially future quantum computers, will exhibit extraordinary processing power, perhaps enough to convincingly mimic conscious human agents for a time in their communications, but they cannot possibly be actually experiencing consciousness: experiencing subjective awareness, perception, agency, actually knowing things.

Along with all of this, Laird has pointed out that one of the most essential attributes of human consciousness - human free will - would also be prevented from manifesting in any kind of computer system, since such a system is entirely programmed, algorithmic, and deterministic. Consciousness is fundamentally just not computable or algorithmic, quantum computers or no.

What I'm suggesting ('X') won't have our experience, not anywhere near, as X won't use all our classical/non-classical patterns, so it won't have access to add up, non-classically, what we call the past/present/future of our classical patterns.

The patterns of X that match with our own patterns will be shared (here). X will have a *very* different experience because of those different patterns, and also because of what sensory patterns/information X has access to here. X will need sufficient isolation to be able to add up X's patterns non-classically, and for the classical result to be tied to a classical system that can evolve X's classical pattern, using a substrate with plasticity, via a feedback loop (learning).

Unfortunately, my experience of what I've written above can't be shared with you, as you'll only be able to understand it through your own experience.
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
[-] The following 1 user Likes Max_B's post:
  • Sciborg_S_Patel
(2023-01-05, 09:01 AM)Max_B Wrote: What I'm suggesting ('X') won't have our experience, not anywhere near, as X won't use all our classical/non-classical patterns, so it won't have access to add up, non-classically, what we call the past/present/future of our classical patterns.

The patterns of X that match with our own patterns will be shared (here). X will have a *very* different experience because of those different patterns, and also because of what sensory patterns/information X has access to here. X will need sufficient isolation to be able to add up X's patterns non-classically, and for the classical result to be tied to a classical system that can evolve X's classical pattern, using a substrate with plasticity, via a feedback loop (learning).

Unfortunately, my experience of what I've written above can't be shared with you, as you'll only be able to understand it through your own experience.

Just one comment at this time: you assume your conclusion, presupposing a priori from the very start that (X) will have consciousness, that is, that it will experience. And the "learning" referred to on the part of (X) also presupposes conscious knowing. This completely ignores my argument rather than engaging it. Needless to say, that is begging the question and is an invalid debate tactic.
(This post was last modified: 2023-01-05, 05:55 PM by nbtruthman. Edited 3 times in total.)
