‘Artificial Intelligence is a misnomer’ - Sir Roger Penrose


(2023-01-04, 02:49 AM)Max_B Wrote: Well I have particular views on how our experience arises.
Fair enough. I would assume that, if this sort of AI were possible, it would develop in various ways and have a range of "personalities" like with any conscious beings. There are plenty of cold and dangerous people, but plenty of other sorts too.
(2023-01-05, 05:39 PM)nbtruthman Wrote: Just one comment at this time: you assume your position and presuppose a priori, at the very start, that (X) will have consciousness, that is, will experience. And the "learning" referred to on the part of (X) also presupposes conscious knowing. This completely ignores my argument rather than engaging it. Needless to say, that is begging the question and an invalid debate tactic.

Apologies if you expected me to debate the arguments in your post. I had intended my last sentence to make it explicit that I would not do so, even if the style of the earlier part of my response failed to communicate that... but it seems I failed even here.
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
(2023-01-05, 08:42 PM)Will Wrote: Fair enough. I would assume that, if this sort of AI were possible, it would develop in various ways and have a range of "personalities" like with any conscious beings. There are plenty of cold and dangerous people, but plenty of other sorts too.

If you're thinking about how I envisage it might appear to us, I honestly don't know. I'm not sure I said dangerous; I didn't intend that it should appear like that, but rather the opposite: the group who controlled it wouldn't even be aware of its danger. I meant cold and hard, as in the logic of its responses, which would be - for example - without mammalian altruism: responses that most people couldn't even imagine, because of thousands of years of human experience.
(2023-01-05, 12:45 AM)nbtruthman Wrote: AI is performed by computers, and computers using any possible technology are entirely algorithmic. That is to say, they are constrained to obey a set of operations written by a computer programmer. Regardless of the nature of the programmer, the programs themselves consist entirely of logic and mathematics implemented in silicon, and the physics that underlies the operations of the silicon chips.

But we seem to already run into a problem that machine "learning" gives results that we cannot easily predict.

It doesn't even need to be conscious to end up taking over some decision making that leads to a bad end for us.

Additionally, I think we have to separate Turing Machines and variations thereof from something that might be partially biological or a recreation of some biological structure - for example, for the sake of argument, microtubule arrangements that allow for quantum biology.
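
On the point about "learning" giving results we cannot easily predict, here is a deliberately tiny sketch of the idea (purely illustrative; the data, learning rate, and starting weights are all invented). The programmer writes only the update rule; the behaviour the program ends up with is determined by the training data rather than written down anywhere in the code. Scale that up to billions of weights and it is easy to see why the outcome is hard to foresee.

```python
# A minimal sketch, purely for illustration: a one-neuron "learner" for the
# logical OR of two inputs. The programmer writes only the update rule below;
# the rule the model ends up encoding is never stated anywhere in the program.

# Toy training data: (inputs, desired output).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights start empty; their final values come from the data
b = 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights only when a prediction is wrong.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print(w, b)                            # values no one chose by hand
print([predict(x) for x, _ in data])   # behaviour that emerged from training
```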
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2023-01-05, 09:19 PM)Max_B Wrote: If you're thinking about how I envisage it might appear to us, I honestly don't know. I'm not sure I said dangerous; I didn't intend that it should appear like that, but rather the opposite: the group who controlled it wouldn't even be aware of its danger. I meant cold and hard, as in the logic of its responses, which would be - for example - without mammalian altruism: responses that most people couldn't even imagine, because of thousands of years of human experience.

There is a mix of several ideas here. One is logic. Often there is a logic behind some of our actions. But there is also free will. That to me means the ability to do something not logical but perhaps intuitive - whatever that may mean. The mention of mammalian altruism - which I understand is just an example, though it might also reflect on our own nature - also raised a thought in my mind about the nature of the universe: is it cold and hard, or is it perhaps benevolent and generous? I'm not suggesting answers, though I do think these things are worth some contemplation.
(2023-01-05, 11:29 PM)Sciborg_S_Patel Wrote: But we seem to already run into a problem that machine "learning" gives results that we cannot easily predict.

It doesn't even need to be conscious to end up taking over some decision making that leads to a bad end for us.

Additionally, I think we have to separate Turing Machines and variations thereof from something that might be partially biological or a recreation of some biological structure - for example, for the sake of argument, microtubule arrangements that allow for quantum biology.

I agree. But I think that even the most recent machine learning systems have the same fundamental limitation I mentioned: at base in their processors, no matter how advanced, they are algorithmic - their immensely complicated and difficult or impossible to trace routes to solutions are ultimately the result of the manipulation of digital bits executing algorithms. This seems to rule out consciousness, as Marks determined.

Even something utilizing quantum mechanical processing (like some emulation of the Hameroff/Penrose microtubule functionality you mention) seems to me to ultimately end up executing algorithms, doing computations; no matter how powerful this may be, it is still "things in action" and fundamentally of a different order than consciousness, subjective awareness. This is the good old Hard Problem again.
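
To make the "algorithmic at base" point concrete, here is a deliberately tiny sketch (the weights and layer sizes are made-up numbers, not taken from any real system): once the learned values are fixed, evaluating such a network is nothing but deterministic arithmetic, however hard the aggregate route to an answer may be to follow.

```python
# A minimal sketch with invented numbers: a tiny two-layer network evaluated
# by hand. However opaque a trained system is in aggregate, each step of its
# execution is plain, deterministic arithmetic on stored values.

hidden_weights = [[0.5, -0.3], [0.8, 0.1]]   # hypothetical "learned" values
output_weights = [1.2, -0.7]

def relu(v):
    # Fixed nonlinearity: pass positive values through, clip negatives to zero.
    return v if v > 0 else 0.0

def forward(x):
    # Hidden layer: weighted sums followed by the nonlinearity.
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in hidden_weights]
    # Output layer: another weighted sum. Nothing here but arithmetic.
    return sum(w * h for w, h in zip(output_weights, hidden))

print(forward([1.0, 2.0]))   # same inputs, same stored numbers, same answer every time
```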
(2023-01-05, 09:19 PM)Max_B Wrote: If you're thinking about how I envisage it might appear to us, I honestly don't know. I'm not sure I said dangerous; I didn't intend that it should appear like that, but rather the opposite: the group who controlled it wouldn't even be aware of its danger. I meant cold and hard, as in the logic of its responses, which would be - for example - without mammalian altruism: responses that most people couldn't even imagine, because of thousands of years of human experience.
Aha. That makes more sense, thank you. Though again, this possibility seems not unlike problems we already have with social media tech, just more intense.
(2023-01-06, 12:39 PM)nbtruthman Wrote: I agree. But I think that even the most recent machine learning systems have the same fundamental limitation I mentioned: at base in their processors, no matter how advanced, they are algorithmic - their immensely complicated and difficult or impossible to trace routes to solutions are ultimately the result of the manipulation of digital bits executing algorithms. This seems to rule out consciousness, as Marks determined.

Even something utilizing quantum mechanical processing (like some emulation of the Hameroff/Penrose microtubule functionality you mention) seems to me to ultimately end up executing algorithms, doing computations; no matter how powerful this may be, it is still "things in action" and fundamentally of a different order than consciousness, subjective awareness. This is the good old Hard Problem again.

Hmmm, the Hard Problem is the question of how to relate the qualitative and the quantitative, not necessarily the claim that the two are forever apart.

Seeing as Orch OR is non-computational (in terms of Turing Machine execution), a synthetic life built with some aspect of that theory of mind would not necessarily be algorithmic.

I just think that even when we bring souls and irreducible consciousness into the equation, it is still possible for humans to create synthetic life. It may only ape thinking and possess more of an animal soul, but as Radin notes, maybe this kind of life is actually better at Psi than humans, because the higher-level consciousness is not constrained by evolution in the same way.
(2023-01-06, 05:03 PM)Sciborg_S_Patel Wrote: Hmmm, the Hard Problem is the question of how to relate the qualitative and the quantitative, not necessarily the claim that the two are forever apart.

Seeing as Orch OR is non-computational (in terms of Turing Machine execution), a synthetic life built with some aspect of that theory of mind would not necessarily be algorithmic.

I just think that even when we bring souls and irreducible consciousness into the equation, it is still possible for humans to create synthetic life. It may only ape thinking and possess more of an animal soul, but as Radin notes, maybe this kind of life is actually better at Psi than humans, because the higher-level consciousness is not constrained by evolution in the same way.

I guess so. But even if not by algorithmic means this artificial life form would still be doing nothing but solving logical problems quantum mechanically, no? And such activity could be classed as "computations" of some sort and in a different category of existence than consciousness, no?
(2023-01-06, 06:16 PM)nbtruthman Wrote: I guess so. But even if not by algorithmic means this artificial life form would still be doing nothing but solving logical problems quantum mechanically, no? And such activity could be classed as "computations" of some sort and in a different category of existence than consciousness, no?

Well, to go with the example, if Orch OR is correct, then what generates an individual consciousness from a potential Source is certain structures in organic entities.

So these synthetic entities could be as conscious as we are. OTOH maybe a synthetic recreation of the brain's necessary structures doesn't produce anything at all, and the android just lies there.

We won't know until we're further down the line scientifically, though as Max notes, by the time we know it might be too late...