Top 10 AI hype stories of 2018


(2019-04-09, 07:49 PM)Chris Wrote: That was my point really. It's difficult to believe it could be conscious just because it looks so simple. But in principle it can be as complicated as you like - as complicated as the human brain, for example.

I am a bit confused as to what you are claiming. Are you saying the Tinkertoy computer could, upon running the correct program, become conscious?

I'm not necessarily disagreeing with that, but it does force one to think carefully about what one is actually claiming the nature of consciousness to be...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2019-04-09, 08:46 PM)Sciborg_S_Patel Wrote: I am a bit confused as to what you are claiming. Are you saying the Tinkertoy computer could, upon running the correct program, become conscious?

I'm not necessarily disagreeing with that, but it does force one to think carefully about what one is actually claiming the nature of consciousness to be...

I'm saying that in principle a powerful enough computer built on that principle should be as capable as an equally powerful computer built on any other principle. Though no doubt there would be severe practical difficulties in building a powerful computer on the Tinkertoy principle.
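
To make that "in principle" point concrete, here is a minimal sketch (Python, purely illustrative, with function names of my own choosing): a one-bit half adder built entirely from NAND gates behaves identically whatever physically realises the NAND function, whether transistors, Tinkertoy rods, or anything else.

```python
# Illustrative sketch only: two stand-in "substrates" for a NAND gate.
# Different physics could sit behind each function; the logic is identical.

def nand_electronic(a, b):
    """Stand-in for a transistor-based NAND gate."""
    return not (a and b)

def nand_mechanical(a, b):
    """Stand-in for a mechanical (e.g. Tinkertoy-style) NAND gate."""
    return not (a and b)

def half_adder(a, b, nand):
    """One-bit half adder built from NAND gates alone."""
    n1 = nand(a, b)
    total = nand(nand(a, n1), nand(b, n1))   # XOR of a and b -> sum bit
    carry = nand(n1, n1)                     # AND of a and b -> carry bit
    return total, carry

# The computation comes out the same regardless of which "substrate" runs it.
for gate in (nand_electronic, nand_mechanical):
    assert half_adder(True, True, gate) == (False, True)
    assert half_adder(True, False, gate) == (True, False)
    assert half_adder(False, False, gate) == (False, False)
```

None of this says anything about consciousness, of course; it only illustrates why "a powerful enough computer built on that principle" is not a contradiction in terms.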

Personally, I assume computer-based artificial intelligences will be conscious, in much the same way as we are.
[-] The following 1 user Likes Guest's post:
  • Sciborg_S_Patel
(2019-04-09, 09:17 PM)Chris Wrote: Personally, I assume computer-based artificial intelligences will be conscious, in much the same way as we are.

Just so I completely understand, do you mean that a computer which is not conscious will become conscious upon running some specific kind of as-yet-unknown program?


(2019-04-09, 09:19 PM)Sciborg_S_Patel Wrote: Just so I completely understand, do you mean that a computer which is not conscious will become conscious upon running some specific kind of as-yet-unknown program?

I think essentially the answer is yes. Though I'm not sure whether it's appropriate to say the hardware becomes conscious. Probably better to say the computation produces a conscious intelligence. In the same way that many people here would prefer to say the mind is conscious, rather than the brain is conscious.
[-] The following 1 user Likes Guest's post:
  • Sciborg_S_Patel
(2019-04-09, 08:35 PM)Chris Wrote: It just seems a bit strange that there could be computers able to simulate human conversation so well that people can't tell the difference between them and the real thing, but that the humans would be assumed to be conscious, and the computers not.

It also seems strange that there are people able to simulate sawing a woman in half so well that people can't tell the difference between that and the real thing.

There are also 3D TV screens and hi-fi sound, able to simulate all sorts of things. But a simulation of a traffic-flow system is not an actual traffic flow. A simulation of a nuclear reactor is not an actual nuclear reactor.
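
To make the traffic-flow example concrete, here is a toy "simulation" (a minimal Python sketch, essentially the classic rule-184 cellular automaton): cars on a one-lane ring road advance into empty cells. Every step is just arithmetic on a list of integers; no vehicle anywhere moves.

```python
# Toy traffic-flow "simulation" (illustrative only): a one-lane ring road as a
# list of cells, 1 = car, 0 = empty.  Each step a car advances one cell if the
# cell ahead is free.

def step(road):
    n = len(road)
    return [
        1 if (road[i] == 1 and road[(i + 1) % n] == 1)   # blocked car stays put
          or (road[i] == 0 and road[(i - 1) % n] == 1)   # car behind moves in
        else 0
        for i in range(n)
    ]

road = [1, 1, 0, 0, 1, 0, 0, 0]
for _ in range(4):
    print(road)          # the entire "traffic" is this list of numbers
    road = step(road)
```

However detailed the model gets, it only ever rearranges numbers that represent cars.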

None of this seems difficult to comprehend. I'm eternally surprised that academics and thinkers raise these issues and pretend not to notice the nonsense.


Still, to say something more concrete here: there is a need to distinguish between the concept of intelligence and that of consciousness. Discussions of AI occasionally seem to shift, without any explanation, into discussions of consciousness. It is true that both are difficult to define, but so are the sensation of saltiness and the seeing of redness, and we don't equate those just because they are hard to define, do we?
[-] The following 5 users Like Typoz's post:
  • Laird, Silence, nbtruthman, Doug, Sciborg_S_Patel
(2019-04-09, 09:34 PM)Chris Wrote: I think essentially the answer is yes. Though I'm not sure whether it's appropriate to say the hardware becomes conscious. Probably better to say the computation produces a conscious intelligence. In the same way that many people here would prefer to say the mind is conscious, rather than the brain is conscious.

It's interesting to think of the symbols as having their own consciousness. IIRC Empedocles, the ancient Greek who gave us the idea of 4 classical elements [in the West at least], suggested that words could have a mind of their own, or at least some words could...


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Laird
(2019-04-09, 10:53 PM)Typoz Wrote: It also seems strange that there are people able to simulate sawing a woman in half so well that people can't tell the difference between that and the real thing.

There are also 3D TV screens and hi-fi sound, able to simulate all sorts of things. But a simulation of a traffic-flow system is not an actual traffic flow. A simulation of a nuclear reactor is not an actual nuclear reactor.

None of this seems difficult to comprehend. I'm eternally surprised that academics and thinkers raise these issues and pretend not to notice the nonsense.


Still, to say something more concrete here: there is a need to distinguish between the concept of intelligence and that of consciousness. Discussions of AI occasionally seem to shift, without any explanation, into discussions of consciousness. It is true that both are difficult to define, but so are the sensation of saltiness and the seeing of redness, and we don't equate those just because they are hard to define, do we?

I certainly don't expect you to agree, but I don't think those counter-arguments are particularly fair.

I think the simulation of human conversation by an artificial intelligence fundamentally differs from the examples of simulation you give, because (from the viewpoint that intelligence is produced by activity in the brain) it's a simulation produced by essentially similar means - by physical processes in a complicated system of interacting components. I know it's not how AI works, at least at the moment, but if you could manufacture an exact computational analogue of the human brain, and if it produced something that appeared indistinguishable from human intelligence, then I don't think it would be right to dismiss that as a mere simulation because it wasn't a real human brain.
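
For a sense of what "an exact computational analogue" would even mean at the smallest scale, here is a deliberately crude sketch (Python, illustrative constants only, nothing like a real physiological model): a single leaky integrate-and-fire neuron, the sort of unit a whole-brain simulation would have to multiply by tens of billions and wire together.

```python
# Crude sketch of one "computational neuron" (leaky integrate-and-fire).
# All constants are illustrative placeholders, not measured physiology.

def simulate_neuron(input_current, dt=1.0, tau=20.0,
                    v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Return spike times (in ms) produced by a stream of input currents."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(input_current):
        # membrane potential leaks toward rest and is pushed up by the input
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:            # threshold crossed: record a spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Constant drive produces regular spiking; all of it is just floating-point updates.
print(simulate_neuron([20.0] * 200))
```

Whether running billions of such updates would amount to real intelligence, let alone consciousness, is exactly the question at issue; the sketch only shows what the "essentially similar means" would consist of.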

And again, though intelligence isn't the same as consciousness, I don't think it's fair to compare them to redness and saltiness, because on the materialist hypothesis they are much more closely related than that. And the difficulty of confining the discussion to consciousness alone is that there can be no way of testing directly for the presence of consciousness in any mind but the mind of the observer.
[-] The following 1 user Likes Guest's post:
  • Laird
(2019-04-10, 01:10 AM)Sciborg_S_Patel Wrote: It's interesting to think of the symbols as having their own consciousness. IIRC Empedocles, the ancient Greek who gave us the idea of 4 classical elements [in the West at least], suggested that words could have a mind of their own, or at least some words could...

To my mind it doesn't feel right to say symbols have consciousness either. If anything had consciousness, I think it would have to be the act of computation as a whole.
(2019-04-09, 10:53 PM)Typoz Wrote: It also seems strange that there are people able to simulate sawing a woman in half so well that people can't tell the difference between that and the real thing.

There are also 3D TV screens and hi-fi sound, able to simulate all sorts of things. But a simulation of a traffic-flow system is not an actual traffic flow. A simulation of a nuclear reactor is not an actual nuclear reactor.

None of this seems difficult to comprehend. I'm eternally surprised that academics and thinkers raise these issues and pretend not to notice the nonsense.


Still, to say something more concrete here: there is a need to distinguish between the concept of intelligence and that of consciousness. Discussions of AI occasionally seem to shift, without any explanation, into discussions of consciousness. It is true that both are difficult to define, but so are the sensation of saltiness and the seeing of redness, and we don't equate those just because they are hard to define, do we?

I think those analogies aren't quite right. It's the difference between using a crochet mesh bag and a plastic bag to hold your groceries. The material is different, but the outcome is the same - you are able to carry your groceries home. As you say, the simulations you describe aren't producing the 'thing' they are simulating. But what AI/consciousness simulations are meant to produce is the same thing as what our hot, wet brain produces (supposedly).

Linda
(2019-04-10, 07:32 AM)Chris Wrote: I certainly don't expect you to agree, but I don't think those counter-arguments are particularly fair.

I think the simulation of human conversation by an artificial intelligence fundamentally differs from the examples of simulation you give, because (from the viewpoint that intelligence is produced by activity in the brain) it's a simulation produced by essentially similar means - by physical processes in a complicated system of interacting components. I know it's not how AI works, at least at the moment, but if you could manufacture an exact computational analogue of the human brain, and if it produced something that appeared indistinguishable from human intelligence, then I don't think it would be right to dismiss that as a mere simulation because it wasn't a real human brain.

And again, though intelligence isn't the same as consciousness, I don't think it's fair to compare them to redness and saltiness, because on the materialist hypothesis they are much more closely related than that. And the difficulty of confining the discussion to consciousness alone is that there can be no way of testing directly for the presence of consciousness in any mind but the mind of the observer.

It's too bad that in these discussions of consciousness science and the possibility of conscious AI, materialists stay strictly in the philosophical realm, carefully never examining what the empirical evidence has to say. For instance, there is the great amount of empirical evidence from veridical NDE accounts, evidence that shows clearly that human consciousness is ultimately independent of the physical brain (and therefore is not the mechanistic product of neurological brain processes). Of course, since it conflicts with their paradigm, the materialists complacently dismiss this as anecdotal rubbish, regardless of data quality and degree of investigation. To think again on this, all it takes is an unbiased perusal of NDE investigations and analyses such as The Self Does Not Die, by Titus Rivas, Dirven and Smit. I'm not going to hold my breath.
[-] The following 2 users Like nbtruthman's post:
  • Laird, Doug
