Top 10 AI hype stories of 2018


(2019-03-31, 06:18 PM)Sciborg_S_Patel Wrote: On this point the following is a good starting place:

Is the Brain a Digital Computer?

Lanier: You Can't Argue with a Zombie


Searle's article lays out the issues quite clearly - syntax is not intrinsic to physics, and anything can be used as a computational device.

If people thought of a Babbage difference engine built out of wood, there would be no belief that AI could be conscious.

However, the complexity of an iPhone and of cloud technology leads many to believe that somehow a computational device will become conscious. Adding complexity does not create qualia or agency.
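A minimal sketch of that point in code rather than wood: all a difference engine does to tabulate a polynomial is repeated addition of stored numbers, which is why it could be built from brass gears or, in principle, wooden ones. The polynomial and starting differences below are made up purely for illustration.

Code:
# Method of finite differences - the only operation needed is addition.
# Example: tabulate f(x) = x^2 + x + 1 for x = 0, 1, 2, ...
def tabulate(initial_differences, steps):
    # initial_differences[0] is f(0); the rest are the finite differences at x = 0
    diffs = list(initial_differences)
    values = []
    for _ in range(steps):
        values.append(diffs[0])
        # add each lower difference into the one above it - no multiplication anywhere
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# f(0) = 1, first difference = 2, second difference = 2 (constant for a quadratic)
print(tabulate([1, 2, 2], 6))   # [1, 3, 7, 13, 21, 31]

Nothing in that procedure cares whether the additions are carried out by brass gears, wooden cogs or a clerk with a pencil.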
(This post was last modified: 2019-04-08, 03:53 PM by North.)
The following 5 users Like North's post:
  • letseat, Typoz, nbtruthman, Sciborg_S_Patel, Doug
(2019-04-08, 03:53 PM)North Wrote: Searle's article lays out the issues quite clearly - syntax is not intrinsic to physics, and anything can be used as a computational device.

If people thought of a Babbage difference engine built out of wood, there would be no belief that AI could be conscious.

However, the complexity of an iPhone and of cloud technology leads many to believe that somehow a computational device will become conscious. Adding complexity does not create qualia or agency.

I'm mostly inclined to agree; it is hard to see consciousness arising in the Tinkertoy Computer, for example.

The potential outlier is for those who accept that materialism isn't true. What is it that makes conscious entities conscious? And is there any synthetic life that one might accept as conscious?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


The following 1 user Likes Sciborg_S_Patel's post:
  • Laird
(2019-04-08, 03:53 PM)North Wrote: Searle's article lays out the issues quite clearly - syntax is not intrinsic to physics, and anything can be used as a computational device.

If people thought of a Babbage difference engine built out of wood, there would be no belief that AI could be conscious.

However, the complexity of an iPhone and of cloud technology leads many to believe that somehow a computational device will become conscious. Adding complexity does not create qualia or agency.

That pretty much summarises my view too.

When I worked as a programmer, we used to test and debug software by dry-running the program. That is to say, we would read through the program one instruction at a time, recording the contents of memory on paper, and following any jumps or loops required by the program code.

So not only can a computer be recreated using wooden cogs and levers, it doesn't even require any mechanical or electronic devices at all. The computer hardware is entirely superfluous. The process itself is an abstraction, in the same way that mathematics is an abstraction, not dependent on any material basis.
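For anyone who hasn't done one, here is a rough sketch of what such a dry run records, using a made-up toy instruction set (the program and its mnemonics are invented for illustration, not taken from any real machine):

Code:
# Dry-running a tiny made-up program: execute one instruction at a time,
# write down the memory contents after each step, and follow any jumps.
# The program sums 1 + 2 + 3 into the cell "total".
program = [
    ("SET", "i", 1),        # 0
    ("SET", "total", 0),    # 1
    ("ADD", "total", "i"),  # 2: total = total + i
    ("INC", "i"),           # 3: i = i + 1
    ("JLE", "i", 3, 2),     # 4: if i <= 3, jump back to instruction 2
    ("HALT",),              # 5
]

memory = {}
pc = 0                                   # program counter
while program[pc][0] != "HALT":
    op, *args = program[pc]
    if op == "SET":
        memory[args[0]] = args[1]
        pc += 1
    elif op == "ADD":
        memory[args[0]] += memory[args[1]]
        pc += 1
    elif op == "INC":
        memory[args[0]] += 1
        pc += 1
    elif op == "JLE":
        pc = args[2] if memory[args[0]] <= args[1] else pc + 1
    print(pc, memory)                    # the columns you would keep on paper

# finishes with memory == {'i': 4, 'total': 6}

The loop only prints the same table a person produces by hand; the trace is identical whether a CPU, a wooden machine or a programmer with a pencil works through it.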
The following 4 users Like Typoz's post:
  • Laird, Sciborg_S_Patel, Oleo, Doug
(2019-03-31, 06:18 PM)Sciborg_S_Patel Wrote: On this point the following is a good starting place:

Is the Brain a Digital Computer?

Lanier: You Can't Argue with a Zombie

John Searle's philosophical position in his paper Is the Brain a Digital Computer? seems to be firmly in the materialist camp and isn't ammunition for the dualists and idealists.

Searle's paper strictly addresses and argues against the notion that the mind literally is a computer program (what he calls "Strong AI") and that the brain literally is a digital computer ("Cognitivism"), not the notion that the mind is really the process of executing the mind's neurological program, mechanized by the brain's neurons, or perhaps the same process executed by advanced AI computer programs. Searle apparently accepts the latter at least in the weak sense that such processes can be simulated computationally: "Just to keep the terminology straight, I call the view that all there is to having a mind is having a program, Strong AI, the view that brain processes (and mental processes) can be simulated computationally, Weak AI, and the view that the brain is a digital computer, Cognitivism."

In his Summary he boils it down to one statement: "In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the (philosophical) story."
(2019-04-08, 05:31 PM)Sciborg_S_Patel Wrote: I'm mostly inclined to agree; it is hard to see consciousness arising in the Tinkertoy Computer, for example.

The potential outlier is for those who accept that materialism isn't true. What is it that makes conscious entities conscious? And is there any synthetic life that one might accept as conscious?

I think the reason it's hard to see consciousness arising in the Tinkertoy Computer is essentially the same reason it's hard to imagine the Tinkertoy Computer passing a Turing test, though.

How do people feel about the ability of more sophisticated computers to pass a Turing test? If a computer could converse in a way indistinguishable from the way humans converse, would it still seem as unbelievable as it does now that it could be conscious?
(2019-04-09, 04:24 PM)nbtruthman Wrote: John Searle's philosophical position in his paper Is the Brain a Digital Computer? seems to be firmly in the materialist camp and isn't ammunition for the dualists and idealists.

Searle's paper strictly addresses and argues against the notion that the mind literally is a computer program (what he calls "Strong AI") and that the brain literally is a digital computer ("Cognitivism"), not the notion that the mind is really the process of executing the mind's neurological program, mechanized by the brain's neurons, or perhaps the same process executed by advanced AI computer programs. Searle apparently accepts the latter at least in the weak sense that such processes can be simulated computationally: "Just to keep the terminology straight, I call the view that all there is to having a mind is having a program, Strong AI, the view that brain processes (and mental processes) can be simulated computationally, Weak AI, and the view that the brain is a digital computer, Cognitivism."

In his Summary he boils it down to one statement: "In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the (philosophical) story."

Seems to me he's saying that the Strong AI theory should be rejected on materialist grounds. I think he's right about that.

I don't know if Dualists would consider programs to be conscious - why that particular machine and not others? Idealists and others who follow a One -> Many ontology have a more difficult position, as it seems possible there could be an argument that goes from particular structures to conscious entities.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2019-04-09, 06:12 PM)Chris Wrote: I think the reason it's hard to see consciousness arising in the Tinkertoy Computer is essentially the same reason it's hard to imagine the Tinkertoy Computer passing a Turing test, though.

How do people feel about the ability of more sophisticated computers to pass a Turing test? If a computer could converse in a way indistinguishable from the way humans converse, would it still seem as unbelievable as it does now that it could be conscious?

But a Tinkertoy Computer can become as sophisticated as you need it to be to perform computations. I believe it could play Tic Tac Toe in its original design, for example. And there are other examples one can bring to bear, even small battery-powered robots that carry out the internal operations with no continuous flow of electricity and few to no connected components.

Such designs may be lacking compared to a quad-core from Intel, but if speed/accuracy is the marker, then should we regard people who are "slow" as not possessing consciousness?
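As a rough illustration of how little machinery such a game needs (a deliberately simplified rule, not the actual Tinkertoy machine's strategy): a working tic-tac-toe move chooser can be nothing more than "win if you can, block if you must, otherwise take the first free square", exactly the kind of fixed rule that spools and string can encode.

Code:
# A deliberately simplified tic-tac-toe move rule: win if possible, block if
# necessary, otherwise take the first free square. Cells 0-8 hold 'X', 'O' or ' '.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def choose_move(board, me, opponent):
    for player in (me, opponent):        # first pass: try to win; second: block
        for line in LINES:
            marks = [board[i] for i in line]
            if marks.count(player) == 2 and marks.count(" ") == 1:
                return line[marks.index(" ")]
    return board.index(" ")              # otherwise the first empty square

board = ["X", "O", "X",
         " ", "O", " ",
         " ", " ", " "]
print(choose_move(board, "X", "O"))      # 7 - blocks O's middle column

The rule is pure symbol manipulation; whether it runs in wooden spools or in silicon changes nothing about what is computed.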

If symbols/syntax are reducible to something like particles, I don't think you can have a good reason for programs being conscious (though this would also mean human consciousness is subject to eliminativism). If one grants abstract entities some independent reality, as in Platonism or Informational Realism, it seems to me one might argue for programs having some mentality, though whether this mind is aware of what it is doing or is more like a well-trained animal would still be in question.

Another possibility is some kind of Bottom Up Panpsychism where a program somehow allows the computer to answer the Combination Problem...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2019-04-09, 06:12 PM)Chris Wrote: I think the reason it's hard to see consciousness arising in the Tinkertoy Computer is essentially the same reason it's hard to imagine the Tinkertoy Computer passing a Turing test, though.

How do people feel about the ability of more sophisticated computers to pass a Turing test? If a computer could converse in a way indistinguishable from the way humans converse, would it still seem as unbelievable as it does now that it could be conscious?

I think the Turing test, in its original form, tells us nothing about consciousness. What it does test for is the ability to deceive.
The following 3 users Like Typoz's post:
  • nbtruthman, Sciborg_S_Patel, Doug
(2019-04-09, 06:42 PM)Sciborg_S_Patel Wrote: But a Tinkertoy Computer can become as sophisticated as you need it to be to perform computations. ...

That was my point really. It's difficult to believe it could be conscious just because it looks so simple. But in principle it can be as complicated as you like - as complicated as the human brain, for example.
The following 1 user Likes Guest's post:
  • Sciborg_S_Patel
(2019-04-09, 06:45 PM)Typoz Wrote: I think the Turing test, in its original form, tells us nothing about consciousness. What it does test for is the ability to deceive.

It just seems a bit strange that there could be computers able to simulate human conversation so well that people can't tell the difference between them and the real thing, but that the humans would be assumed to be conscious, and the computers not.
The following 1 user Likes Guest's post:
  • Sciborg_S_Patel
