Maybe AI just needs a bigger truck


(2019-03-29, 01:13 AM)Mediochre Wrote: Give a real rebuttal to my points with logic and evidence instead of asking questions until I don't have an answer. I'm not interested in your usual poor man's Socratic nonsense. If you think I'm wrong, justify it.

You said a program that can alter its source is conscious; I asked what minimum kind of source alteration is enough to qualify.
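[For concreteness, here is a deliberately minimal sketch, in Python, of a program that "alters its own source" in the shallowest possible way, by appending a comment to its own file each time it runs. Everything in it is hypothetical and purely illustrative; it only makes the question concrete: would alteration this trivial qualify, and if not, what more is required?]

Code:
import datetime

def append_to_own_source():
    # Append a timestamped comment to this very file -- the most
    # trivial possible "source alteration".
    with open(__file__, "a") as f:
        f.write("\n# ran at " + datetime.datetime.now().isoformat())

if __name__ == "__main__":
    append_to_own_source()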

If you cannot explain how the actual algorithms and data storage work, why would anyone be convinced that, because an AI seems to you like it is learning, it actually possesses consciousness? [If the actual physical computer is not conscious when running some programs but conscious when running others, I don't get what it is about the second class of programs that makes the physical computer conscious.]

Besides you saying the program seems conscious to you, I am not sure what other points you made [that we disagree on]. You said humans and programs learn in a similar way, so I asked about the program's learning process. [We both agree synthetic reproductions of the as-yet-unknown necessary structures of the human body would produce a conscious entity, yes?]
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2019-03-29, 02:38 AM)Sciborg_S_Patel Wrote: You said a program that can alter its source is conscious; I asked what minimum kind of source alteration is enough to qualify.

What's the minimum change in human behaviour to constitute consciousness?

Quote:If you cannot explain how the actual algorithms and data storage work, why would anyone be convinced that, because an AI seems to you like it is learning, it actually possesses consciousness? [If the actual physical computer is not conscious when running some programs but conscious when running others, I don't get what it is about the second class of programs that makes the physical computer conscious.]

If you can't explain how free will choices or non-local memory works in human consciousness, why would anyone be convinced that, because a human seems to you like it's learning, it actually possesses consciousness? [If the actual collection of molecules is not conscious during some chemical reactions but conscious during others, I don't get what it is about the second set of chemical reactions that makes the collection of molecules conscious.]
"The cure for bad information is more information."
That's just poor form, Mediochre.

You jumped into this thread with a very bold assertion regarding video games being conscious, got challenged to justify/clarify what you meant, and have subsequently either attacked posters, accusing them of posing ridiculous questions, or posed your own ridiculous questions.

It's okay to make a mistake and get called out for it. Just be a man about it instead of resorting to adolescent defensive tactics.
(2019-03-29, 01:07 PM)Silence Wrote: That's just poor form, Mediochre.

You jumped into this thread with a very bold assertion regarding video games being conscious, got challenged to justify/clarify what you meant, and have subsequently either attacked posters, accusing them of posing ridiculous questions, or posed your own ridiculous questions.

It's okay to make a mistake and get called out for it. Just be a man about it instead of resorting to adolescent defensive tactics.

No, I just found it SO ironic that people have asked Sci (and proponents of those ideas in general) multiple times to explain the specific mechanics behind their beliefs like free will and non-local memory/consciousness, and instead of doing that they just fall back on philosophical bullshit wordplay: facetiously asking "what does X mean" and rewriting the argument they're responding to into something vague and totally different from the actual question they were asked. But now, magically, when it's something they're skeptical about, they want to know the mechanics, they want the specifics. That's a hilarious level of dishonesty. They did exactly what I predicted they would above: use the same arguments a skeptic would for consciousness, free will and whatnot, only because it's a machine. That was the point of me spitting their questions back at them in skeptic form, to demonstrate that that's what they're doing.

Especially when this is coming from someone who claims to have studied programming. To have to ask what it means to have access to the source code, to want to know how the memory and database work: sorry, but I'm not humoring this ridiculousness. During the same time I posted updates both to my writeup on how I learned to poltergeist and to a new thread on a new set of experiments I'm putting together. With those two posts I've contributed more to this forum's goal of science-focused psi discussion than every philosophical post combined. I think I'll focus my time and energy on that.
"The cure for bad information is more information."
(2019-03-29, 02:34 PM)Mediochre Wrote: With those two posts I've contributed more to this forum's goal of science-focused psi discussion than every philosophical post combined.

No, you've just woven together storytelling. As a reader, all you've offered me is an appeal to your own authority. There's nothing to sink one's teeth into and you know that. It doesn't mean what you tell us is false, but it's no more "scientific" than any other retelling of strange phenomenological events.
This is a pretty simple concept that requires no philosophy: people are products of their environment. Their own memories and thoughts are also part of that environment thanks to neural plasticity. Vast amounts of empirical evidence support this. If a program has access to its source code it has the equivalent of neural plasticity, if it has a database of any type it has past experiences, and naturally it also has an environment that it accesses through its physical sensors, like a human. Now this program has all the same things as a human, which is something you de facto believe is conscious. Therefore the program must also be conscious. If this isn't the case, justify why. Not in some philosophical bullshit way where you start navel-gazing about "but what does processing mean? What is information?" but in a very practical, logical, empirical way. Because if you're already going to argue that equivalent physical architecture is enough to get equivalency, you've already ceded the argument to neuroscience, since you're effectively just saying it's all in the brain and brain structures. So come on, justify it. What's the difference?
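[Taken literally, the three ingredients named above can be sketched in a few lines of Python. Everything below is hypothetical and purely illustrative: the "sensor" is stood in for by user input and the database is a throwaway SQLite file. The sketch only pins down the structure being claimed so it can be interrogated.]

Code:
import sqlite3

def read_sensor():
    # Stand-in for "physical sensors": just reads user input.
    return input("sensor reading: ")

def main():
    # (2) a database of past experiences
    db = sqlite3.connect("experiences.db")
    db.execute("CREATE TABLE IF NOT EXISTS experiences (event TEXT)")

    # (1) access to its own source code
    with open(__file__) as f:
        own_source = f.read()

    # (3) an environment, accessed through a sensor
    event = read_sensor()
    db.execute("INSERT INTO experiences VALUES (?)", (event,))
    db.commit()

    count = db.execute("SELECT COUNT(*) FROM experiences").fetchone()[0]
    print(len(own_source), "bytes of source,", count, "stored experiences")

if __name__ == "__main__":
    main()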
"The cure for bad information is more information."
(2019-03-29, 03:32 AM)Mediochre Wrote: What's the minimum change in human behaviour to constitute consciousness?


If you can't explain how free will choices or non-local memory works in human consciousness, why would anyone be convinced that, because a human seems to you like it's learning, it actually possesses consciousness? [If the actual collection of molecules is not conscious during some chemical reactions but conscious during others, I don't get what it is about the second set of chemical reactions that makes the collection of molecules conscious.]

I thought you wanted to get away from the "poor man's Socratic nonsense"? If so, should we not shift away from answering questions with questions?

As I said before, I can wonder about whether my fellow humans are conscious while still being confident that a Turing Machine (which could be instantiated by a group of humans) is not conscious, any more than a book or abacus is conscious.
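[For readers unfamiliar with the reference: a Turing Machine is nothing more than a transition table applied to a tape, which is why it could just as well be executed by people passing slips of paper. The sketch below is a generic, hypothetical illustration; the particular table just flips bits and halts.]

Code:
def run(table, tape, state="start", head=0, max_steps=1000):
    # A complete Turing machine: a transition table, a tape, a head
    # position and a current state. Nothing else.
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example table: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip, "10110"))  # prints 01001_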

Regarding the second paragraph's first sentence, I am basing it on structural similarity to my own being that possesses consciousness, which is why I said a synthetic entity that produces the empirically verified necessary/sufficient structures would be a conscious entity.

Regarding the italicized second sentence: are you making an argument for panpsychism? That all substances, or at least processes, are conscious, including the running of Turing Machines? Because it seemed you were saying it takes a specific kind of program to make a non-conscious physical instantiation of a Turing Machine conscious.

Perhaps I've misunderstood, but my reading of your argument is that a certain kind of program (specifically deep learning, or possibly any class of machine learning program) can make a computer, and thus a physical instantiation of a Turing Machine, conscious. However, this same computer is not conscious before it runs the program, and possibly not conscious when it runs other programs.

If that is not the argument you are making, then I apologize, as that is the argument I think is wrong and was arguing against.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


This post has been deleted.
(2019-03-29, 04:13 PM)Silence Wrote: No, you've just woven together storytelling. As a reader, all you've offered me is an appeal to your own authority. There's nothing to sink one's teeth into and you know that. It doesn't mean what you tell us is false, but it's no more "scientific" than any other retelling of strange phenomenological events.


You could always try it yourself and record your own results, ask me questions, etc., the same as any other scientific endeavour. I mean, unless every research paper ever written is also just storytelling to you because you can't verify any of it just from reading it. Sure, mine's a little more informal, but the basis is the same.
"The cure for bad information is more information."
(2019-03-29, 02:34 PM)Mediochre Wrote: No, I just found it SO ironic that people have asked Sci (and proponents of those ideas in general) multiple times to explain the specific mechanics behind their beliefs like free will and non-local memory/consciousness, and instead of doing that they just fall back on philosophical bullshit wordplay: facetiously asking "what does X mean" and rewriting the argument they're responding to into something vague and totally different from the actual question they were asked. But now, magically, when it's something they're skeptical about, they want to know the mechanics, they want the specifics. That's a hilarious level of dishonesty. They did exactly what I predicted they would above: use the same arguments a skeptic would for consciousness, free will and whatnot, only because it's a machine. That was the point of me spitting their questions back at them in skeptic form, to demonstrate that that's what they're doing.

The free will thread is 70+ pages; I think Laird and I put in a decent amount of discussion regarding causation, and there's a thread to discuss causation further. It can be unsatisfying, but it certainly wasn't vague.

Quote:Especially when this is coming from someone who claims to have studied programming. To have to ask what it means to have access to the source code, to want to know how the memory and database work: sorry, but I'm not humoring this ridiculousness.

Not how they work in general, but how they specifically work for the programs you (if I understand your argument) would say have made the computer a conscious entity.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


