Analytical argument against physicalism


(2017-09-23, 02:15 PM)Laird Wrote: I finally got around to reading your article, Titus. Nice work, you guys did a great job of outlining an argument which seems watertight.

Incidentally, I first came across an argument against epiphenomenalism vaguely similar to yours, one which I found compelling, in a blog post to which another forum poster directed me a while back. The blog post is Consciousness (IV) — The Case of the Lunatic Fish, and here is the most germane part of it:


I wonder what your response to that is? Would you suggest that it is not as strong an argument as it could be because of the possibility of an innate conception of consciousness, as you do of certain arguments in your paper? Or do you see it as more closely approximating your strong argument?

I happen to know the person in question, and I wonder why he doesn't mention the article by Van Dongen and myself, because he knows it very well and we have repeatedly discussed both it and a paper published after Exit Epiphenomenalism. It almost seems as if he wants to present the argument without giving us any credit for it. The analytical argument is basically the same, albeit in a different formulation... Anyway, more important than credit is of course the demolition of physicalism. 

Our argument is incompatible with any kind of new mysterianism, by the way, because new mysterianism claims we should simply reject arguments against physicalism on the grounds that physicalism is simply true: although we could reach the conclusion that physicalism is true, we could never understand why it is true. Our argument goes against this, because the supposed total causal inefficacy of consciousness and its total dependency on the brain is not so much "very or too hard for human minds to understand" as simply untenable analytically.
(This post was last modified: 2017-10-04, 06:55 PM by Titus Rivas.)
The following 1 user Likes Titus Rivas's post:
  • Laird
(2017-09-23, 10:48 PM)nbtruthman Wrote: The paper also summarizes a number of other arguments that have been formulated against epiphenomenalism. One of those is the intuitive argument. 

It has been shown by several studies that the brain is permanently changed by various conscious practices, including meditation and the learning of certain tasks. These are consciously willed mental actions, which epiphenomenalism says have no causal efficacy on anything, especially not on the brain that it assumes originates them. But these empirical experiments demonstrate apparent causal action by willful actions of mind, where certain brain structures are observed to undergo slow, permanent changes apparently in response. It may be simplistic to ask, but why is this not considered another, though empirical, argument against epiphenomenalism? At the very least, it seems to be a more effective version of the intuitive argument described in the paper. The meditation/brain-change example is just one bit of a mountain of correlations observed between conscious actions of will and apparent physical responses in the brain, the body and other parts of the world, where those apparent responses are exactly what would be expected if consciousness actually had causal efficacy. These are correlations and don't prove actual causation, but I think this evidence constitutes a strong abductive argument, from the preponderance of evidence, against epiphenomenalism. Please excuse this departure from rigorous philosophical argumentation.

In fact, this is not the intuitive argument, but rather an empirical argument akin to the argument from the existence of psi. The evidence for the impact of meditation upon the brain would even be a case of intrasomatic psychokinesis. However, extrasomatic psychokinesis is even more important, because it can't be explained away as something ultimately brain-based. 

The epiphenomenalist would simply state that the impact of meditation on the brain equals the impact of specific non-conscious computational processes in the brain. The influence of non-conscious aspects of cognition, including beliefs and cognitive "introspection" (in the sense of monitoring one's own cognition, rather than one's subjective consciousness), is in principle fully compatible with epiphenomenalism. Cognition would be able to rewire the brain; consciousness would not. 

This can't refute the argument from extrasomatic psychokinesis, though, because the brain can't possibly possess such powers outside itself. Such powers must therefore be related to the non-physical mind or consciousness rather than to the brain.
(This post was last modified: 2017-10-04, 07:38 PM by Titus Rivas.)
The following 1 user Likes Titus Rivas's post:
  • Laird
(2017-09-23, 10:58 PM)Laird Wrote: Just a thought - and I'm not saying your argument doesn't merit attention in the paper - perhaps a counter-argument is that it is merely an assumption that the changes in the brain originate in the mind, because it might be that the brain changes itself in response to patterns of its own behaviour. I don't know whether this counter-argument is good enough to succeed, but thought it was worth mentioning.

Yes, it is good enough. We're talking about so-called embodied cognition here. Cognition (according to physicalism) would really amount to purely computational neurological processes, and would affect both other neurological processes and other bodily processes through neural pathways. It would have nothing to do with subjective, qualitative consciousness (other than in the sense of being its one and only source).
(This post was last modified: 2017-10-04, 07:17 PM by Titus Rivas.)
The following 1 user Likes Titus Rivas's post:
  • Laird
(2017-09-24, 10:07 AM)nbtruthman Wrote: A good point, but a counter-counter-argument might be that this would require that the brain change itself in just the right ways so as to generate apparently willed actions, like the continuance of meditation in the face of boredom, fatigue, etc. Why should it do this? Why would it bother, when the conscious will itself is just an epiphenomenon with no causal efficacy in the world? Why generate consciousness in the first place?

This issue is covered by the following passage of Exit Epiphenomenalism: 

Argument based on evolution theory
The evolutionary argument was already entertained by William James *18, and recently it has been defended once more by Karl Popper *19. According to William James, the properties of consciousness indicate its causal efficacy. First of all, consciousness probably becomes more complex and intense in the course of animal evolution; in this sense it is similar to a physical organ. Secondly, consciousness would be a kind of "selective agency", an instrument with which to make decisions. Thirdly, nervous systems, which become more complex at every stage of evolution, not only seem to adapt better and become more flexible each time, but also seem to become more unstable with every evolutionary step.
It is for this reason, following James, that consciousness would have originated: it makes choices, and thus prevents the brain from being lost in chaos. This is due, among other reasons, to the fact that only consciousness has something to choose; 'matter has no ideals to pursue'. Thus consciousness raises the probability of the maintenance of biological life. On this point, James reasons as follows: this plausible image offers a justification of the existence of consciousness, for if consciousness does not matter, why would it ever have originated during evolution? Karl Popper formulates it as follows: 'If natural selection is to account for the emergence of the World 2 of subjective or mental experiences, the theory must explain the manner in which the evolution of World 2 (and of World 3) systematically provides us with instruments for survival' *20.
Now, the problem with the evolutionary argument is that its proponents don't sufficiently realize that not all individual parts of an organism need to be functional from the point of view of evolution theory *21. A bear, for example, may have a thick and warm skin which is also very heavy. The warmth of the skin contributes to the bear's survival, but the weight does not; the weight is an inevitable epiphenomenon of the fact that the skin is thick and warm. It is thus quite conceivable that something inevitably originates as a consequence of a certain organization of genes without having any importance for evolution itself. Therefore, it is incorrect to maintain that epiphenomenalism would inevitably contradict (neo-)Darwinism. For consciousness to be conserved as a possible effect of evolution, it is not necessary that it have a positive effect, but only that it not affect the probability of survival and reproduction in a negative way. This is precisely what is the case according to epiphenomenalism: consciousness does not have any impact on anything, neither positive nor negative. As regards James's argument *22 that consciousness would be a "selective agent": this is explicitly attacked by Ray Jackendoff. In reality, Jackendoff holds, it is a subconscious, 'computational' process of concentration and selection of certain information that would in many cases effectively lead to experiences of conscious attention. The real selection and choice would thus take place at a subconscious level, based not on conscious objectives and motives, but on their hypothetical subconscious "substrates" (= the hypothetical physiological structures [or processes] underlying them).
(This post was last modified: 2017-10-04, 07:39 PM by Titus Rivas.)
The following 1 user Likes Titus Rivas's post:
  • Laird
(2017-09-25, 12:07 AM)Laird Wrote: Well, that raises an interesting question: can there be will without consciousness? Could, e.g., a non-sentient but advanced AI be said to possess a will?

Let's say for argument's sake that it could. Then, what if part of its programming was to alter its own programming to improve itself according to that "will"? Could this be a valid analogy to the possibility of the human brain rewiring itself independently of consciousness?

This is just a matter of definition. In the everyday sense, there can't be a will without consciousness, because only beings with consciousness can be said to want anything. 

The human brain rewiring itself through cognition would not require any conscious will to do so. It would simply be motivated to do so, proximately by cognitive reasons, and ultimately by the way it's built by evolution.
The following 1 user Likes Titus Rivas's post:
  • Laird
(2017-09-26, 07:08 AM)nbtruthman Wrote: That is an interesting question. I don't think so. The "will" inherently refers to an aware sentient mind having desire, purpose and intention to act, not the operation of programmed logic gates, no matter how many and how fast.

Definition (noun):
1. The mental faculty by which one deliberately chooses or decides upon a course of action: championed freedom of will against a doctrine of predetermination.
2.a. Diligent purposefulness; determination: an athlete with the will to win.
  b. Self-control; self-discipline: lacked the will to overcome the addiction.
3. A desire, purpose, or determination, especially of one in authority: It is the sovereign's will that the prisoner be spared.
4. Deliberate intention or wish: Let it be known that I took this course of action against my will.
5. Free discretion; inclination or pleasure: wandered about, guided only by will.
6. Bearing or attitude toward others; disposition: full of good will.

A non-sentient advanced AI might be designed to partially reprogram itself in response to failure, as a task-learning strategy to improve its performance. Many AI systems do something like this today. But they are nothing more than very sophisticated machines designed to learn by analyzing responses to actions and by manipulating and storing data, modifying themselves to continually improve their performance at something like autonomously driving a car, answering questions from Internet users, or diagnosing and devising treatments for medical conditions. There is no conscious willing going on inside these things; they are merely complex mechanisms. 
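
To illustrate the point, here is a minimal sketch in Python (purely hypothetical; the toy task, names and numbers are my own invention, not any real AI system). The program modifies its own parameter in response to failure, and its behaviour looks goal-driven, yet nothing in it is aware of anything:

import random

# Toy "self-improving agent": it repeatedly tweaks its own parameter and
# keeps whatever tweak reduces failure -- feedback plus an update rule,
# with no consciousness anywhere.

def evaluate(params, target=0.5):
    # Task performance: higher is better; peaks when params matches target.
    return -abs(params - target)

def self_improving_agent(steps=1000, step_size=0.05):
    params = random.random()            # the agent's modifiable "programming"
    best_score = evaluate(params)
    for _ in range(steps):
        candidate = params + random.uniform(-step_size, step_size)
        score = evaluate(candidate)
        if score > best_score:          # keep only changes that improve the task
            params, best_score = candidate, score
    return params, best_score

final_params, score = self_improving_agent()
print(f"params={final_params:.3f}, score={score:.3f}")

Everything "willed-looking" in such systems reduces to this pattern, scaled up.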

Of course, the machines keep getting more and more "intelligent" at accomplishing designed tasks. This ultimately gets into issues with the Turing test. If a machine finally convinces human experimenters that it behaves and communicates exactly as if it is conscious (and the test is sophisticated and thorough enough), then the materialists would claim it is conscious. The interactive dualists would mostly say it is still just a mechanism mimicking human behavior, albeit in a very cleverly designed way, and that there is still nothing really conscious, self-aware or sentient going on inside. This is ultimately because consciousness is in principle not reducible to matter and energy in motion - human-made machines can never be conscious, sentient beings. I am of that opinion. 

So a non-sentient AI can't even in principle have a will, and a sentient AI (having a will) is impossible in my opinion.
 
As to whether the brain can rewire itself independently of consciousness, I don't think so. It is evident that brain structures complexify and grow as a built-in automatic response to increase in use, and vice versa. That mechanism does seem to be part of its built-in "programming". But the increase or decrease in use is due either to actions of the conscious will or possibly to disease processes. In the case of stroke recovery, it looks as if the brain is rewiring itself mainly in response to conscious attempts by the patient to regain lost functionality. Maybe an expert could answer whether some stroke recovery still takes place in the brain of a victim who remains in a coma, with no consciousness.

I guess it comes full circle back to the conflict between materialist and non-materialist philosophies of mind.

We agree on what you're saying about the will and the way most people would use that word.

However, I don't agree with what you're saying about the brain rewiring itself. In fact, this is one of the main tenets of physicalism, and we can't just assume that it is false simply because it clashes with our intuition. Physicalists sincerely believe that anything cognitive is purely determined by neurological computation. They consider the effect of meditation etc. simply as the effect of some higher order of cognition, which would still remain fully embodied and brain-based (and therefore unaffected by consciousness). A classic in this area is mentioned in Exit Epiphenomenalism, namely Consciousness and the Computational Mind by Ray Jackendoff. Jackendoff accepts the reality and efficacy of any kind of higher-order cognition, but consciousness would never participate in cognition. It would be a non-efficacious, passive receiver of the results of non-conscious computation.
(This post was last modified: 2017-10-05, 05:22 AM by Titus Rivas.)
(2017-10-04, 03:06 AM)Laird Wrote: As promised in my last post, here is some more in response to counter-arguments against the argument in Titus's paper.

The Stanford Encyclopedia of Philosophy's article on epiphenomenalism refers to the sort of argument prosecuted by Titus in his paper as "self-stultification", and describes it in these terms (similar to those of Titus and his co-author):

"The most powerful reason for rejecting epiphenomenalism is the view that it is incompatible with knowledge of our own minds — and thus, incompatible with knowing that epiphenomenalism is true. (A variant has it that we cannot even succeed in referring to our own minds, if epiphenomenalism is true. See Bailey (2006) for this objection and Robinson (2012) for discussion.) If these destructive claims can be substantiated, then epiphenomenalists are, at the very least, caught in a practical contradiction, in which they must claim to know, or at least believe, a view which implies that they can have no reason to believe it".

It then purports to describe a counter-argument based on a figure supposedly describing an interactionist causal chain, where "M" is a mental event (which, by the self-stultification argument, cannot on an epiphenomenalist view be known, since it has no causal efficacy), where the Pn's are physical events (such as speech), and where, I think (but can't be sure), "C" indicates "directly causes":

                M
                ^ \
                |  C
                |   \
                P1   P2 --> P3 --> ....

               (Figure 2)

[Please forgive me for the messiness of the diagram - it seems difficult if not impossible to specify a monospaced font in this editor.]

This counter-argument (so far as I understand it) seems to be premised on P3 conveying (potentially inferential) knowledge of M. There is a bunch more to it than that, but I won't bother to go into it, because this premise seems to me to be both a necessary part of the counter-argument and a red herring. It is a red herring because the so-called self-stultification argument neither entails nor suggests the premise that knowledge of M is conveyed by or inferred from a physical event: it is premised on the idea that a subsequent mental event would constitute knowledge of a prior mental event!

In the article's own words, prior to supplying the above "interactionist" figure 2: "The argument that epiphenomenalism is self-stultifying in the way just described rests on the premise that knowledge of a mental event requires causation by that mental event". Yes, but not causation of a subsequent physical event - causation of a subsequent mental event!

Thus, the correct figure to be drawn of the failure of epiphenomenalist causation is this:
                M1     M2
                ^      ^
                |      |
                P1 --> P2 --> P3 --> ....

The self-stultification argument is that, because mental event M1 (some state of consciousness) has no causal efficacy upon mental event M2, mental event M2 cannot contain knowledge of M1; i.e., we could never become conscious (i.e. "know" - knowing being a mental state) that we are conscious. There is no capacity for self-reflection under epiphenomenalism. This argument is simply not addressed in this form by the supposed counter-arguments presented in the SEP article.
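
To make the logical structure explicit, here is my own schematic rendering of the argument (in LaTeX notation; the predicate names are mine, with M1 and M2 as above):

\begin{align*}
\text{(E)}\quad & \forall m\,\forall x\;\neg\mathrm{Causes}(m,x) && \text{(epiphenomenalism: no mental event causes anything)}\\
\text{(K)}\quad & \mathrm{Knows}(M_2,M_1)\rightarrow\mathrm{Causes}(M_1,M_2) && \text{(knowledge of a mental event requires causation by it)}\\
\text{(S)}\quad & \mathrm{Knows}(M_2,M_1) && \text{(we do know our own conscious states)}\\
\therefore\quad & \mathrm{Causes}(M_1,M_2)\land\neg\mathrm{Causes}(M_1,M_2) && \text{((E), (K) and (S) cannot all hold)}
\end{align*}

The epiphenomenalist must therefore give up (K) or (S), and the SEP's counter-arguments, as far as I can tell, instead quietly reinterpret (K) so that causation of a physical event will do.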

Now, I read one of the papers (Robinson, 1982b) referenced as "further explain[ing] and defend[ing]" this purported counterargument, and, if you want to, you can too (via the pirate site Sci-Hub). It is this one: Causation, Sensations and Knowledge by William S. Robinson. I read it carefully to see whether it says anything that would indicate that its counter-argument against the self-stultification argument was any more relevant than the one summarised in the SEP article, but, apparently, it is not. It, too, seems premised on the idea that knowledge of mental events is physical, or at least that knowledge of mental events is predicated on or inferred from physical events.

There is another supposed counter-argument by Chalmers presented, but it too seems (to me) to fail, and I don't have the patience to address why right now.

If anybody else is as interested as I was to dig into these counter-arguments, I would welcome your thoughts: have I called it right, or am I, myself, falling prey to red herrings?

As I see it, this particular argument is discussed in Exit Epiphenomenalism: 

"The argument from the knowledge of contents of consciousness
The crudest form of the argument mentioned above states the following: Some epiphenomenalists are talking about all kinds of contents of consciousness, such as, for example, the experience of colours or sounds, and they hold at the same time that none of these contents would have any impact on reality. How is it possible then that those very same epiphenomenalists talk about contents of consciousness?
This version of the argument, however, can still be refuted by epiphenomenalism. While talking about the contents of consciousness, one does not have to be talking, according to epiphenomenalism, about the contents themselves, but in fact only about the specific physiological substrates that constitute the supposed cause of any kind of subjective experience *35. A proposition such as 'I see the colour red' would thus be caused completely by the supposed physiological correlate of the conscious content concerned. That there would be such physiological substrates for any conscious content that exists is a basic principle of epiphenomenalism: all subjective experiences would be caused by cerebral structures or processes *36." 



The point is that this anti-epiphenomenalist argument deals with mind in a general sense, rather than with consciousness in particular. If mental contents are solely based on physical computation, then the refutation (of this argument against epiphenomenalism) does hold. But not if we concentrate on knowledge of the defining properties of consciousness itself (which must be based on consciousness itself). So apparent knowledge of most parts of our minds could in principle be compatible with physicalism, but specific knowledge of consciousness (as such) is not.
(This post was last modified: 2017-10-04, 07:44 PM by Titus Rivas.)
The following 1 user Likes Titus Rivas's post:
  • Laird
(2017-10-03, 01:07 PM)Laird Wrote: nbtruthman, I hope that, having procrastinated over a response, what I have to offer is acceptable.

First, though, I hope that you understand that I am deliberately playing critic: as best I understand, you and I see things very similarly (I, too, am an interactionist dualist who doesn't believe that AI can become sentient). I am, then, simply trying to test our assumptions and arguments to see how well they hold up. In that spirit...

You quote various definitions of "will", and whilst it seems that they generally imply consciousness, I am not sure that this (consciousness) is really necessary to or implicit in (a definition of) will. As neural networks, especially those which can alter their own programming, become more and more sophisticated, their behaviour, too, will become more and more human-like, and will appear more and more to be goal-driven in the same way that human behaviour is: can we, then, really discriminate between goal-driven human behaviour and the (as you would have it) "merely apparently" goal-driven behaviour of AI? Perhaps we might refer to it as "artificial will"?

Re neural plasticity, you write (emphases mine):


Again, deliberately playing critic (because "in real life" I see things very similarly to you), your argument seems rather more rhetorical than logical: i.e. it is based on assertions which themselves are unsupported. For example, you write that "the increase or decrease in use is due [...] to the actions of the conscious will", but this is exactly the point in contention (i.e. that the brain rewires itself under the influence of the conscious will as opposed to autonomously), so if you intend this as a premise in an argument, then it begs the question, and if not, then it is not even an argument but "merely" rhetoric.

I hope to post more in response to (arguments against) the paper linked to in the opening post sometime soon, but for the moment am running out of battery charge, and won't be able to recharge in the very immediate future, so please stay tuned and be patient!

"....can, then, we really discriminate between goal-driven human behaviour and the (as you would have it) "merely apparently" goal-driven behaviour of AI? Perhaps we might refer to it as "artificial will"?"


It seems to me that the answer is probably no, for an AI sufficiently developed to mimic human consciousness. But that behavioural discrimination does not really determine whether or not the AI is actually conscious and sentiently aware. That's the problem with the Turing test: it merely judges behavior - and that seems to be all we can do to try to decide the issue. Ultimately we can't even really be absolutely sure that other human beings are conscious, for that matter, except perhaps with the use of psi, as in telepathy. We instinctively rely on the intuition that other humans are conscious, unless perhaps we are philosophers enamored of a form of solipsism. 

If an AI system could be communicated with telepathically in such a test, that would presumably demonstrate true consciousness in the system. I think that an AI would always fail that test. Anyway, it looks as if the issue probably ultimately boils down to philosophical argumentation, unfortunately.

".....if you intend this (neural plasticity apparently in response to use) as a premise in an argument, then it begs the question..."

I agree it ultimately begs the question. I said "it looks as if". I just think that the weight of evidence makes a strong abductive argument for the causal action of conscious will on the brain. For instance, it is observed that stroke patients undergoing therapy involving intentional exercise of reduced or lost functions recover better and faster than patients not undergoing the therapy. But, in rigorous philosophical terms, it merely begs the question.
The following 1 user Likes nbtruthman's post:
  • Laird
(2017-10-04, 07:36 PM)Titus Rivas Wrote: As I see it, this particular argument is discussed in Exit Epiphenomenalism: 

"The argument from the knowledge of contents of consciousness
[snip]"

The point is that this anti-epiphenomenalist argument deals with mind in a general sense, rather than with consciousness in particular. If mental contents are solely based on physical computation, then the refutation (of this argument against epiphenomenalism) does hold. But not if we concentrate on knowledge of the defining properties of consciousness itself (which must be based on consciousness itself). So apparent knowledge of most parts of our minds could in principle be compatible with physicalism, but specific knowledge of consciousness (as such) is not.

Yes, I think you've hit the nail on the head - the supposed counter-argument in the SEP article is not addressing the strongest form of the argument, only the weaker form you identify above.
The following 1 user Likes Laird's post:
  • Titus Rivas
(2017-10-04, 09:27 PM)nbtruthman Wrote: If an AI system could be communicated with telepathically in such a test, that would presumably demonstrate true consciousness in the system. I think that an AI would always fail that test.

Agreed.

(2017-10-04, 09:27 PM)nbtruthman Wrote: I just think that the weight of evidence makes a strong abductive argument for the causal action of conscious will on the brain. For instance, it is observed that stroke patients undergoing therapy involving intentional exercise of reduced or lost functions recover better and faster than patients not undergoing the therapy. But, in rigorous philosophical terms, it merely begs the question.

Agreed again. (Critic hat now off!)
The following 1 user Likes Laird's post:
  • Titus Rivas
