The Plant Consciousness Wars

122 Replies, 13841 Views

(2019-07-08, 11:19 AM)Vy Chấn Hải Wrote: Does that mean that every time I go vegan because I feel bad when I eat too much meat, I'm still a monster? :(

Not if you only eat fruit.
[-] The following 1 user Likes Laird's post:
  • Valmar
(2019-07-08, 09:22 AM)Laird Wrote: I think the two situations are distinct though.

Plants are living beings whose existence is part of the natural world, just as we are, and so it is reasonable to expect that similarities in behaviour between plants and us are due to similarities in essential nature, including (the potential for a) sentient nature.

Artificial intelligence, on the other hand, can be reduced to calculation in electronic hardware, and we have no reason to expect that calculation in electronic hardware alone can be associated with consciousness, in particular with the sort of sensations (of pleasure and suffering) that most qualify a being for ethical consideration. Whether it can remains an open question.

I agree with you regarding artificial intelligence. It always seems to me that somewhere along the line people are being misled, or choosing to be. Humans like to use their imagination, to enjoy falling into an illusion. Think, for example, of going to the cinema and watching a movie. When the illusion works for us, we can become completely immersed in something, inhabiting another world for a while. But we all know and understand that it isn't real. The same applies to carrying out a set of calculations according to some algorithm, especially bearing in mind that this needn't be done electronically: the calculations could be done with gear-wheels and levers, or by a human, or a team of humans, using pen and paper. There seems no reason to think that this activity results in anything such as awareness suddenly appearing. Yet so much nonsense is said and written about AI, as though somehow we had lost sight of the illusion and started to believe in a fantasy.
[-] The following 7 users Like Typoz's post:
  • North, Kamarling, nbtruthman, tim, Sciborg_S_Patel, Laird, Valmar
Perhaps I've provoked an excursion off the topic by raising the subject of AI. But just to respond very briefly: regarding AI being different because it's artificial, I agree to some extent, but I think it depends how the AI works. If it were essentially a huge tree of instructions about how to act in different situations, designed to mimic human action, I would be sceptical about the possibility of consciousness. On the other hand, if it were essentially a huge electronic analogue of the human brain that spontaneously learned how to behave, then if it ended up acting externally like a human I'd be inclined to think that what was going on inside was human-like too.

Regarding the argument about gear-wheels and levers, I do think that is a red herring. I don't think it's really any different from showing people a picture of a neuron and saying it's ridiculous that it could produce consciousness. I think we have to consider the problem in terms of information processing on the much larger scale of the whole organism, and not about the small-scale physical components, be they neurons, gear-wheels or even little men in huts with abacuses.
At first the common-sense opinion of plant biologists seemed reasonable to me: because there is no evidence for structures such as neurons, synapses or a brain in plants, plants just don't seem to have the necessary equipment to manifest consciousness.

However, intelligence (or intelligent behavior) appears to be present in a continuous spectrum: from essentially zero to very rudimentary in the most primitive organisms (bacteria, single eukaryotic cells), to rudimentary intelligence (amoebas, slime mold colonies), to animals of very small intelligence (simple metazoans like nematodes, jellyfish and sponges), and gradually greater and greater with arthropods like lobsters, then fish, then reptiles, then mammals. Then there are anomalies like the invertebrate cephalopods (octopuses), which apparently have intelligence as advanced as the great apes and dolphins.

Consciousness may also be present in a spectrum from very rudimentary to advanced, corresponding to the degree of intelligent behavior and, in complex metazoan animals, very roughly to the complexity and size of the brain. Of course we don't really know, since there is no real test that can actually prove the presence of consciousness. There is at least the mirror test, used with relatively advanced animals as a measure of sentient self-awareness: it gauges self-awareness by determining whether an animal can recognize its own reflection in a mirror as an image of itself.

The issue here is that plants like trees have also been demonstrated to exhibit intelligent behavior of a sort: responses and actions fitted to their relative immobility and organismic form. So maybe they also have rudimentary consciousness of some kind, whether or not we want to admit it, considering the implications. But there doesn't seem to be any test for it whatsoever, not even the mirror test: plants don't have anything equivalent to image-forming eyes or the neurological system necessary to process images, and can't readily respond with behavior, so it is impossible even to attempt it.

There is a fascinating article on the science of this controversy in Trends in Plant Science. One idea on how to get around the lack in plants of any of the neurological structures known to be necessary for consciousness in animals is "swarm intelligence", but it is attacked as having a lot of problems.

Quote:"The term ‘swarm intelligence’ has also been applied to plants based on the supposed similarities between individual plant cells and social insects. According to this idea, plant behavior emerges from the coordination of individual cells and tissues, analogous to the problem-solving that emerges from the communication and cooperation between the members of a bee hive. However, this analogy has several problems. Bees are free to move about inside and outside the hive, while plant cells are permanently attached to each other. Moreover, the interactions between plant cells and tissues occurs with little or no genetic conflict, whereas individual bee behavior in a hive involves a great deal of genetic conflict due to the fact that the queen, in the course of several mating flights, collects semen from multiple males from other hives, giving rise to daughters with diverse genetic backgrounds."
(This post was last modified: 2019-07-09, 04:36 PM by nbtruthman.)
[-] The following 3 users Like nbtruthman's post:
  • Kamarling, Typoz, Sciborg_S_Patel
(2019-07-09, 08:48 AM)Chris Wrote: Perhaps I've provoked an excursion off the topic by raising the subject of AI. But just to respond very briefly: regarding AI being different because it's artificial, I agree to some extent, but I think it depends how the AI works. If it were essentially a huge tree of instructions about how to act in different situations, designed to mimic human action, I would be sceptical about the possibility of consciousness. On the other hand, if it were essentially a huge electronic analogue of the human brain that spontaneously learned how to behave, then if it ended up acting externally like a human I'd be inclined to think that what was going on inside was human-like too.

Regarding the argument about gear-wheels and levers, I do think that is a red herring. I don't think it's really any different from showing people a picture of a neuron and saying it's ridiculous that it could produce consciousness. I think we have to consider the problem in terms of information processing on the much larger scale of the whole organism, and not about the small-scale physical components, be they neurons, gear-wheels or even little men in huts with abacuses.

Computationalism is the idea that all thinking is, at root, the running of computer programs, biologically implemented by the human brain in all its vast complexity as a massively parallel data processor.

In the 1930s the mathematician Kurt Gödel proved that provability in a purely formal system cannot capture all the truths expressible in that system using its own rules and proof procedures. This result is general, or we might say scalable: it applies to any consistent system that includes basic arithmetic (counting integers: one, two, three, and so on), all the way up to systems of fantastic complexity, ad infinitum.
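For concreteness, the core of the result can be sketched in standard logic notation. This is only a schematic summary, not the proof itself: for any consistent, effectively axiomatized formal system F strong enough for basic arithmetic, the diagonal lemma yields a sentence G_F that asserts its own unprovability:

```latex
% Diagonal lemma: G_F asserts its own unprovability in F
F \vdash \; G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)

% First incompleteness theorem:
% if F is consistent, F cannot prove G_F
\text{If } F \text{ is consistent, then } F \nvdash G_F
\text{ (and, if } F \text{ is } \omega\text{-consistent, } F \nvdash \neg G_F).
```

Since F cannot prove G_F, the sentence G_F, which says exactly that, is true; so F leaves a true arithmetical statement unprovable.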

From The Mind Can't be Just a Computer:        

Quote:"The question for computationalists and their critics is: what does Gödel’s strange proof say about systems that are supposed to undergird the human mind itself? It’s fantastically complex, sure. But Gödel’s result assures us that it, too, is subject to incompleteness. This leaves the mechanist in a bind: if in fact, the system for the human mind is subject to incompleteness, it follows that there is some perfectly formal and valid statement in mathematical logic that is completely impervious to all attempts at proving it. But if we are computers, this means that our insights into mathematics must stop at this statement. We are blind to it because as computers ourselves, we must use only our proof tools, with no access to our “truth” tools. Strange. As believers in computationalism, we would be ever so strangely incapable of doing our jobs as mathematicians.

Some statement — call it “G” in keeping with Penrose’s convention — is true but not provable for our own minds. We can’t prove it. But as mathematicians, we should still be able to see that it’s true. (Comment: and in fact mathematicians do see that it is true). Truth, in other words, ought still to be available to the human mind, even as the tools of a strict logic are inadequate. That’s mathematical insight, like the kind Gödel himself most surely used to prove Incompleteness.

Ergo, we must not be completely computational at root. The mind must have some powers of perception or insight outside the scope of purely formal methods.
....................
But a weaker thesis, still inspired by Gödel’s groundbreaking result, really provides evidentiary support for the common sense conclusion that our insights, discoveries, and sheer guesses aren’t disguised programs. On a Weak Gödel Thesis, we see that the philosophical or metaphysical claim that the human mind is a computer accounts poorly for obvious observations about thinking. Insight becomes programmed. But it is the very nature of the mind to sit outside such determinism."

Therefore it looks as if our minds and consciousness must not be reducible to massively parallel data processing in huge neural nets consisting of neurons, nerve fibers and synapses. Of course there is also a great body of empirical evidence for this in areas like veridical NDEs, NDEs occurring while the brain is not functioning, and veridical reincarnation research. No matter how closely an advanced AI system mimics human behavior, it does not appear that it will be conscious.
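The argument quoted above (essentially the Lucas–Penrose argument) can be laid out as a schematic reductio. Again, this is a sketch of the reasoning, not a formal proof:

```latex
% Mechanist thesis: the mind M is equivalent to some formal system F
M \equiv F
% By the first incompleteness theorem, F has a Gödel sentence it cannot prove:
F \text{ consistent} \implies F \nvdash G_F
% But mathematicians (hence M) can see that G_F is true:
M \text{ recognizes } G_F \text{ as true}
% So M accepts something F cannot prove, contradicting the mechanist thesis:
\therefore M \not\equiv F
```

It is worth noting where critics attack this: the step "M recognizes G_F as true" presupposes that M can know F is consistent, and Gödel's second theorem says F itself cannot establish its own consistency.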
(This post was last modified: 2019-07-09, 05:45 PM by nbtruthman.)
[-] The following 3 users Like nbtruthman's post:
  • Valmar, Kamarling, Typoz
(2019-07-09, 05:33 PM)nbtruthman Wrote: Computationalism is the idea that all thinking is, at root, the running of computer programs, biologically implemented by the human brain in all its vast complexity as a massively parallel data processor.

In the 1930s the mathematician Kurt Gödel proved that provability in a purely formal system cannot capture all the truths expressible in that system using its own rules and proof procedures. This result is general, or we might say scalable: it applies to any consistent system that includes basic arithmetic (counting integers: one, two, three, and so on), all the way up to systems of fantastic complexity, ad infinitum.

From The Mind Can't be Just a Computer:        


Therefore it looks as if our minds and consciousness must not be reducible to massively parallel data processing in huge neural nets consisting of neurons, nerve fibers and synapses. Of course there is also a great body of empirical evidence for this in areas like veridical NDEs, NDEs occurring while the brain is not functioning, and veridical reincarnation research. No matter how closely an advanced AI system mimics human behavior, it does not appear that it will be conscious.

I've really never got the point of that argument. As if mathematicians somehow operated by intuition rather than proof, and as if there was some magical way of proving their intuition was correct, apart from proof. Maybe I'm missing something, but I don't see it.
(2019-07-09, 05:53 PM)Chris Wrote: I've really never got the point of that argument. As if mathematicians somehow operated by intuition rather than proof, and as if there was some magical way of proving their intuition was correct, apart from proof. Maybe I'm missing something, but I don't see it.

But the proof itself is born from and rests on our intuitive sense of Reason?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
(2019-07-09, 05:53 PM)Chris Wrote: I've really never got the point of that argument.

I think the argument that Titus Rivas put forward in his paper which he linked to in the opening post of his Analytical argument against physicalism thread is more persuasive, because it seems to me that the consciousness of the AI you describe here...

(2019-07-09, 08:48 AM)Chris Wrote: On the other hand, if it were essentially a huge electronic analogue of the human brain that spontaneously learned how to behave, then if it ended up acting externally like a human I'd be inclined to think that what was going on inside was human-like too.

...would have to be epiphenomenal, and Titus's paper does a good job, I think, of refuting the possibility of epiphenomenalism.
[-] The following 2 users Like Laird's post:
  • Valmar, Typoz
(2019-07-09, 06:43 PM)Sciborg_S_Patel Wrote: But the proof itself is born from and rests on our intuitive sense of Reason?

Does it? I've never read the proof.

I think I probably am missing the point of the argument. I was only ever an applied mathematician, not a pure one. But evidently the argument has been severely criticised by those who do understand it.
(2019-07-09, 07:10 PM)Laird Wrote: I think the argument that Titus Rivas put forward in his paper which he linked to in the opening post of his Analytical argument against physicalism thread is more persuasive, because it seems to me that the consciousness of the AI you describe here...


...would have to be epiphenomenal, and Titus's paper does a good job, I think, of refuting the possibility of epiphenomenalism.

I always feel I should steer clear of these philosophical arguments, because I don't really understand them and I have an instinctive suspicion that they aren't capable of answering these questions anyway.

Are you essentially saying that any kind of consciousness arising solely out of physical mechanisms would have to be epiphenomenal, in the sense that it could be separated from the physical mechanisms and would have no influence on them? And therefore it's an impossibility?

It's the concept of separation that feels wrong to me. Information is being processed by physical mechanisms, and the processing of the information is influencing the physical mechanisms in turn. It feels to me as though consciousness should be indissolubly bound up with the processing of the information, and therefore that it can't be said that consciousness has no influence on the physical mechanisms. But I'm no philosopher.
