Are Insects Conscious?


(2022-06-05, 01:57 AM)Laird Wrote: What do you make of the observation that very experienced human meditators are able to greatly reduce the amount that's "going on" in their minds, and yet, as a result, they become more conscious, not less?

Point taken. However, I think that there must be at least some truth, even if partial, to my surmise about the complexity of consciousness. Complexity of behavior, which varies with an animal's evolutionary level of development, can offer only indirect clues about complexity of consciousness, but it must be at least somehow related to degree of consciousness.

This just goes to emphasize that consciousness remains a deep mystery with absolutely no scientific understanding, whether the consciousness is human or whatever supposedly exists in lower organisms that can't directly communicate. It is fundamentally not a substance of some sort. It is subjective experience itself, some totally immaterial something that is not scientifically measurable. Intuitively, I think it must simply be a brute fact that lower animals, if they have any consciousness at all, have consciousness of different degrees of an entirely lower order. Quantifying that consciousness of a lower order may be impossible, because it would require scientific understanding, which is entirely lacking and probably simply not possible.
The following 2 users Like nbtruthman's post:
  • Larry, Sciborg_S_Patel
(2022-06-05, 12:32 AM)nbtruthman Wrote: I think it may not be that bad, given that there are a lot of accidents caused by egregious human errors that would have been prevented by a reasonably well designed but not perfect AI. AI systems don't get sleepy or drunk or drugged up, for instance. All the AI systems need to do is work significantly better than humans as drivers, not work with zero errors. 
Well they don't, but driving while sleepy, drunk, or drugged up is a criminal offence.

I suppose the real problem is that if these things started to be used on a big scale, the situation would be similar to what you have with Big Pharma. I'm glad that legal problems will probably ensure these things are never deployed en masse. But this isn't the point of philosophical interest. I mean, we are all aware of things we learned once and now do automatically, and a few things we never learned but do automatically. We know that in us, those two modes of behaving cooperate with each other seamlessly.

However evolution works, it can't really scrap a lot of stuff and rebuild from scratch, so I tend to think that in us the balance between those two modes of operation has shifted towards cognition, but I suspect the balance goes all the way down. The problem is that few people will perform experiments that show up the faults inherent in totally algorithmic systems.

One should also never underestimate the role of industrial hype in all this. For example I read of one instance in which a pizza firm had started to deliver their food using driverless cars - to see if this form of delivery was acceptable to customers.

Somewhere in the small print lay the truth. Every vehicle was equipped with a human driver concealed inside the vehicle. This was justified because the study was about customer reactions, not the feasibility of automatic delivery as such!
The following 1 user Likes David001's post:
  • Sciborg_S_Patel
(2022-06-05, 12:20 AM)nbtruthman Wrote: I think it's a matter of less or more "going on" in their heads. The complexity of the qualia plus the thoughts plus the emotions amounts to a sort of quantity of consciousness. For instance with humans, we know with virtual certainty that a baby with practically no abstract thought (and very little thinking of any sort) has less going on up there than a normal adult, but of course both have consciousness.

With ants and other insects we have much less assurance that anything at all is "going on" besides autonomic processes, other than a few behaviors that just might be clues (or might not).

But what is going on in their heads isn't consciousness, it's thought and feeling.  Are we not conflating consciousness with that which the creature is conscious of; in other words, are we not conflating subject with object?
The following 3 users Like Brian's post:
  • Ninshub, Sciborg_S_Patel, stephenw
(2022-06-05, 01:07 PM)Brian Wrote: But what is going on in their heads isn't consciousness, it's thought and feeling.  Are we not conflating consciousness with that which the creature is conscious of; in other words, are we not conflating subject with object?

Well if you are anaesthetised all those things vanish - consciousness, thought and feeling.

Honestly, I think it is a mistake to try to split consciousness up in this way.

Digital machines have voltage levels in them - nothing else. We kind of know this because these things are designed into them.
(2022-06-05, 04:18 PM)David001 Wrote: Well if you are anaesthetised all those things vanish - consciousness, thought and feeling.

Honestly, I think it is a mistake to try to split consciousness up in this way.

In practice yes; in conversation and concept no.  It's like saying we shouldn't split up cars and metal.  If we are discussing cars, we don't have to discuss metal.  Machines are not conscious and yet they process information, so in discussion the difference is important.
The following 2 users Like Brian's post:
  • Ninshub, stephenw
(2022-06-05, 12:32 AM)nbtruthman Wrote: I think it may not be that bad, given that there are a lot of accidents caused by egregious human errors that would have been prevented by a reasonably well designed but not perfect AI. AI systems don't get sleepy or drunk or drugged up, for instance. All the AI systems need to do is work significantly better than humans as drivers, not work with zero errors. 

In fact, from a practical standpoint, AI driving systems only need to work, on average, just as well (or as poorly) as human drivers in order to be a desirable option. At that level of performance, with no net difference in accident rates, the advantage of not having to pilot the automobile would presumably make them a saleable option. However, engineers and scientists are having great difficulty designing AI systems that are even that good. I think they will get there (imperfect but practical and saleable systems), but it will take many more years of development.

Plus of course, the legal tangle of litigation over who or what is responsible for an accident gets much worse.

I realize accident reduction is the ideal, but I believe it is quite far from achievable. As you point out, the legal-responsibility question is a big one, made worse by marketing hype that is distant from what the vehicle is actually capable of.

Machine "learning" is, at its basic core, attempting to classify a pattern mathematically. Think of "curve fitting" save here few people can tell you exactly why the program does what it does even after the fact whereas with basic curve fitting you at least can see why you are missing some points while hitting others.

This also doesn't get into the question of how exploitable these systems are - my suspicion is that they are quite brittle once someone figures out the weaknesses.
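
As a toy illustration of that brittleness (a hand-rolled sketch with invented numbers, not any real deployed system): even a simple linear classifier can be flipped by nudging every input feature a small, fixed amount in the direction that hurts its score most, which is the basic idea behind adversarial examples.

Code:
import numpy as np

# A hypothetical "trained" linear classifier: predict sign(w.x + b).
w = np.array([0.9, -1.3, 0.4])
b = 0.1

x = np.array([1.2, 0.3, -0.5])            # input the model gets right
print("before:", np.sign(w @ x + b))       # prints 1.0 (positive class)

# Fast-gradient-style attack: move each feature slightly in the
# direction that most decreases the classifier's score.
eps = 0.25
x_adv = x - eps * np.sign(w)

print("after: ", np.sign(w @ x_adv + b))   # prints -1.0 (class flipped)
print("largest feature change:", np.max(np.abs(x_adv - x)))  # equals eps
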
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


The following 2 users Like Sciborg_S_Patel's post:
  • stephenw, nbtruthman
(2022-06-05, 01:07 PM)Brian Wrote: But what is going on in their heads isn't consciousness, it's thought and feeling.  Are we not conflating consciousness with that which the creature is conscious of; in other words, are we not conflating subject with object?

Thoughts and feelings are inseparable immaterial attributes of human consciousness, but not in themselves the essence of consciousness, and how much of them is expressed (in what degree of complexity?) must vary greatly across lower animal forms. I don't think we are conflating consciousness itself with the objects of conscious attention - we are just trying to estimate the complexity, or potential complexity, or depth of the consciousness of the particular entity by estimating the complexity of the objects of thought and attention. As Laird pointed out, in some very advanced entities, the measure of their degree of consciousness may have to be, rather than the complexity of the objects of thought, the potential for, or maximum capacity of, depth and strength or intensity of this unitary consciousness.
The following 1 user Likes nbtruthman's post:
  • Ninshub
(2022-06-05, 09:39 PM)nbtruthman Wrote: As Laird pointed out, in some very advanced entities

I made no mention of advanced entities. I just said it [enhanced consciousness by minimising its thought-based contents] seemed to be the case for human meditators, and implied that this might apply to non-human conscious beings too.
The following 1 user Likes Laird's post:
  • Ninshub
(2022-06-05, 04:18 PM)David001 Wrote: Well if you are anaesthetised all those things vanish - consciousness, thought and feeling.

Honestly, I think it is a mistake to try to split consciousness up in this way.

Digital machines have voltage levels in them - nothing else. We kind of know this because these things are designed into them.

Vanished or blocked?  My take is that it is a state of information processing where both sensation and thought are interrupted.

Yes, it is insightful to partition the volts/amps from the patterns of semiconductors as a physical process. The activity, as energy, has no (or very little) meaning. It is a clear point of view, focusing on the fact that it's just batches of electrons in a designed maze.

However, from the perspective of informational realism, there is an equally active environment that is different from but comparable to the physical one. An environment describable by science, using logic and semantics. An environment full of action! The output of a computation can change the real world using the "leverage" of logic and meaningful words.

Science cannot see the Mystic.

But why not embrace what patterns science can document about mind? Tell me, folks: how is an insect "hive mind" not Psi, as the phenomenon of local telepathy?
The following 1 user Likes stephenw's post:
  • Brian
New article on bee consciousness: https://www.theguardian.com/environment/...scientists

This piece is derived primarily from research described by Chittka in his new book "The Mind of a Bee", released July 19, 2022 (https://press.princeton.edu/books/ebook/...d-of-a-bee). Some fascinating details of his work are emerging. He concludes that evidence is mounting that bees are clever, sentient and unique beings who can feel emotion and think abstractly. As outlined in the article, this research is of such a magnitude, and its details are so comprehensive and extensive, as to be fairly convincing that the signs of sentience are real and not illusory. Excerpts from the article:

Quote:Book: “Our work and that of other labs has shown that bees are really highly intelligent individuals. That they can count, recognise images of human faces and learn simple tool use and abstract concepts.”

He thinks bees have emotions, can plan and imagine things, and can recognise themselves as unique entities distinct from other bees. He draws these conclusions from experiments in his lab with female worker bees. “Whenever a bee gets something right, she gets a sugar reward. That’s how we train them, for example, to recognise human faces.”

It takes them only a dozen to two dozen training sessions to become “proficient face recognisers”, he said.

In the counting experiment, the bees were trained to fly past three identical landmarks to a food source. “After they had reliably flown there, we either increased the number of landmarks over the same distance or decreased it.” Their responses to this made it obvious that they were actually counting the number of landmarks.

Book: "The bees were capable of imagining how things will look or feel: for example, they could identify a sphere visually which previously they had only felt in the dark – and vice versa. And they could understand abstract concepts like “same” or “different”.

He began to realise some individual bees were more curious and confident than others. “You also find the odd ‘genius bee’ that does something better than all the other individuals of a colony, or indeed all the other bees we’ve tested..”

But when Chittka deliberately trained a “demonstrator bee” to carry out a task in a sub-optimal way, the “observer bee” would not simply ape the demonstrator and copy the action she had seen, but would spontaneously improve her technique to solve the task more efficiently “without any kind of trial and error”.

Quote:This reveals not only that a bee has “intentionality” or an awareness of what the desirable outcome of her actions is, but that there is “a form of thought” inside the bee’s head. “It’s an internal modelling of ‘how will I get to the desired outcome?’, rather than just trying it out.”

Quote:Feelings, emotions: In one experiment, bees suffered a simulated crab spider attack when they landed on a flower. Afterwards, “their whole demeanour changed. They became, overall, very hesitant to land on flowers, and inspected every one extensively before deciding to land on it.”

Bees continued to exhibit this anxious behaviour days after they had been attacked and sometimes even behaved “as if they were seeing ghosts. That is, they inspected a flower, and rejected it even if they saw there was no spider present.”

They behaved as if they had a sort of post-traumatic stress disorder. “They seemed more nervous, and showed these bizarre psychological effects of rejecting perfectly good flowers with no predation threat on them. After inspecting the flowers, they’d fly away. This indicated to us a negative emotion-like state.”

All these findings obviously have disquieting implications regarding our treatment of bees and other insects: for instance, the casual raising of bees as forced pollinators and furnishers of honey, and the casual disposal of hives and swarms when they become underproductive or hazardous to humans. Just the tip of an iceberg. We have a lot to answer for.
The following 2 users Like nbtruthman's post:
  • Laird, Ninshub
