Are Insects Conscious?


But intelligence and consciousness are not the same things?

Also the question of what "information" is rears its head. Are the ants receiving signals like a computer, or experiencing the scent of these pheromones? I assume the claim is not that ants are literally running programs, but that their behavior can be modeled by programs?

I do agree that ants are probably not reasoning at the level of humans, but I also don't think humans represent a discontinuity in Nature where Reason emerges. I accept it might be possible, but looking at paranormal events involving animals and at the increasing research on animal - even insect - minds, it seems to me the capacity for reasoning gradually awakens as one moves toward the human scale. What might be lost in this movement I'm not sure...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 4 users Like Sciborg_S_Patel's post:
  • David001, Valmar, Laird, stephenw
(2022-05-31, 06:43 PM)Sciborg_S_Patel Wrote: But intelligence and consciousness are not the same things?

Also the question of what "information" is rears its head. Are the ants receiving signals like a computer, or experiencing the scent of these pheromones? I assume the claim is not that ants are literally running programs, but that their behavior can be modeled by programs?

I do agree that ants are probably not reasoning at the level of humans, but I also don't think humans represent a discontinuity in Nature where Reason emerges. I accept it might be possible, but looking at paranormal events involving animals and at the increasing research on animal - even insect - minds, it seems to me the capacity for reasoning gradually awakens as one moves toward the human scale. What might be lost in this movement I'm not sure...
The goal in all this is to have science map the natural patterns of mind. Intelligence (as the ability to reason through problems) is a known variable and scales against outcomes - usually prespecified outcomes. In the case of insect minds, I think you can have reasoning that is not conscious. Consciousness is not a commonly accepted variable (Tononi et al. may be an exception), and for me it implies reflexive self-awareness. Instinct is not that, but it does reveal focused information processing. Ant information processing, when reviewed against problem-solving behavior, has measurable intelligence in its output.

Ants solve problems, and army ants do so in mechanical ways that are just crazy. They are not engineering stacked bodies - they are sensing and exploring their way through the environment with a focus on problem solving. Their information processing is changing future probabilities that organize outcomes. All this stuff is assignable to computable variables, and ant information processing can be modeled as a simulation.
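The claim that ant information processing "can be modeled as a simulation" is easy to illustrate. Below is a minimal, hedged sketch of the classic "double bridge" setup: ants choose between a short and a long branch by following pheromone, and reinforcement plus evaporation does the rest. The function name, branch lengths, evaporation rate, and update rule are all my own illustrative assumptions, not taken from the article or any particular study.

```python
# Toy model of the "double bridge" experiment. No ant plans anything; each
# step just reinforces whichever branch gets chosen, with shorter branches
# earning more pheromone per step (faster round trips). The colony-level
# outcome - converging on the short branch - emerges from the feedback loop.

def double_bridge(steps=500, short_len=1.0, long_len=2.0, evaporation=0.02):
    pher = {"short": 1.0, "long": 1.0}           # initial pheromone levels
    for _ in range(steps):
        total = pher["short"] + pher["long"]
        p_short = pher["short"] / total           # fraction choosing short
        # Deposit is inversely proportional to branch length; evaporation
        # keeps the system adaptive instead of frozen.
        pher["short"] = (pher["short"] + p_short / short_len) * (1 - evaporation)
        pher["long"] = (pher["long"] + (1 - p_short) / long_len) * (1 - evaporation)
    return pher["short"] / (pher["short"] + pher["long"])

print(double_bridge())  # pheromone share on the short branch approaches 1.0
```

Note where the "intelligence" lives here: not in any individual rule, but in the reinforcement loop that shifts future probabilities - which is exactly the point about information processing organizing outcomes.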

Does this get us spiritual enlightenment?  No.  But hard data speak to science: there are patterns for tracing mental evolution.

I have said before the key metric is "understanding".  I think Tononi's Integrated Information speaks to outcomes where "an understanding" is influencing the environment.  Do ants understand, with personal or even societal depth?  No.  Each ant is very limited, but, like many logic circuits in an organized pattern, the processing is done effectively.

I would argue that ants have a certain functional understanding of their environment.  Not each ant, but as a collection.
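The idea of a "functional understanding... not each ant, but as a collection" can be given a toy quantitative form: treat each ant as a noisy, limited assessor and ask how reliable the aggregate becomes. This is a Condorcet-style aggregation sketch; the 60% individual accuracy and the colony size are made-up illustrative numbers, not measurements.

```python
# Each ant assesses "is this nest site the better one?" correctly with some
# modest probability. A strict-majority aggregate of many independent
# assessments is far more reliable than any single member - collective
# competence from limited parts, much like many simple logic circuits.
from math import comb

def colony_accuracy(n_ants, p_individual):
    """Probability that a strict majority of n_ants independent assessments
    identifies the better of two options."""
    return sum(
        comb(n_ants, k) * p_individual**k * (1 - p_individual) ** (n_ants - k)
        for k in range(n_ants // 2 + 1, n_ants + 1)
    )

print(colony_accuracy(1, 0.6))    # a lone ant is right 60% of the time
print(colony_accuracy(101, 0.6))  # the colony is right roughly 98% of the time
```

The design point: nothing about aggregation requires any member (or the aggregate) to be self-aware, which keeps "measurable understanding" separate from consciousness as argued above.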
[-] The following 2 users Like stephenw's post:
  • nbtruthman, Sciborg_S_Patel
(2022-05-31, 06:43 PM)Sciborg_S_Patel Wrote: But intelligence and consciousness are not the same things?

Also the question of what "information" is rears its head. Are the ants receiving signals like a computer, or experiencing the scent of these pheromones? I assume the claim is not that ants are literally running programs, but that their behavior can be modeled by programs?

I do agree that ants are probably not reasoning at the level of humans, but I also don't think humans represent a discontinuity in Nature where Reason emerges. I accept it might be possible, but looking at paranormal events involving animals and at the increasing research on animal - even insect - minds, it seems to me the capacity for reasoning gradually awakens as one moves toward the human scale. What might be lost in this movement I'm not sure...

Part of the problem is the ambiguous and often incorrect use of the term Intelligence, as in "artificial intelligence" or AI. The dictionary definition of intelligence is mostly inseparable from consciousness, as in the following:

"Capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.
manifestation of a high mental capacity:

He writes with intelligence and wit.

The faculty of understanding."

But unfortunately, the correct meaning of "intelligence" is stretched beyond recognition when computer scientists and other experts use it to refer to the algorithmic data processing that is the sole accomplishment of computers and AI systems (which we know are devoid of consciousness). They have absolutely no understanding, grasping of truths, wit, etc., which are the sole territory of consciousness.

I think the main point of the article is that for all practical purposes ants demonstrate complex behaviors that can be entirely explained by computer-like algorithmic stimulus/response networks (which in themselves are not conscious). The implication is that since there is no other behavior that might offer clues that consciousness is also operating, there is therefore no consciousness in the ants. 

As opposed to human beings, where certainly much behavior can be explained by algorithmic processes, but there is also very much other behavior that absolutely can't be so explained and is attributed to consciousness. Aside of course from the self-evident personal observation that "I think therefore I am".
(This post was last modified: 2022-05-31, 09:21 PM by nbtruthman. Edited 2 times in total.)
[-] The following 3 users Like nbtruthman's post:
  • David001, Sciborg_S_Patel, Silence
(2022-05-31, 09:14 PM)nbtruthman Wrote: Part of the problem is the ambiguous and often incorrect use of the term Intelligence, as in "artificial intelligence" or AI. The dictionary definition of intelligence is mostly inseparable from consciousness, as in the following:

"Capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.
manifestation of a high mental capacity:

He writes with intelligence and wit.

The faculty of understanding."

But unfortunately, the correct meaning of "intelligence" is stretched beyond recognition when computer scientists and other experts use it to refer to the algorithmic data processing that is the sole accomplishment of computers and AI systems (which we know are devoid of consciousness). They have absolutely no understanding, grasping of truths, wit, etc., which are the sole territory of consciousness.

I think the main point of the article is that for all practical purposes ants demonstrate complex behaviors that can be entirely explained by computer-like algorithmic stimulus/response networks (which in themselves are not conscious). The implication is that since there is no other behavior that might offer clues that consciousness is also operating, there is therefore no consciousness in the ants. 

As opposed to human beings, where certainly much behavior can be explained by algorithmic processes, but there is also very much other behavior that absolutely can't be so explained and is attributed to consciousness. Aside of course from the self-evident personal observation that "I think therefore I am".

I would say the distinction between the behaviour of small creatures - such as ants, wasps, etc. - and more complex creatures is very, very blurred. For example, Sheldrake gives the example of some wasps that make curiously shaped nests. Experiments have been done where these nests were damaged in particular ways while the wasp was away foraging. Sometimes the wasps repaired the damage; in other cases they built on top of the damage.

The point is that in practice it may be very difficult to create an AI that can cope with even quite simple examples of ad hoc repairs. Where are the robot farmers that do the whole job on a traditional farm?

I think AI driverless cars are a wonderful counterexample to AI hype (I know I harp on about this a bit too much, but I think it is important). These cars were supposed to be just about ready to start populating our streets some years back, but where are they? It is remarkable that the first decade of AI hype, in the 1980s, ended in a whimper. Indeed, most people don't even remember that there was one!

Maybe the crucial thing is that an intelligent anything has to be able to cope with an open-ended set of problem variations. If you jump into your hypothetical driverless car, you expect it to deal with road works, and cyclists, and children, and supermarket trolleys dumped in the road... If it can't, it isn't a driverless car and it isn't really useful. I suspect that a lot of (all?) biological functionality incorporates some real intelligence to cope with contingencies of various sorts.

Maybe the difference between different types of intelligence would diminish if we demanded that it did the whole task.
(This post was last modified: 2022-06-01, 05:26 PM by David001. Edited 2 times in total.)
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
(2022-06-01, 11:05 AM)David001 Wrote: Maybe the crucial thing is that an intelligent anything has to be able to cope with an open-ended set of problem variations. If you jump into your hypothetical driverless car, you expect it to deal with road works, and cyclists, and children, and supermarket trolleys dumped in the road... If it can't, it isn't a driverless car and it isn't really useful. I suspect that a lot of (all?) biological functionality incorporates some real intelligence to cope with contingencies of various sorts.

Maybe the difference between different types of intelligence would diminish if we demanded that it did the whole task.
David, you make it sound like there are no driverless vehicles on the road, despite the many billions spent on their future.  What is not a "whole task" about the situation?

I do agree that there are many types of intelligence - such as emotional intelligence, the physical instincts of athletes, language learning, etc.

But, as I said before, all are scalable to standards, and quants calculate the capabilities.  Calculating capabilities in manufacturing and engineering is a high-tech mission.  The ability to measure capability has evolved in science.  Driverless cars will have failures, and cyclists will get hit.  (As someone who rides - and has been hit - it is important to me.)  The marketing pitch is: driverless cars will hit fewer cyclists than human drivers do now.

I may not agree with that as a good answer, but I do stay on the bike paths and out of traffic as an older guy and do not look forward to driverless semi-trucks.
(2022-06-01, 05:23 PM)stephenw Wrote: David, you make it sound like there are no driverless vehicles on the road, despite the many billions spent on their future.  What is not a "whole task" about the situation?

Billions have certainly been spent, but how many do you see? I'm not even sure of their legal status here in Britain. Remember that the industry invented a cheat in the form of cars that would drive themselves but periodically ask the human driver to take over!
The whole task is for such a car to drive safely on roads which aren't perfect, and which may be under repair. It would also have to cope with a variety of careless humans - as I commented.
Quote:I do agree that there are many types of intelligence - such as emotional intelligence, the physical instincts of athletes, language learning, etc.

But, as I said before, all are scalable to standards, and quants calculate the capabilities.  Calculating capabilities in manufacturing and engineering is a high-tech mission.
I suspect these may be the easier problems to solve because the behaviour of erratic human beings is excluded. Most programs in that environment are probably conventional programs written in the traditional way.
Quote:The ability to measure capability has evolved in science.  Driverless cars will have failures, and cyclists will get hit.  (As someone who rides - and has been hit - it is important to me.)  The marketing pitch is: driverless cars will hit fewer cyclists than human drivers do now.
There is an awful video from the front window of a driverless car. The car doesn't seem to 'see' a woman pushing her bike across the road and just ploughs in. You might claim that is just a glitch to iron out, of course!
Quote:I may not agree with that as a good answer, but I do stay on the bike paths and out of traffic as an older guy and do not look forward to driverless semi-trucks.

There was a period when they were seriously talking of driverless trucks.

I am also an older guy (72) and I ride a bike as well as drive a car, but I only really use the bike on tracks with no vehicles. I will use the odd connector road, cycling on the pavement (sidewalk) as far as possible (this is illegal in Britain in most situations, but widely tolerated!)

As I drive I am aware of a lot of things which would be hard to handle automatically. For example, if children are behaving carelessly, they may present a greater collision hazard. Horses can get spooked, etc. The real point is that there is an open-ended list of possible hazards that one should reasonably be able to avoid.
(This post was last modified: 2022-06-01, 05:49 PM by David001. Edited 1 time in total.)
[-] The following 1 user Likes David001's post:
  • stephenw
(2022-05-31, 09:14 PM)nbtruthman Wrote: Part of the problem is the ambiguous and often incorrect use of the term Intelligence, as in "artificial intelligence" or AI. The dictionary definition of intelligence is mostly inseparable from consciousness, as in the following:

"Capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.
manifestation of a high mental capacity:

He writes with intelligence and wit.

The faculty of understanding."

But unfortunately, the correct meaning of "intelligence" is stretched beyond recognition when computer scientists and other experts use it to refer to the algorithmic data processing that is the sole accomplishment of computers and AI systems (which we know are devoid of consciousness). They have absolutely no understanding, grasping of truths, wit, etc., which are the sole territory of consciousness.
In a spiritual forum, what you say makes perfect sense.  But if the orientation is to have science as a background for parsing Psi, then I have to object, hopefully in a kind manner.

As always, I like to be able to see the data.  Learning, as machine learning, has all the data you could want to see.  Whether the aptitude is language translation, math proofs, material-science properties or physics equations - it is not literary - it is cold, objective testing and results.  I like me some thinkers in ivory towers and see Princeton as a special place -- but the truths of AI are not archetypal or ontological Truths.  They are facts about outcomes, not perspectives on humanity.

AI is functional.  It uses truth tables for refinement.  The concepts work in reality.  Their capability to work effectively and achieve goals is measurable.  Machine learning may be running the lives of half the world right now!  So in terms of science, deep meanings of consciousness with no data will not speak to that world.

That is why I keep on plugging measuring the natural effects of mind.  The number one aspect of mind to measure is understanding.  Just as you have said, understanding is a process event that may indicate levels of consciousness.  (I am assuming self-aware consciousness as your meaning.)

I am all about what "understanding" is and how mind gets some and uses it.  I don't think that self-awareness is needed to apply understandings to biological life.  Instinct displays understanding while not being conscious.  P.S. I think Integrated Information measures understanding more than it does consciousness.  That is what understanding is: integration is a personal meaning generator interacting with meanings in the environment.  It's all open to science as data.
(This post was last modified: 2022-06-01, 06:00 PM by stephenw. Edited 2 times in total.)
(2022-06-01, 05:47 PM)stephenw Wrote: In a spiritual forum, what you say makes perfect sense.  But if the orientation is to have science as a background for parsing Psi, then I have to object, hopefully in a kind manner.

As always, I like to be able to see the data.  Learning, as machine learning, has all the data you could want to see.  Whether the aptitude is language translation, math proofs, material-science properties or physics equations - it is not literary - it is cold, objective testing and results.  I like me some thinkers in ivory towers and see Princeton as a special place -- but the truths of AI are not archetypal or ontological Truths.  They are facts about outcomes, not perspectives on humanity.

AI is functional.  It uses truth tables for refinement.  The concepts work in reality.  Their capability to work effectively and achieve goals is measurable.  Machine learning may be running the lives of half the world right now!  So in terms of science, deep meanings of consciousness with no data will not speak to that world.

That is why I keep on plugging measuring the natural effects of mind.  The number one aspect of mind to measure is understanding.  Just as you have said, understanding is a process event that may indicate levels of consciousness.  (I am assuming self-aware consciousness as your meaning.)

I am all about what "understanding" is and how mind gets some and uses it.  I don't think that self-awareness is needed to apply understandings to biological life.  Instinct displays understanding while not being conscious.  P.S. I think Integrated Information measures understanding more than it does consciousness.  That is what understanding is: integration is a personal meaning generator interacting with meanings in the environment.  It's all open to science as data.

The conceptual problem here is that the essence of (at least human) subjective consciousness appears, after many years of fruitless "scientific" work by scientists in various disciplines, to be spiritual and not reducible to things that science can observe and analyze. We're talking about the possibility of consciousness in insects in this thread, not the operation of algorithmic data processing structures. 

Consciousness is supremely important to human beings, since it is the core of the human self. Many decades of work by scientists in various specialties have so far failed (and appear destined to keep failing for the foreseeable future) to come to any "scientific" understanding of consciousness - it is ineffable, and is NOT open to science, as proven by this sorry record of failure. Science fundamentally can't "parse" consciousness, and probably not psi either.

Of course AI is functional - it works in a lot of areas, to varying degrees. It just isn't self-aware, subjective, experiential consciousness. Even after all the work over many decades, it remains separated from consciousness by the same unbridgeable chasm. This applies to all the attributes of AI, such as machine "learning" and the storage of immense piles of data for access and processing by algorithms. Not consciousness; not real "learning" in the sense of a conscious entity willing the absorption of information and thinking about it; not the subjective experience of finding real meaning in something and then contemplating it.

The fundamental Hard Problem of subjective experiential consciousness appears to be beyond the scientific paradigm, whether the scientists working on the problem are willing to admit it or not. AI is just not conscious, and it is therefore only relevant to the discussion of insect consciousness in that there appears to be no such thing (at least as far as ants are concerned), since ant behavior appears to be the result of computer-like algorithms, which are not conscious.
(This post was last modified: 2022-06-01, 08:28 PM by nbtruthman. Edited 10 times in total.)
[-] The following 2 users Like nbtruthman's post:
  • Valmar, Silence
Re the question in this thread's title, I think the answer is "Quite obviously, yes."

As far as the idea that algorithmic-like behaviour in ants disproves that they are conscious goes, my sense is: not very far at all. Humans display algorithmic-like behaviour in many circumstances too: just think about driving on bitumen roads. When approaching a red light, human drivers invariably slow to a stop just before the light. When approaching a round-about, human drivers invariably give way to the right (or perhaps the left, depending on which country they're in). Etc etc.

All of this, although algorithmic, is nevertheless performed consciously.
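Laird's point can be made concrete: the driving rules themselves fit in a plain lookup table, and nothing about that table tells you whether whatever executes it is conscious. A hypothetical sketch (the rule names and fallback behaviour are my own illustration, not anything from the thread's article):

```python
# Stimulus -> response table for the driving rules mentioned above. Behaviour
# this regular looks "algorithmic", yet humans perform it consciously: the
# table underdetermines whether its executor has any experience at all.
RULES = {
    "approaching_red_light": "slow to a stop before the light",
    "approaching_roundabout": "give way to the right",
    "clear_road": "proceed at the speed limit",
}

def respond(stimulus):
    # Anything outside the table is where pure rule-following runs out.
    return RULES.get(stimulus, "no rule: fall back on judgement")

print(respond("approaching_red_light"))  # slow to a stop before the light
print(respond("child_chasing_ball"))     # no rule: fall back on judgement
```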
[-] The following 2 users Like Laird's post:
  • Valmar, David001
(2022-06-01, 11:53 PM)Laird Wrote: Re the question in this thread's title, I think the answer is "Quite obviously, yes."

As far as the idea that algorithmic-like behaviour in ants disproves that they are conscious goes, my sense is: not very far at all. Humans display algorithmic-like behaviour in many circumstances too: just think about driving on bitumen roads. When approaching a red light, human drivers invariably slow to a stop just before the light. When approaching a round-about, human drivers invariably give way to the right (or perhaps the left, depending on which country they're in). Etc etc.

All of this, although algorithmic, is nevertheless performed consciously.

Yes indeed, and most of us who have driven long enough have encountered a red light stuck on red. I remember once stopping at such a light at about midnight, and after waiting for some time I was just about to cautiously go past it when a police car pulled up behind me. As a result I had to wait a bit longer until the policeman emerged to tell me to do just that!

My point is that purely algorithmic methods applied in real life easily get stuck.
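The stuck-light story doubles as a minimal example of how a pure algorithm deadlocks. A hedged sketch (the timeout value and messages are invented for illustration): "wait until green" never terminates on a faulty light unless something outside the rule intervenes.

```python
# "Wait until green" is a perfectly good algorithm -- until the light is
# stuck. Without an escape hatch the rule never terminates; the timeout
# below stands in for the judgement call (or the police officer) above.
def wait_for_green(observed_states, max_wait=10):
    for ticks, state in enumerate(observed_states):
        if state == "green":
            return f"proceeded after {ticks} ticks"
        if ticks >= max_wait:
            return "rule exhausted: escalate beyond the algorithm"
    return "still waiting"

print(wait_for_green(["red"] * 100))            # stuck light: rule exhausted
print(wait_for_green(["red", "red", "green"]))  # proceeded after 2 ticks
```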
[-] The following 1 user Likes David001's post:
  • Valmar
