Initially I felt very uneasy about AI in general - basically because mechanisms that could behave like conscious beings seemed like the first chink in my belief that reality is (partially) non-materialist.
My feeling now is that maybe the truth is more interesting. We need to think very clearly about the differences in cognition (mentally apply quotes if you wish) between humans, other animals, and AI - at least AI that relies on neural nets (actual or simulated), which were explored precisely because they mimic the architecture of the brain.
ChatGPT was trained on a snapshot of the internet, so it is easy to say that it is not analogous to a biological brain. However, biological brains also require a period of learning, so that difference may not be so important.
I am starting to wonder whether the sort of mechanism that Stapp proposes - which might allow an external mind (the observer, in QM parlance) to act - could operate in both biological brains and the simulated neural nets of AI. I also wonder whether those who research these systems would even realise if this was happening, because the whole process seems pretty inexact.
David