A review article has appeared in Science about how understanding human consciousness can lead to conscious AI systems. The article is behind a paywall, but has been popularized at Live Science (www.livescience.com/60789-how-to-make-a-conscious-robot.html).
Quote: "Researchers have identified three levels of human consciousness, which may help scientists one day create conscious artificial intelligence, a study published in Science suggests. 'Once we can spell out in computational terms what the differences may be in humans between conscious and unconsciousness, coding that into computers may not be that hard,' said Hakwan Lau, a study co-author."
"To address the controversial question of whether computers may ever develop consciousness, the researchers first sought to explore how consciousness arises in the human brain. In doing so, they outlined three key levels of consciousness.
The first is level C0. This level of consciousness refers to the unconscious operations that take place in the human brain, such as face and speech recognition, according to the review. Most of the calculations done by the human brain take place at this level, the researchers said — in other words, people aren't aware of these calculations taking place.
The next level of consciousness, C1, involves the ability to make decisions after drawing upon a vast repertoire of thoughts and considering multiple possibilities.
The final level, C2, involves "metacognition," or the ability to monitor one's own thoughts and computations — in other words, the ability to be self-aware."
Comment:
Unless the science writer has grossly mischaracterized the article, it is remarkably inadequate. What an impoverished analysis of human consciousness. The first two levels are limited to mere cognition, or computation, and the third level isn't really required for simple consciousness, which exists in a great many animals.
They conveniently leave out the essence of consciousness, which is awareness: the inner sense of being and experience, starting with something as simple as the experience of the color red. This is knowing what something is like, that is, qualia. These researchers apparently dismiss David Chalmers' "hard problem" as invalid.
But I suspect that in the back of their minds they realize the problem is real; they just don't understand it (nobody does), and they know computers can't implement it through any amount of computing. All that AI can do is compute. So, being good reductive materialists who know which side their bread is buttered on, the researchers decide that their three levels had better be the essence of human consciousness. They seem to be self-servingly in outright denial of the reality of the problem. After all, the hope for superintelligent, even conscious, AI is where the funding comes from. So they limit their definition of consciousness to what the technology makes possible.
So much for the cutting edge of neuroscience.