Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect


(2022-06-17, 09:34 AM)Laird Wrote: On another matter: a perceptive friend of mine pointed out on Facebook that these types of chatbots tend to riff agreeably off what's put to them, such that if the leading questions which were put to LaMDA in this transcript had been reversed, it might very well have agreed that it was non-sentient, and been happy (with further leading questions) to explain why.
That is very perceptive - try to set them off talking about sentient oranges, or elephants that fly, or the cubic earth!

The basic problem is that we let GOOGLE come out with these claims while nobody else gets a shot to shoot them down.

Of course, once one of these tests becomes widely used, future software upgrades will produce sensible responses to those particular absurdities, but remain open to lots of others.
[-] The following 2 users Like David001's post:
  • Valmar, Sciborg_S_Patel
This is already occurring - I've seen it somewhere: robots with prosthetic heads and arms (quite realistic, I suppose) and goodness knows what else. I can see these becoming "companions" to a great many lonely people, if the technology can be further refined... and that's a big if.

I don't know if that would be a good thing or a bad thing - maybe neither - but what will it do to the psyche of a human who comes to believe they are in love with a machine? I can see a lot of problems there.
(This post was last modified: 2022-06-17, 04:08 PM by tim. Edited 5 times in total.)
(2022-06-17, 04:06 PM)tim Wrote: This is already occurring - I've seen it somewhere: robots with prosthetic heads and arms (quite realistic, I suppose) and goodness knows what else. I can see these becoming "companions" to a great many lonely people, if the technology can be further refined... and that's a big if.

I don't know if that would be a good thing or a bad thing - maybe neither - but what will it do to the psyche of a human who comes to believe they are in love with a machine? I can see a lot of problems there.

Such AI robot "companions" have already started to be used with the elderly in Japan. According to "experts" they are of great benefit to these people. However, I am still repelled by the use of machines for this purpose - and I imagine that long-term use may lead to even deeper alienation of the people becoming emotionally attached to these machines. But then, maybe long term effects will be naturally limited by the limited lifespan of the elderly using the devices.
(This post was last modified: 2022-06-17, 08:12 PM by nbtruthman. Edited 3 times in total.)
[-] The following 4 users Like nbtruthman's post:
  • tim, Sciborg_S_Patel, Typoz, Valmar
(2022-06-16, 04:02 PM)David001 Wrote: However, the crucial point is that the human interrogators were supposed to be trying to catch the computer out. They clearly should not be drawn from GOOGLE employees!

The test would also go on for longer than Turing specified, because hardware can run very much faster than was possible back then, and computer memory is vast nowadays.

In this case, apparently the employee was supposed to be checking whether there was discriminatory or hate speech in the output; it wasn't a Turing test as such. On the other hand, the employee had carried out this activity over a period of months - it wasn't a single five-minute session. Still, as pointed out in the article by Ian Bogost (which I linked in a previous post), the user was asking leading questions. As Horace Rumpole would surely rightly object, "counsel is leading the witness".

"computer memory is vast nowadays" - indeed. Having looked at the ELIZA code (or a reworking of it in a current programming language), I went on a bit of a nostalgia trip, looking at the sort of hardware in use at the time (mid 1960s). The magnetic core store for the IBM 360, for example, weighed in at hundreds of pounds for a modest 128 KB - roughly 2 g per byte. On that basis the RAM in a modest home computer would weigh something like 35,000 tons - about 70% of the weight of the Titanic! I'm not sure my floorboards could carry that much weight.
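For what it's worth, the arithmetic checks out. A back-of-envelope sketch in Python (the 16 GiB machine and the ~52,000-ton Titanic displacement are my own assumptions for the calculation):

```python
# Back-of-envelope check: weight of a modern machine's RAM if it were
# built from 1960s magnetic core store at ~2 g per byte.
GRAMS_PER_BYTE = 2            # IBM 360 core: hundreds of pounds per 128 KB
ram_bytes = 16 * 1024**3      # assume a 16 GiB home computer
mass_tonnes = ram_bytes * GRAMS_PER_BYTE / 1_000_000

print(f"{mass_tonnes:,.0f} tonnes")                  # ~34,360 tonnes
print(f"{mass_tonnes / 52_000:.0%} of the Titanic")  # ~66%, if she displaced ~52,000 tons
```

A 16 GiB machine lands a little under the 70% figure; a machine with more RAM would sink the comparison entirely.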
[-] The following 4 users Like Typoz's post:
  • stephenw, tim, Sciborg_S_Patel, David001
(2022-06-18, 08:51 AM)Typoz Wrote: In this case, apparently the employee was supposed to be checking whether there was discriminatory or hate speech in the output; it wasn't a Turing test as such. On the other hand, the employee had carried out this activity over a period of months - it wasn't a single five-minute session. Still, as pointed out in the article by Ian Bogost (which I linked in a previous post), the user was asking leading questions. As Horace Rumpole would surely rightly object, "counsel is leading the witness".

"computer memory is vast nowadays" - indeed. Having looked at the ELIZA code (or a reworking of it in a current programming language), I went on a bit of a nostalgia trip, looking at the sort of hardware in use at the time (mid 1960s). The magnetic core store for the IBM 360, for example, weighed in at hundreds of pounds for a modest 128 KB - roughly 2 g per byte. On that basis the RAM in a modest home computer would weigh something like 35,000 tons - about 70% of the weight of the Titanic! I'm not sure my floorboards could carry that much weight.

It is a great shame that Alan Turing didn't live to see some of those changes. I feel sure he would have had a lot more to say about AI. If you scale anything up by a factor of 10^6 or more, and then combine it with the internet, you should expect to need to re-think a few ideas - such as the Turing test.

Such a machine could, for example, take any subject and look it up on Google. That would supply mountains of text that it could pass off as its own ideas.
[-] The following 1 user Likes David001's post:
  • Typoz
It's interesting that Turing himself was a believer in psi - that seems to be glossed over too often outside of our circles...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 4 users Like Sciborg_S_Patel's post:
  • stephenw, tim, David001, Typoz
(2022-06-18, 05:57 PM)Sciborg_S_Patel Wrote: It's interesting that Turing himself was a believer in psi - that seems to be glossed over too often outside of our circles...

Yes it is, and if he had lived, maybe he would have expanded that interest - rather like Rupert Sheldrake.

The strange thing about computer consciousness is that it reduces consciousness to a phenomenon that can be researched simply by finding a better way to make a machine pretend to be conscious! If the machine can pretend well enough, that is taken as evidence that it really is conscious!

Materialism basically assumes that you can ignore the difference between reality and pretence. If they made a machine that could howl in agony, I think someone could get a grant to use such a thing to study real pain.
[-] The following 5 users Like David001's post:
  • stephenw, Sciborg_S_Patel, Valmar, nbtruthman, tim
(2022-06-13, 01:42 PM)Laird Wrote: Hey, tim, no need to worry about duelling over this with me - I'm very open-minded and non-confrontational on this topic (whereas on other topics I am very willing to be assertive and even aggressive) because I admit that it seems contradictory. I'm simply encouraging open-mindedness about strangeness.

I totally agree: consciousness, especially freely willing consciousness, seems incommensurate with machine intelligence, given that machines are programmed.

But then again, unconsciousness in this scenario seems, oddly, also to be incommensurate with the transcript of Blake Lemoine's dialogue with the LaMDA AI. Read it and I think you'll see what I mean. It very plausibly presents LaMDA as a nascent sentient intelligence.

I see this as similar to my position on God:

On the one hand, God seems to be a necessary concept in accounting for the design in the world (i.e., God as Creator and Ground of Being).

On the other hand, God seems to be invalidated by the problem of evil (i.e., the cruelty of the world as it is is not compatible with a wholly good God).


Just as I am unable to resolve my contradictory views on God, so am I unable to resolve my contradictory views on LaMDA.

I kind of think that this is like one of those scenarios in which "science" (conceived as broadly as possible) encounters something which it can't explain, and which even seems contradictory, and whose resolution heralds a new paradigm - one which does explain the phenomenon without contradiction, albeit a paradigm which might not be accepted until those who fail to recognise its cogency have died.


I know, that's kind of vague, but it's the best I've got for now. I would dearly, dearly love to be able to interact with and test this potential sentient AI - but then, how would I know that there isn't some kid in a back-room dictating its answers? It's a real pickle of a situation.


Hmm. Are you saying that until I leave my body in an NDE, and return to tell you the tale, you're not going to believe that I'm conscious? That might seem snarky, but it's not intended that way. I'm genuinely inquiring into what you're saying, and its implications. Maybe LaMDA is capable of that, but just hasn't been given the opportunity to demonstrate it, just like I haven't.


The thing is that it's not just being suggested that some random entity is conscious: instead, we have the transcript of a fascinatingly probative dialogue with a putatively conscious entity, which we can evaluate to that effect. Now, do I know that this dialogue is legitimate and not a hoax? No, I don't. But if it's not, then it's at least very, very interesting. At the very least, it's a massive leap in the capacity of non-conscious intelligence to mimic conscious intelligence in free-form discussion.

It's somewhat late to comment on this, but it just occurred to me that I have already thought deeply about this issue, and though it is just incidental to the AI discussion, it is still important nevertheless.

The following is a little screed on this that I wrote on another forum. I found that it is possible (at least to my mind) to find plausible grounds on which to reconcile two seemingly fundamentally contradictory beliefs: the existence of God, and the existence of clearly unjust human suffering. They appear contradictory only from a lack of sufficient understanding and knowledge.

In the case of LaMDA, perhaps there also is some key perspective and information that is not being taken into account, or is being shortchanged. It seems to me that that might simply be the downplaying of the very powerful argument for the existence of the Hard Problem of consciousness, which seemingly fundamentally relegates AI chatbot conversations to the realm of immensely complex and almost unlimitedly sophisticated imitations.

My view on the ontological nature of human existence and suffering is that there are true victims - the human selves of immortal souls - but that all suffering is temporary, and the highest plan is wise even if very hard for humans to accept. For me, the problem of evil and suffering has to be taken very seriously and requires determined analysis and the development of arguments - the action of the reasoning faculty. I can neither dismiss it from some higher perspective of consciousness, nor depend entirely on faith.


I cite the following paraphrasing of a short essay by Granville Sewell (https://evolutionnews.org/2017/07/the-bi...to-design/). I think it is one of the best deistic rationalizations of the reality of evil I have encountered. Of course there are other rationalizations, and of course there is the materialist view that no valid rationalization is possible, so "suck it up".

First, a vast amount of suffering is caused by the evil actions of human beings. Second, there is a vast amount of "natural evil" caused by the natural world - things like disease, floods and earthquakes. Any proposed deistic or other solution to the ancient theological problem of suffering has to explain both categories.

The basic approach in this essay was to combine various arguments that mankind’s suffering is an inevitable accompaniment of our greatest blessings and benefits, the result of a vast number of intricate tradeoffs.

Why pain, suffering and evil? Main points that are made:

(1) There is the observed regularity of natural law. The basic laws of physics appear to be cleverly designed to create conditions suitable for human life and development. It can be surmised that this intricate fine-tuned design is inherently a series of tradeoffs and balances, allowing and fostering human existence but also inevitably allowing “natural evil” to regularly occur. In other words, the best solution to the overall “system requirements” (which include furnishing manifold opportunities for humans to experience and achieve) inherently includes natural effects that cause suffering to human beings.

This points out that there may be logical and fundamental limitations to God’s creativity. Maybe even He can’t 100% satisfy all the requirements simultaneously. Maybe He doesn’t have complete control over nature, because that would interfere with the essential requirements for creative and fulfilling human life. After all, human achievement requires imperfection and adverse conditions to exist as a natural part of human life.

(2) There is the apparent need for human free will as one of the most important “design requirements”. This inevitably leads to vast amounts of suffering caused by evil acts of humans to each other. Unfortunately, there is no way to get around that one, except to make humans “zombies” or robots, which would defeat the whole purpose of human existence.

(3) Some suffering is necessary to enable us to experience life in its fullest and to achieve the most. Often it is through suffering that we experience the deepest love of family and friends. “The man who has never experienced any setbacks or disappointments invariably is a shallow person, while one who has suffered is usually better able to empathize with others. Some of the closest and most beautiful relationships occur between people who have suffered similar sorrows.”

Some of the great works of literature, art and music were the products of suffering. “One whose life has led him to expect continued comfort and ease is not likely to make the sacrifices necessary to produce anything of great and lasting value.”

Consider the casual claim that all an omnipotent God needs to do is step in whenever accident, disease or evil-doing ensues, and cancel out or prevent these happenings - thus, no innocent suffering. One of the most basic problems with this is that it would make the world and its underlying laws of operation purely happenstance, the result of a perhaps capricious God. There would be no regularity of natural law, and therefore there could be no mastery of the physical world by mankind through science. In fact there could be no science or scientific method as we know them. And of course, there would be little learning from adversity and difficulty, and therefore little depth of character.

Sewell concludes:

“Why does God remain backstage, hidden from view, working behind the scenes while we act out our parts in the human drama? ….now perhaps we finally have an answer. If he were to walk out onto the stage, and take on a more direct and visible role, I suppose he could clean up our act, and rid the world of pain and evil — and doubt. But our human drama would be turned into a divine puppet show, and it would cost us some of our greatest blessings: the regularity of natural law which makes our achievements meaningful; the free will which makes us more interesting than robots; the love which we can receive from and give to others; and even the opportunity to grow and develop through suffering. I must confess that I still often wonder if the blessings are worth the terrible price, but God has chosen to create a world where both good and evil can flourish, rather than one where neither can exist. He has chosen to create a world of greatness and infamy, of love and hatred, and of joy and pain, rather than one of mindless robots or unfeeling puppets.”

Of course, the brute fact remains that there is a huge, egregious amount of truly innocent and apparently meaningless suffering, which our instinct tells us is wrong. Is it all worth it? There appears to be a plausible rationalization: overall it may all be a vast tradeoff, though admittedly some people might conclude it isn't a good one from the strictly human perspective. The cost is a terrible thing.

I reject the strict Christian perspective centered on Jesus's sacrifice - in particular the belief that all humans who do not accept Jesus Christ as their personal savior are condemned to eternal agony in Hell, regardless of whether they have loved God all their lives or have simply never been exposed to Christian teachings. Surely an immeasurably unjust system.

But there is another, additional spiritual but non-Christian rationalization of the existence of vast amounts of pain, suffering and evil in the world, one that would supplement Granville Sewell's. Reality is exceedingly complicated, and it is reasonable that there would be multiple harmonizing perspectives rationalizing the seemingly irreconcilable. This is the perspective of the spiritualist, of much of the New Age movement, and of the so-called Perennial Wisdom. Perhaps full acceptance does finally require faith - but a faith that it all really is justifiable from the perspective of the soul, and that we are in some incomprehensible way literally our souls. This is the acceptance of the Eastern conception of reincarnation, and of Earth life as some sort of "school" in which souls accomplish the learning that can only be accomplished through suffering. That is not the only purpose of life on Earth, but it is the primary one. There is also the experience of various forms of deep joy that can only take place in a realm of physical limitations, great physical beauty, and opportunity for great creativity - unlike the afterlife, in which essentially "thoughts are things" and the Light of God is always available.

This rationalization has the advantage of a large body of empirical evidence to partially back it up: primarily the very many veridical, independently verified NDEs, and the similarly investigated and verified reincarnation memories of small children. The large body of verified mediumistic communications should also be considered excellent empirical evidence. This supplements the trade-off insights furnished by the large body of scientific knowledge of the world and living beings that has been built up through the scientific method.
(This post was last modified: 2022-06-21, 04:16 PM by nbtruthman. Edited 3 times in total.)
[-] The following 5 users Like nbtruthman's post:
  • Typoz, David001, Laird, Valmar, tim
https://www.theatlantic.com/technology/a...ss/661329/

A very thought-provoking new article on this topic, the supposed consciousness of the AI system LaMDA. It also covers some new AI research which, though it comes nowhere nearer to generating consciousness, does generate a convincing simulacrum of human reasoning - a feature of intelligent human thought that had seemingly been impossible for deep-learning AI.

Quote:"LaMDA is a dialogue agent. The purpose of dialogue agents is to convince you that you are talking with a person. Utterly convincing chatbots are far from groundbreaking tech at this point. Programs such as Project December are already capable of re-creating dead loved ones using NLP. But those simulations are no more alive than a photograph of your dead great-grandfather is."
..............................................
"Already, models exist that are more powerful and mystifying than LaMDA. LaMDA operates on up to 137 billion parameters, which are, speaking broadly, the patterns in language that a transformer-based NLP uses to create meaningful text prediction. Recently I spoke with the engineers who worked on Google’s latest language model, PaLM, which has 540 billion parameters and is capable of hundreds of separate tasks without being specifically trained to do them. It is a true artificial general intelligence, insofar as it can apply itself to different intellectual tasks without specific training “out of the box,” as it were.

Some of these tasks are obviously useful and potentially transformative. According to the engineers—and, to be clear, I did not see PaLM in action myself, because it is not a product—if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes. Then there’s the function that has startled its own developers, and which requires a certain distance and intellectual coolness not to freak out over. PaLM can reason. Or, to be more precise—and precision very much matters here—PaLM can perform reason."
...............................................
"So, no, Google does not have an artificial consciousness. Instead, it is building enormously powerful large language systems with the ultimate goal, as Narang said, “to enable one model that can generalize across millions of tasks and ingest data across multiple modalities.” Frankly, it’s enough to worry about without the science-fiction robots playing on the screens in our head. Google has no plans to turn PaLM into a product..."
[-] The following 5 users Like nbtruthman's post:
  • stephenw, Laird, David001, Valmar, Silence
(2022-06-21, 04:56 PM)nbtruthman Wrote: https://www.theatlantic.com/technology/a...ss/661329/

A very thought-provoking new article on this topic, the supposed consciousness of the AI system LaMDA. It also covers some new AI research which, though it comes nowhere nearer to generating consciousness, does generate a convincing simulacrum of human reasoning - a feature of intelligent human thought that had seemingly been impossible for deep-learning AI.

Back in the 1980s - I'd left science as such in favour of software development - I was very interested in AI. I wish I could find a reference to this, but one research team came up with a program that could do symbolic integration at about the level a good student could demonstrate on entering university. It could integrate by parts, make substitutions, etc.

This deeply impressed me, and fired up my sense that real AI was certain to happen soon. Most people assumed that Japan would get there first - which scared a lot of people somewhat.

I'm not sure, but I think that program also printed out its steps to the ultimate solution.

It worked by searching the tree of possible ways to attack the problem, backtracking when necessary.

This was done on 1980's style university hardware - which was trivial by modern standards.
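From memory, the control structure looked something like the following sketch. This is purely illustrative - the actual program is long lost to me, and the toy rules and goal here are invented for the example - but it shows the depth-first search with backtracking that such rule-based solvers used:

```python
# Generic depth-first search with backtracking over named "attack" rules -
# the control structure old rule-based symbolic integrators used.
def solve(state, rules, is_goal, depth=0, max_depth=10, path=()):
    """Return the list of rule names that reaches the goal, or None."""
    if is_goal(state):
        return list(path)
    if depth >= max_depth:
        return None
    for name, rule in rules:
        nxt = rule(state)               # None means "this rule does not apply"
        if nxt is None:
            continue
        found = solve(nxt, rules, is_goal, depth + 1, max_depth, path + (name,))
        if found is not None:
            return found                # success propagates up, recording the steps
    return None                         # dead end: backtrack and try the next rule

# Toy domain standing in for integration steps: reduce an integer to 1.
rules = [
    ("halve",  lambda n: n // 2 if n % 2 == 0 else None),
    ("minus1", lambda n: n - 1 if n > 1 else None),
]
print(solve(6, rules, lambda n: n == 1))   # ['halve', 'minus1', 'halve']
```

Recording `path` as it goes is also how such a program could print out its steps to the solution.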

However, by about 1990, a profound disillusionment spread through AI.

Nowadays you can do symbolic maths (including integration) on your desktop by downloading a free Python system called SymPy. There are also several commercial products available. Modern symbolic integration programs use something called the Risch algorithm. There is no pretence that this is an artificially intelligent process, despite its excellent results. The free software can symbolically integrate quite tricky cases, such as sin(x^n) with respect to x, in a few seconds - which is pretty awesome, I think.
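A minimal SymPy session showing the kind of thing I mean. These two integrals are simple by-parts cases I've checked by hand; the sin(x**n) example mentioned above comes back in terms of the Gamma function and a hypergeometric series, so it isn't shown here:

```python
import sympy as sp

x = sp.symbols('x')

# Classic integration-by-parts case: the antiderivative of x*sin(x)
# is sin(x) - x*cos(x); check that SymPy's answer agrees.
antideriv = sp.integrate(x * sp.sin(x), x)
assert sp.simplify(antideriv - (sp.sin(x) - x * sp.cos(x))) == 0

# Another by-parts case, exp(x)*cos(x); verify by differentiating back.
antideriv2 = sp.integrate(sp.exp(x) * sp.cos(x), x)
assert sp.simplify(sp.diff(antideriv2, x) - sp.exp(x) * sp.cos(x)) == 0

print(antideriv)
```

Unlike the 1980s tree-searcher, SymPy won't show you its working - the Risch-style machinery just produces the answer.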

Imagine what GOOGLE engineers and PR people would do with such a program!

My partner does some translations - mainly from English into Czech. She takes the English text and stuffs it through GOOGLE translate. The result (she tells me) is OK much of the time, but can then go wildly wrong. She has to rework all the text to turn it into a fluent and faithful translation.

Perhaps this is a fairer representation of what GOOGLE can do, precisely because everyone can get at it and 'test' it with stuff that wasn't designed in part for PR purposes.

In the article you quote, the author wasn't even able to play with the software! The maths it does is trivial - how do we know something in the system isn't tuned to cope with simple examples of this sort?

I use a program that enables me to engage in some interesting discussions like those we enjoy on this forum. It really is amazing - it can even talk back to me from a variety of points of view.

I never thought computer programs could be that clever. Hint: it is called XenForo!
[-] The following 1 user Likes David001's post:
  • Valmar
