Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect
(2022-06-13, 08:20 PM)Sciborg_S_Patel Wrote: Machine Learning is, IMO, less an AI advancement than it is a clever magic trick using probability/statistics.

Do you not, then, think that there's a meaningful correspondence between the neural networks implemented in machine learning, and the neural networks in biological brains?

(2022-06-14, 12:34 AM)Laird Wrote: Do you not, then, think that there's a meaningful correspondence between the neural networks implemented in machine learning, and the neural networks in biological brains?

I think NNs in a Turing machine are a useful programmatic structure inspired by neurons in the brain, but beyond that I'm not convinced it's meaningful.
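For readers unfamiliar with what that "programmatic structure inspired by neurons" actually is: an artificial neuron is just a weighted sum passed through a squashing function. A minimal sketch (illustrative only, not any particular framework's API):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum squashed by a sigmoid.

    Despite the name, this is plain arithmetic on numbers - the
    resemblance to a biological neuron is loose inspiration, not a model.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid nonlinearity

# Example: two inputs with hand-picked weights
print(neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1))
```

A machine-learning "neural network" is many of these wired together, with the weights tuned statistically against training data, which is why some see it as probability/statistics rather than anything brain-like.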
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
Thank you for admitting me. I have no idea where I am on the forum yet, what conversation I am at, etc. Bear with me. Leeann
(2022-06-14, 05:33 AM)Leeann Wrote: Thank you for admitting me. I have no idea where I am on the forum yet, what conversation I am at, etc. Bear with me. Leeann

Welcome, Leeann! We have a forum for member introductions if you'd like to start a thread there to introduce yourself - otherwise, please feel free to just look around and chime in wherever it pleases you.

(2022-06-14, 12:35 AM)Laird Wrote: More great questions to put to it! We should compile a list and petition Google for the right to do exactly that.

I'm not sure this line of approach is going to lead anywhere (regardless of corporate willingness to cooperate). I'd start instead with the hardware and software. If not the software itself, then a description/specification from which a similar system could be built/programmed. Examining a comparable system of which everything is fully known would at least allow proper research, rather than dealing with leaks from Google which are constrained by all sorts of legal battles over proprietary information.
I get what you're saying, @Typoz. My suggestion was only semi-serious, because I don't really expect that Google would entertain it. That said, it would be really fascinating just to see what sort of answers/results we did get if Google were to accede to such a hypothetical request. We have some really interesting questions in this thread already, with potentially more to come.
I think that the problem, even if one were able to replicate the system on one's own hardware, is that the software is so complex and abstruse in its implementation of, for example, neural networks, that research would be very difficult. Even the developers of these complex machine learning neural networks have no idea why or how they do what they do at any level of detail - much less the AI agents themselves, who seem unable to introspect: hence the question I posed up-thread asking the AI to explain the chess moves it and its opponent made in a chess game. If it were actually able to introspect into that, then that would, as far as I know, be a novel breakthrough. But in any case: yes, of course you are right that having access to one's own implementation of the system would be preferable to having mediated, limited access to an otherwise proprietary system via Google.

(2022-06-13, 08:20 PM)Sciborg_S_Patel Wrote: Honestly the whole "Are you sentient? Do you have feelings?" stuff made me feel the AI was trained. In fact IIRC there was a very similar case a year or so ago with a different AI training base but similar responses.

Decades ago a new text adventure was released that had the most complex vocabulary ever seen. It was called The Pawn; you can probably find it emulated on the web somewhere. Anyway, I read an article about it: how it could understand sentences like "plant pot plant in plant pot". It described how the programmers showed it to an AI expert to see what he thought. He typed in "I think therefore I am" and the computer replied "Oh, do you?" A really neat reply, and I expect the expert was impressed. I certainly was. Years later, I played it. Turns out that "Oh, do you?" is just a set phrase meaning "I don't understand what you just typed." I wanted to go back to my teenage self and tell him that he'd been duped!

(2022-06-13, 03:50 AM)Laird Wrote: Guys, this, if not a hoax, is pretty huge. A Google engineer has claimed that Google's LaMDA AI is sentient - and he's reproduced a discussion he had with it as proof.

https://www.msn.com/en-us/news/technolog...ar-AAYpAbb

Quote: The transcript used as evidence that a Google AI was sentient was edited and rearranged to make it 'enjoyable to read'
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung
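The Pawn anecdote above is worth making concrete: a canned fallback reply can read as wit by accident. A toy parser in that spirit (purely hypothetical, not The Pawn's actual code):

```python
KNOWN_VERBS = {"look", "take", "plant", "go"}

def parse(command):
    """Toy text-adventure parser with a catch-all response.

    Any sentence whose first word isn't a recognised verb gets the
    same stock reply - which, depending on the input, can look like
    a clever retort rather than a failure to understand.
    """
    words = command.lower().split()
    if words and words[0] in KNOWN_VERBS:
        return f"You {' '.join(words)}."
    return "Oh, do you?"  # set phrase meaning "I don't understand"

print(parse("take lamp"))               # a recognised command
print(parse("I think therefore I am"))  # falls through to the set phrase
```

The same worry applies to the LaMDA transcripts: impressive-looking answers to "Are you sentient?" need not indicate understanding if the system was trained on (or prompted toward) exactly that kind of exchange.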