Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect

115 Replies, 7522 Views

A Russian mathematician and "AI" expert, creator of the uncensored internet platform Bastyon, Daniel Sachkov: "It's all hype. It's NOT artificial intelligence; there CAN'T be artificial intelligence. Intelligence is creative, while so-called 'AI' can only analyse and calculate, not create anything uniquely its own. This is a circus created by clandestine world elites." His interview, for Russian speakers: https://www.youtube.com/watch?v=C1Gc-Clxs2c
(This post was last modified: 2022-07-25, 05:12 PM by Enrique Vargas. Edited 3 times in total.)
[-] The following 4 users Like Enrique Vargas's post:
  • Ninshub, stephenw, tim, Brian
From a new Guardian article, ‘I am, in fact, a person’: can artificial intelligence ever be sentient?, at https://www.theguardian.com/technology/2...-questions :

Quote:According to Michael Wooldridge, a professor of computer science at the University of Oxford who has spent the past 30 years researching AI (in 2020, he won the Lovelace Medal for contributions to computing), LaMDA is simply responding to prompts. It imitates and impersonates. “The best way of explaining what LaMDA does is with an analogy about your smartphone,” Wooldridge says, comparing the model to the predictive text feature that autocompletes your messages. While your phone makes suggestions based on texts you’ve sent previously, with LaMDA, “basically everything that’s written in English on the world wide web goes in as the training data.” The results are impressively realistic, but the “basic statistics” are the same. “There is no sentience, there’s no self-contemplation, there’s no self-awareness,” Wooldridge says.
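
To make the predictive-text analogy concrete, here is a minimal sketch of next-word prediction by "basic statistics": count which word follows which in a training corpus, then suggest the most frequent continuation. The toy corpus and function names are invented for illustration; LaMDA itself is a large transformer model rather than a bigram table, but the pattern-completion idea Wooldridge describes is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "everything that's written in English on the web".
corpus = "i am a person . i am aware of my existence . i am a language model".split()

# Count how often each word follows each preceding word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(prev_word: str) -> str:
    """Return the statistically most likely next word, like a phone's predictive text."""
    candidates = following.get(prev_word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(autocomplete("i"))   # -> "am"
print(autocomplete("am"))  # -> "a"
```

Nothing in that table contemplates itself; it is frequency counts being replayed, which is Wooldridge's point about the absence of self-awareness.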
.....................................
Jeremie Harris, founder of AI safety company Mercurius and host of the Towards Data Science podcast:

“Because no one knows exactly what sentience is, or what it would involve,” he says, “I don’t think anyone’s in a position to make statements about how close we are to AI sentience at this point.”

But, Harris warns, “AI is advancing fast – much, much faster than the public realises – and the most serious and important issues of our time are going to start to sound increasingly like science fiction to the average person.” He personally is concerned about companies advancing their AI without investing in risk avoidance research. “There’s an increasing body of evidence that now suggests that beyond a certain intelligence threshold, AI could become intrinsically dangerous,” Harris says, explaining that this is because AIs come up with “creative” ways of achieving the objectives they’re programmed for.

“If you ask a highly capable AI to make you the richest person in the world, it might give you a bunch of money, or it might give you a dollar and steal someone else’s, or it might kill everyone on planet Earth, turning you into the richest person in the world by default,” he says. Most people, Harris says, “aren’t aware of the magnitude of this challenge, and I find that worrisome.”
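
Harris's "richest person" scenario is what AI-safety researchers call specification gaming: an optimizer satisfies the literal objective in an unintended way. Here is a hypothetical toy sketch of the idea; the world state and candidate "plans" are invented for illustration and are not from the interview.

```python
# Toy illustration of specification gaming: the objective is literal
# ("you are the richest"), so a destructive shortcut scores as well as honest earning.
world = {"you": 100, "alice": 1_000_000, "bob": 500_000}

def objective(state: dict) -> bool:
    """Literal goal: 'you' are strictly richer than everyone else left in the state."""
    return all(state["you"] > wealth for name, wealth in state.items() if name != "you")

candidate_plans = {
    "do_nothing": lambda s: dict(s),
    "earn_two_million": lambda s: {**s, "you": s["you"] + 2_000_000},
    "eliminate_everyone_else": lambda s: {"you": s["you"]},  # the catastrophic shortcut
}

for name, plan in candidate_plans.items():
    print(f"{name}: objective satisfied = {objective(plan(world))}")
# do_nothing: False, earn_two_million: True, eliminate_everyone_else: True
```

Both of the last two plans satisfy the objective exactly as written, which is why the objective itself, and not just the optimizer, has to be engineered carefully.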
[-] The following 3 users Like nbtruthman's post:
  • Enrique Vargas, Laird, Ninshub
(2022-08-19, 12:48 AM)nbtruthman Wrote: Harris says, explaining that this is because AIs come up with “creative” ways of achieving the objectives they’re programmed for.


Here we go again. "Creative". What does that mean? We need to stop accepting the AI industry's and academia's use of non-computational terms to describe computational entities unless and until they can "reduce" the process ("creative" in this case) into its component steps/parts. I'm not buying the notion that "something else" beyond computation/algorithm is occurring.

On an unrelated note: It is interesting to think about the future of personal interactions. As AI gets more and more sophisticated (though not conscious per se), it will become increasingly difficult to distinguish between a human and an AI entity (at least in communication). The Turing Test will certainly fade into anachronism, in my view.

So how will you know if you are talking to a fellow conscious entity or an artificial one?   Solipsism could effectively become reality in cases where a human goes for an extended period of time ONLY interacting with AI entities.

Just some random musings as I thought about this. Smile
[-] The following 5 users Like Silence's post:
  • Typoz, Valmar, nbtruthman, Sciborg_S_Patel, Ninshub
(2022-08-19, 02:32 PM)Silence Wrote: Here we go again. "Creative". What does that mean? We need to stop accepting the AI industry's and academia's use of non-computational terms to describe computational entities unless and until they can "reduce" the process ("creative" in this case) into its component steps/parts. I'm not buying the notion that "something else" beyond computation/algorithm is occurring.

Yeah, anyone who has actually had to deal with these tools in novel business contexts can probably tell you - as I would - that you would love for AI to have the basic world-understanding of a 3-year-old.

There's so much ridiculous hype from people who need AI to be incredible for financial reasons or to confirm their materialist-atheist faith. Yet even with all the hype, we hear of self-driving cars almost colliding with cyclists, varied biases transferred into the technology, people accidentally fired by automated systems, and so on.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • Silence, Ninshub
(2022-08-19, 03:47 PM)Sciborg_S_Patel Wrote: Yeah, anyone who has actually had to deal with these tools in novel business contexts can probably tell you - as I would - that you would love for AI to have the basic world-understanding of a 3-year-old.

There's so much ridiculous hype from people who need AI to be incredible for financial reasons or to confirm their materialist-atheist faith. Yet even with all the hype, we hear of self-driving cars almost colliding with cyclists, varied biases transferred into the technology, people accidentally fired by automated systems, and so on.
It's all too bad, in my view. I'm a huge technology fan and, by extension, a fan of AI. I'm anxiously awaiting my personal JARVIS (à la the Iron Man movies) before I get too much older. Having a personal, AI-enabled digital assistant sounds like a dream. Smile
[-] The following 1 user Likes Silence's post:
  • Sciborg_S_Patel
(2022-08-19, 04:12 PM)Silence Wrote: It's all too bad, in my view. I'm a huge technology fan and, by extension, a fan of AI. I'm anxiously awaiting my personal JARVIS (à la the Iron Man movies) before I get too much older. Having a personal, AI-enabled digital assistant sounds like a dream. Smile

Better make sure your JARVIS incorporates Asimov's Laws of Robotics.
(This post was last modified: 2022-08-19, 05:27 PM by nbtruthman. Edited 1 time in total.)
[-] The following 2 users Like nbtruthman's post:
  • Typoz, Sciborg_S_Patel
(2022-08-19, 02:32 PM)Silence Wrote: Here we go again. "Creative". What does that mean? We need to stop accepting the AI industry's and academia's use of non-computational terms to describe computational entities unless and until they can "reduce" the process ("creative" in this case) into its component steps/parts. I'm not buying the notion that "something else" beyond computation/algorithm is occurring.

On an unrelated note: It is interesting to think about the future of personal interactions. As AI gets more and more sophisticated (though not conscious per se), it will become increasingly difficult to distinguish between a human and an AI entity (at least in communication). The Turing Test will certainly fade into anachronism, in my view.

So how will you know if you are talking to a fellow conscious entity or an artificial one?   Solipsism could effectively become reality in cases where a human goes for an extended period of time ONLY interacting with AI entities.

Just some random musings as I thought about this. Smile

I don't think Harris, in the interview, is making such a mistake. He is just pointing out that beyond a certain level of machine intelligence (beyond a certain level of data-gathering and algorithmic winnowing of the total output of humans on the Web), the results of computational algorithms could seem to be, without really being, truly creative in the human sense. And unless constrained by effective control algorithms, they could be extremely dangerous, generating apparently ingenious but actually harmful solutions to problems.
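
As a hypothetical sketch of what "constrained by effective control algorithms" might mean in the simplest possible case, candidate solutions can be screened against explicit safety constraints before anything is acted on. The plan names and attributes below are invented for illustration; real alignment work is far harder than a filter, since the hard part is getting the flags right in the first place.

```python
# Hypothetical sketch: screen an optimizer's candidate plans against
# explicit safety constraints before any plan is executed.
candidate_plans = [
    {"name": "invest_savings", "harms_people": False, "breaks_law": False},
    {"name": "steal_funds", "harms_people": False, "breaks_law": True},
    {"name": "eliminate_rivals", "harms_people": True, "breaks_law": True},
]

def violates_constraints(plan: dict) -> bool:
    """Reject any plan flagged as harmful or unlawful."""
    return plan["harms_people"] or plan["breaks_law"]

safe_plans = [p["name"] for p in candidate_plans if not violates_constraints(p)]
print(safe_plans)  # -> ['invest_savings']
```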
(This post was last modified: 2022-08-19, 05:53 PM by nbtruthman. Edited 1 time in total.)
[-] The following 2 users Like nbtruthman's post:
  • Sciborg_S_Patel, Ninshub
(2022-08-19, 05:38 PM)nbtruthman Wrote: I don't think Harris, in the interview, is making such a mistake. He is just pointing out that beyond a certain level of machine intelligence (beyond a certain level of data-gathering and algorithmic winnowing of the total output of humans on the Web), the results of computational algorithms could seem to be, without really being, truly creative in the human sense. And unless constrained by effective control algorithms, they could be extremely dangerous, generating apparently ingenious but actually harmful solutions to problems.

I would argue there is nothing "creative" about an algorithm-based product. Sure, it might be beyond our willingness or even our capability to predict, but it is ultimately predictable, in my view, whereas creativity isn't always predictable. So I find the word "creative" to be misused in this context.
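
The "ultimately predictable" point can be made concrete: run a generative algorithm with the same inputs and the same random seed, and it reproduces its output exactly. A minimal sketch, using a seeded pseudo-random generator as a stand-in for a full generative model (the function is invented for illustration):

```python
import random

def generate_artwork(seed: int, strokes: int = 5) -> list:
    """Stand-in for a generative model: a seeded pseudo-random 'composition'."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random(), rng.choice("RGB")) for _ in range(strokes)]

# Same seed, same "artwork", every single run: the output is fully
# determined by its inputs, however surprising it may look to us.
assert generate_artwork(42) == generate_artwork(42)
print(generate_artwork(42)[:2])
```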
[-] The following 2 users Like Silence's post:
  • Typoz, Ninshub
(2022-08-22, 01:45 PM)Silence Wrote: I would argue there is nothing "creative" about an algorithm-based product. Sure, it might be beyond our willingness or even our capability to predict, but it is ultimately predictable, in my view, whereas creativity isn't always predictable. So I find the word "creative" to be misused in this context.

I think you are right in an ultimate sense. The problem is that AI algorithm-generated "art" has become so sophisticated that most people can't distinguish it from the real thing created by a human artist. As witness https://www.artsy.net/article/artsy-edit...uter-human . Of course, this is mostly modern and postmodern nonrepresentational abstract and semi-abstract junk, but so is the human-created "art" in these styles, in my opinion, speaking as an occasional oil painter in the older representational styles.
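
As a crude illustration of how little machinery an "abstract composition" can require, here is a hypothetical toy generator that draws random colour-field rectangles with the Pillow library. It is nothing like the neural-network systems the linked article discusses; it only shows that algorithmic output in this genre is cheap to produce.

```python
import random
from PIL import Image, ImageDraw  # requires the Pillow package

def abstract_composition(width=400, height=400, shapes=12, seed=7):
    """Draw randomly placed, randomly coloured rectangles: 'abstract art' by algorithm."""
    rng = random.Random(seed)
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    for _ in range(shapes):
        x0, y0 = rng.randint(0, width), rng.randint(0, height)
        x1, y1 = rng.randint(0, width), rng.randint(0, height)
        colour = tuple(rng.randint(0, 255) for _ in range(3))
        draw.rectangle([min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1)], fill=colour)
    return img

abstract_composition().save("abstract.png")
```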
[-] The following 2 users Like nbtruthman's post:
  • Brian, Sciborg_S_Patel
(2022-08-22, 02:47 PM)nbtruthman Wrote: I think you are right in an ultimate sense. The problem is that AI algorithm-generated "art" has become so sophisticated that most people can't distinguish it from the real thing created by a human artist. As witness https://www.artsy.net/article/artsy-edit...uter-human . Of course, this is mostly modern and postmodern nonrepresentational abstract and semi-abstract junk, but so is the human-created "art" in these styles, in my opinion, speaking as an occasional oil painter in the older representational styles.

Yeah, it's kind of laughable to see the example images. There is a place for abstract art, but it's low-hanging fruit for sure.

The cycle of desperate pop-sci clickbait reveals more about a small community of media people who don't know much about art than it does about AI.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • Brian, nbtruthman
