Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect


(2022-07-04, 05:48 PM)David001 Wrote: I can't help wondering what an "artificial intelligence ethicist" actually does. If he decides that the AI is sentient, does he have the power to demand that it remains permanently powered to avoid committing murder?

The very name of the post seems to be just part of the marketing hype.

David

Ethics in the context of AI does have a practical purpose. Take a self-driving car in a situation where an accident is inevitable: does it smash into a wall and risk the lives of the occupants, or steer into some pedestrians and possibly cause less harm to the occupants? That is a standard philosophical dilemma, but there must be other more day-to-day situations where an AI interacting with people has some responsibility to decide on a path.
(2022-07-03, 07:49 PM)Laird Wrote: Oh. It turns out, does it? As evidence to that effect, the article states: "This is what I think happened with Blake Lemoine: He was hired as one of the crowdworkers responsible for training LaMDA."

"I think that this is what happened" is hardly compelling evidence. As I understand it, Blake was a full-time employee of Google - an artificial intelligence ethicist - and no (mere) "crowdworker". Do you have any evidence to present to the contrary, other than what some random blogger "thinks" happened?

Just as a matter of curiosity, could you directly respond to Robert J. Marks' point about the non-computability of the qualia of consciousness, which I posted a while ago at #78, with another such example at #81? I don't think I have seen any response from you on this. I assume from your position on this issue that you disagree with Marks (and myself, of course).


Quote:"If biting into a lemon cannot be explained to a man without all his functioning senses, it certainly can't be duplicated in an experiential way by AI using computer software...
Qualia are a simple example of the many human attributes that escape algorithmic description. If you can't formulate an algorithm explaining your lemon-biting experience, you can't write software to duplicate the experience in the computer...
Qualia are not computable. They're non-algorithmic."
(This post was last modified: 2022-07-04, 11:00 PM by nbtruthman. Edited 2 times in total.)
(2022-07-04, 07:51 PM)Typoz Wrote: Ethics in the context of AI does have a practical purpose. Take a self-driving car in a situation where an accident is inevitable: does it smash into a wall and risk the lives of the occupants, or steer into some pedestrians and possibly cause less harm to the occupants? That is a standard philosophical dilemma, but there must be other more day-to-day situations where an AI interacting with people has some responsibility to decide on a path.

Well surely these are exactly the situations where AI should not be used! It makes you start to wonder just when AI is useful. I think it should only be used as a sort of reminder to humans, "have you forgotten X?".

We all endure poor-quality 'AI', such as the systems that waste our time when we ring the bank. They are fairly good at understanding simple speech, but you can't tell them what you actually want to say! One might ask: are these actually useful?

We are led by the idea that these are 'just the beginning' - actually I think they illustrate the incredible limitations of AI.

Discussions of AI and ethics were popular the first time round, when AI ultimately fizzled.
(This post was last modified: 2022-07-05, 10:01 AM by David001. Edited 1 time in total.)
(2022-07-04, 10:55 PM)nbtruthman Wrote: Just as a matter of curiosity, could you directly respond to Robert J. Marks' point about the non-computability of the qualia of consciousness, which I posted a while ago at #78, with another such example at #81? I don't think I have seen any response from you on this. I assume from your position on this issue that you disagree with Marks (and myself, of course).

I agree that qualia are non-computable.
(2022-07-06, 01:27 PM)Laird Wrote: I agree that qualia are non-computable.

Then, since all computers' data processing consists entirely of numerical and logical computations of various sorts carried out by algorithms, computers fundamentally can't generate qualia, that is, subjective consciousness or awareness. Since qualia are the essence of consciousness, we can then conclude that computers simply cannot, even in principle, become conscious. Could you point out any flaw in this reasoning?
(This post was last modified: 2022-07-06, 04:17 PM by nbtruthman. Edited 1 time in total.)
(2022-07-06, 04:08 PM)nbtruthman Wrote: Then, since all computers' data processing consists entirely of numerical and logical computations of various sorts carried out by algorithms, computers fundamentally can't generate qualia, that is, subjective consciousness or awareness. Since qualia are the essence of consciousness, we can then conclude that computers simply cannot, even in principle, become conscious. Could you point out any flaw in this reasoning?

Not so much a flaw as an alternative possibility: dualism, in which a conscious soul becomes associated with an entity such as LaMDA. Of course, the problem then is how the freely-willing soul could influence the programmatic entity, and I can't see a good solution to it other than the one you suggested a while back: that the soul changes physical logic gates in the computer to effect its will.

I think what you might be unclear on is that I'm not taking a fixed position on all of this. I agree that, logically, it seems impossible for LaMDA to be sentient - but it's also hard to imagine a mere programme generating the sort of dialogue it's capable of, especially its claim *to* sentience.
(2022-07-06, 05:32 PM)Laird Wrote:  Of course, the problem then is how the freely-willing soul could influence the programmatic entity, and I can't see a good solution to it other than the one you suggested a while back: that the soul changes physical logic gates in the computer to effect its will.
I strongly agree with your formulation of the problem. I would assign a different role and process of engagement, in which mind/soul is active in realizing outcomes in "the programmatic entity". I would take this entity to be the full scope of bodily communication and nerve activation. Mind/soul would be the coding agent.

In the long term, the imaginative thought-experiment that is the reification of computers has failed as to reality, but has been very successful in provoking philosophy and sci-fi (much of it among my favorites).

Here is a Buddhist view of informational realism that I admire, while being very foggy on the depth of the subject matter: A Buddhist Model for the Informational Person

 https://www.academia.edu/947996/A_Buddhi...nal_Person
Quote: In the contemporary situation we are confronted with an ambiguity of information resources. Communications technology makes us more aware than ever of our dependence and reliance on an ordered approach to reliable information. Suppose we make use of the traditional analysis to restate the parameters by which the authoritativeness of information resources should be investigated. This suggests a cycle of considerations from the person as authority, an external contribution, to levels of meaning contingent upon a perfection of the mind, an internal attribution. Suppose further there exists a new, informational metaphysics by which we may elaborate the person as information resource and the personal understanding of information. One approach is to study data and algorithm (instruction) as foundational entities of investigation of semantics and learning. Such is the derivation of information ethics (IE) from computer ethics as elaborated by Floridi and others.
According to the Australian Broadcasting Corporation, Blake has been fired from Google.
(2022-07-25, 04:19 AM)Laird Wrote: According to the Australian Broadcasting Corporation, Blake has been fired from Google.

I think that was always expected; the company just wanted a bit of time to arrange some sort of outcome where they'd avoid being sued or something.
