AI machines aren’t ‘hallucinating’. But their makers are

(2023-05-12, 07:08 PM)Sciborg_S_Patel Wrote: Hmm...not sure about the bike case being a huge problem for AI, guess it would depend on the laws of the particular location.
The point is that you start out with a clean definition of theft, and it becomes festooned with additional arbitrary conditions, like the bike must not be moved more than 5 metres and/or be kept for more than 10 minutes. But then you have the case where someone takes the wrong bike by mistake and returns it 20 minutes later.

And so on, and so on.
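
To make this concrete, here is a minimal sketch in Python (the thresholds and function names are invented for illustration and correspond to no real statute) of how a clean definition accumulates conditions and still fails:

Code:
# A "clean" definition of theft: taking property without consent.
def is_theft_v1(took_without_consent):
    return took_without_consent

# Version 2, festooned with arbitrary thresholds so that a student
# who merely shifts a bike to free their own is excused.
def is_theft_v2(took_without_consent, metres_moved, minutes_held):
    if not took_without_consent:
        return False
    return metres_moved > 5 or minutes_held > 10

# The next counterexample slips straight through: an honest mistake,
# the wrong bike taken and returned 20 minutes later, is still "theft".
print(is_theft_v2(True, metres_moved=200, minutes_held=20))  # True

Each patch invites the next counterexample, which is exactly the point.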
Quote:But yes in general I think trying to have machine "learning" based lawyers and judges will be disastrous if not just an embarrassing failure.
Agreed!

David
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
(2023-05-12, 07:04 PM)David001 Wrote: Interestingly enough, near the end of the first decade of AI hype (1980-1989), the cry went up that we needed a concrete example of how AI could be useful. Someone came up with the idea that AI would be great for processing legal arguments.

I would argue that even there, there is a problem. Imagine a stack of bikes as created by students attending a lecture. Someone moves a bike to extract their own. Is it valid to argue that this involves taking a bike without the owner's consent?

David

I don’t think that would be that big of an issue if mens rea can be coded into the equation based on simple facts, i.e., did he keep possession of the bike for an extended time period? Did he damage it by mishandling it roughly (negligence)? Preparing/processing paperwork is usually not done by the lawyers themselves, but by interns and clerks. The part that is likely beyond AI, both because it would probably become a fiasco and because the capacity is simply lacking, is the subjective part. The attribution given to the different elements of a case, the counterweighting of arguments, is beyond AI at the moment. Even if you created an algorithm and fed it how the most memorable judges have handed down sentences for the last 300 years, making it the ultimate mimicry machine, it would go against what the legal system stands for (peer-based sanctioning, mediation), and a majority of people would protest its use against them in court.
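
As a toy illustration (the factors and weights below are hypothetical, not a claim about any real system), the "simple facts" are easy enough to code; what has no principled encoding is the weight each fact deserves:

Code:
# Hypothetical mens rea factors reduced to simple facts.
def objective_facts(minutes_held, returned_voluntarily, negligent_damage):
    return {
        "extended_possession": minutes_held > 60,
        "returned_voluntarily": returned_voluntarily,
        "negligent_damage": negligent_damage,
    }

# These weights stand in for the counterweighting of arguments a human
# judge performs; any numbers chosen here are arbitrary, which is
# exactly the problem.
WEIGHTS = {
    "extended_possession": 0.5,
    "returned_voluntarily": -0.6,
    "negligent_damage": 0.4,
}

def culpability_score(facts):
    return sum(WEIGHTS[k] for k, v in facts.items() if v)

# The mistaken-bike case: held 20 minutes, returned, no damage.
print(culpability_score(objective_facts(20, True, False)))  # -0.6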
"Deep into that darkness peering, long I stood there, wondering, fearing, doubting, dreaming dreams no mortal ever dared to dream before..."
[-] The following 1 user Likes E. Flowers's post:
  • Sciborg_S_Patel
(2023-05-12, 07:15 PM)David001 Wrote: The point is that you start out with a clean definition of theft, and it becomes festooned with additional arbitrary conditions, like the bike must not be moved more than 5 metres and/or be kept for more than 10 minutes. But then you have the case where someone takes the wrong bike by mistake and returns it 20 minutes later.

And so on, and so on.
Agreed!

David

Yeah, I suspect a lot of the misplaced confidence in machine "learning" comes from technophiles whose lives are divorced from the daily realities of the "lesser" jobs they want to replace.

I have been friends with people in law enforcement and the parole/justice system, along with criminology PhDs, and the realities of the field seem far more complex than anything I've domain-modeled as a programmer.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-05-12, 07:23 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 3 users Like Sciborg_S_Patel's post:
  • nbtruthman, Brian, David001
Is Avoiding Extinction from AI Really an Urgent Priority?

Arvind Narayanan

Quote:Start with the focus on risks from AI. This is an ambiguous phrase, but it implies an autonomous rogue agent. What about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a “rogue human” with AI’s assistance.

Quote:Indeed, focusing on this particular threat might exacerbate the more likely risks. The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • sbu
ChatGPT (OpenAI) is apparently in trouble. Developing these AIs and promoting them in the marketplace is turning out to be a very money-losing enterprise, a bad sign for the future of this business. A new article going into this is at https://www.firstpost.com/tech/news-anal...2.html/amp.

I for one would shed few tears over such a development, given all the potential harm the ultimate development of these systems may entail. The only bad fallout might be discouraging enterprises from designing and promulgating specialized medical/biological AI applications of real benefit, such as the AI system developed to derive three-dimensional protein structures from the protein's amino acid sequence data.

Quote:"OpenAI may go bankrupt by 2024, AI bot costs company $700,000 every day
OpenAI spends about $700,000 a day, just to keep ChatGPT going. The cost does not include other AI products like GPT-4 and DALL-E2. Right now, it is pulling through only because of Microsoft's $10 billion funding

OpenAI, the AI studio that practically started the conversation around AI among regular, non-technical folks, may be in massive trouble.

In its bid to become the face of generative AI through their AI chatbot ChatGPT, Sam Altman’s AI development studio has put itself in a position, where it might have to soon declare bankruptcy, as per a report by Analytics India Magazine.

....Sam Altman’s OpenAI is burning through cash at the moment. Furthermore, despite their attempt to monetise GPT-3.5 and GPT-4, OpenAI is not generating enough revenue to break even at this point. This is leading to an alarming situation.

(Part of this is that) the user base is in decline:
While OpenAI and ChatGPT opened up to a wild start and had a record-breaking number of sign-ups in its initial days, it has steadily seen its user base decline over the last couple of months. According to SimilarWeb, July 2023 saw its user base drop by 12 per cent compared to June – it went from 1.7 billion users to 1.5 billion users. Do note that this data only shows users who visited the ChatGPT website, and does not account for users who are using OpenAI’s APIs."
[-] The following 2 users Like nbtruthman's post:
  • Ninshub, Sciborg_S_Patel
(2023-08-17, 04:40 PM)nbtruthman Wrote: ChatGPT (OpenAI) is apparently in trouble. Developing these AIs and promoting them in the marketplace is turning out to be a very money-losing enterprise, a bad sign for the future of this business. A new article going into this is at https://www.firstpost.com/tech/news-anal...2.html/amp.

I for one would shed few tears over such a development, given all the potential harm the ultimate development of these systems may entail. The only bad fallout might be discouraging enterprises from designing and promulgating specialized medical/biological AI applications of real benefit, such as the AI system developed to derive three-dimensional protein structures from the protein's amino acid sequence data.

I agree with you that ChatGPT is too generalized to be of much commercial use. It's the more specialized models that are going to replace jobs in the years to come. OpenAI's potential trouble will in no way discourage enterprises from investing in AI. It's not the technology that's failing but OpenAI's specific lack of a sound business case. I wouldn't read too much into their daily spending anyway: $700,000 a day works out to roughly $255 million a year, small next to Microsoft's $10 billion commitment, and since ChatGPT is hosted on Azure it's mainly an accounting matter between companies in the same corporate group.
[-] The following 2 users Like sbu's post:
  • Sciborg_S_Patel, Ninshub
The term "AI" is emotionally loaded and prone to bias, but...

I remain very optimistic about the future of digital technology. Getting there won't be without bumps and bruises, but I do think we'll find our way to a better future, based in large part on the tech we develop.

I still find ChatGPT to be pretty amazing and I have used it in some small but impressive ways commercially (i.e., in my profession).  And you just know this thing is such a rudimentary glimpse into what can (and I believe will) be.
[-] The following 1 user Likes Silence's post:
  • Sciborg_S_Patel
Ideally, ChatGPT would be hooked up to one or more specified repositories of information, not to whatever it could glean from social media etc.
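
Something along those lines now goes by the name retrieval-augmented generation. Here is a toy sketch of the idea in Python (the repository contents and the crude keyword scoring are invented for illustration; a real system would use a proper search index and an actual model call):

Code:
# Answer only from a curated repository, not from whatever the model
# gleaned from social media during training.
REPOSITORY = {
    "frogs.txt": "Amazonian tree frogs are arboreal amphibians ...",
    "iceland.txt": "Iceland between 1600 and 1650 saw ...",
}

def retrieve(question, k=1):
    # Crude keyword overlap stands in for a real search index.
    words = set(question.lower().split())
    ranked = sorted(REPOSITORY.values(),
                    key=lambda doc: -len(words & set(doc.lower().split())))
    return ranked[:k]

def build_prompt(question):
    context = "\n".join(retrieve(question))
    return ("Answer using ONLY the sources below.\n"
            "Sources:\n" + context + "\n\nQuestion: " + question)

# The assembled prompt would then go to whichever model one trusts;
# the point is that the model sees only the curated sources.
print(build_prompt("What do we know about Amazonian tree frogs?"))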

However, it is worth remembering that Silence was nearly silenced for good by an AI on two occasions, so I suspect they may never deliver definitive information.

David
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
I have been trying to describe in a nutshell what I think is wrong with AI - particularly if it is used as an analogy to human or animal consciousness.

Imagine for a moment that you were asked to take a subject about which you knew absolutely nothing - Amazonian tree frogs, or the history of Iceland from 1600 to 1650, or whatever. You were well motivated (with cash or whatever) to spend a week collecting information about the subject using Google. I'll bet that many of you would be able to give a talk on the subject, or maybe even write a paper on it.

That is a very close analogy to what ChatGPT does for each customer. Because it caters for many customers, it collects much of that information in advance during training, but that is irrelevant to the argument.

Now, if you did that experiment, you might say that you were indeed conscious, but just how much of your consciousness would you be using for that week of intensive work? Normally, if you spent a week like that, it would be for a personal reason: a lot of your consciousness would be directed to larger ideas and connected with your own personal themes, while the conscious effort directed at the search task would be fairly limited.

We have always known that computers can excel at tasks that require much conscious effort from us, such as evaluating large arithmetic expressions to high precision. So the fact that we would deploy only a narrow slice of our consciousness on this search task does not in any way imply that an AI has demonstrated its consciousness by doing the same, only faster.

I would urge anyone who hasn't already to read:
https://www.amazon.co.uk/Restless-Clock-...022652826X

(Sorry, I know I have posted this link before)

This is a remarkable account of the intellectual argument that swirled around water-powered ornamental features! The question was whether these ornaments, which appear to act consciously, were actually displaying consciousness. Even Leibniz was involved in the discussion!

I was given that link by J. Scott Turner, an intelligent design author, with whom I had an email discussion after reading one of his books:

https://en.wikipedia.org/wiki/J._Scott_Turner

Actually, I am starting to wonder whether we know that we are conscious in an absolute sense - just as many people feel they know certain information after an NDE or other psychic event. Maybe, psychic encounters aside, knowing that we are indeed conscious is the only form of absolute knowledge we can be certain of.

David
(This post was last modified: 2023-08-27, 01:44 PM by David001. Edited 3 times in total.)
[-] The following 2 users Like David001's post:
  • Ninshub, Sciborg_S_Patel
(2023-08-27, 11:01 AM)David001 Wrote: I have been trying to describe in a nutshell what I think is wrong with AI - particularly if it is used as an analogy to human or animal consciousness.

Imagine for a moment that you were asked to take a subject about which you knew absolutely nothing - Amazonian tree frogs, or the history of Iceland from 1600 to 1650, or whatever. You were well motivated (with cash or whatever) to spend a week collecting information about the subject using Google. I'll bet that many of you would be able to give a talk on the subject, or maybe even write a paper on it.

That is a very close analogy to what ChatGPT does for each customer. Because it caters for many customers, it collects much of that information in advance during training, but that is irrelevant to the argument.

Now, if you did that experiment, you might say that you were indeed conscious, but just how much of your consciousness would you be using for that week of intensive work? Normally, if you spent a week like that, it would be for a personal reason: a lot of your consciousness would be directed to larger ideas and connected with your own personal themes, while the conscious effort directed at the search task would be fairly limited.

We have always known that computers can excel at tasks that require much conscious effort from us, such as evaluating large arithmetic expressions to high precision. So the fact that we would deploy only a narrow slice of our consciousness on this search task does not in any way imply that an AI has demonstrated its consciousness by doing the same, only faster.

.................................

David

I think it is necessary to cut to the chase here. Most importantly, this boils down to an existential, fundamental difference between your week's work on Amazonian tree frogs, for instance, and what ChatGPT would do on the same subject. Afterwards, you would know a lot about the subject (having of course forgotten some of the material), whereas ChatGPT would continue, as before, to know absolutely nothing about anything, being at base, despite incredible data-processing power, an algorithmic computing system. Of course, it would have accumulated and integrated a far more massive amount of information than you did, and lost none of it. The key point, though, is that unlike you, it continues to know absolutely nothing. ChatGPT has not demonstrated consciousness at all in searching for and integrating all this information; it has just demonstrated an advanced degree of information processing.

This is my contention. I think that thought, consciousness, and awareness are fundamentally non-algorithmic and non-computable, period, and that this is enough of an argument. All that is necessary is for the AI proponent to show how awareness can "emerge" from computation, which has existentially different properties. Or perhaps the AI proponent could prove the positive side: that ChatGPT (or some much more advanced version of such an AI) truly is conscious. Unfortunately, that doesn't seem possible, because any given degree of uncanny resemblance to a conscious entity could be expected to result from sufficiently advanced mimicry of human sentience - except by the unlikely resort of finding a psychic medium who can connect with the AI as a sentient entity and report that experience.
[-] The following 3 users Like nbtruthman's post:
  • Ninshub, Valmar, Sciborg_S_Patel
