AI machines aren’t ‘hallucinating’. But their makers are


(2023-08-28, 08:47 AM)Typoz Wrote: I'm not sure that even mentioning the word conscious or the idea of consciousness is relevant in a discussion of AI. The essence of AI is that it is mechanistic, algorithmic. That is, it follows a set of rules and instructions, which is how it arrives at some quirky and absurd outcomes.

I was thinking about the mistakes made when driving a car. I know when I first started driving on my own I made mistakes, often quite frightening ones, but each time I learned something. My driving changed as a result. Even with much experience I still made mistakes when in an unfamiliar situation, such as in a foreign country where road layouts and expected habits can be very different. Again my driving adapted and changed very rapidly.

In the case of AI, when it makes an obvious mistake, the system itself does not have any way to quickly learn and adapt; indeed it seemingly does not even detect that something 'bad' has happened. The concepts we use as humans simply don't translate into the algorithmic rules; they appear only as probabilities and weightings, not as meanings.

Good point. I do think the AI system would at least have the capability to detect that it has made an error. What it couldn't do is readily come up with an operational fix to update its software with, since it is limited to an existing database which evidently didn't include the (perhaps rare) anomaly that caused the accident or near-accident.
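
To make the "probabilities and weightings" point concrete, here is a minimal sketch in plain Python. The action names and numbers are made up for illustration and don't come from any real driving system: the model's output is just a score distribution over options, so the most it can do about a possible mistake is notice that no option stands out, while actually correcting the behaviour would mean retraining on new data.

[code]
import math

def softmax(scores):
    """Turn raw scores (weightings) into a probability distribution."""
    m = max(scores)
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three possible driving actions in an
# unfamiliar situation (numbers invented for illustration).
scores = {"brake": 1.2, "swerve": 1.1, "continue": 1.0}
probs = dict(zip(scores, softmax(list(scores.values()))))

chosen = max(probs, key=probs.get)
confidence = probs[chosen]

print(probs)               # roughly {'brake': 0.37, 'swerve': 0.33, 'continue': 0.30}
print(chosen, confidence)  # 'brake' wins, but only just

# Detection is the easy part: flag the decision when no option is clearly favoured.
if confidence < 0.5:
    print("Low confidence - this choice may be an error.")

# Correction is the hard part: the weights that produced these scores are
# frozen at deployment, so 'learning from the mistake' means collecting new
# training data and retraining offline, not adapting on the spot the way a
# human driver does.
[/code]

Run as written, it prints three probabilities near one third each and the low-confidence warning. That is about as far as "knowing it might be wrong" goes without a retraining pipeline behind it, which is the gap between detecting an error and fixing one.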
The following 3 users Like nbtruthman's post: Sciborg_S_Patel, Ninshub, Typoz

