Epic AI Fails — A List of Failed Machine Learning Projects


Epic AI Fails — A List of Failed Machine Learning Projects

Mohit Pandey

Quote:Data shows that nearly a quarter of companies reported an AI project failure rate of up to 50%. In another study, nearly 78% of AI or ML projects stalled at some stage before deployment, and 81% of respondents found the process of training AI with data more difficult than they expected.

Quote:AI in healthcare is clearly a risky business. This was further proven when IBM’s Watson started providing incorrect and, in several cases, unsafe recommendations for the treatment of cancer patients. As in the case of Google’s diabetic retinopathy detection, Watson was also trained on unreliable scenarios and hypothetical patient data.

Initially it was trained on real data but, since that proved difficult for the medical practitioners, they shifted to hypothetical data. Documents revealed by Andrew Norden, IBM’s former deputy health chief, showed that instead of learning the right treatment methods, the model was trained to follow the treatment preferences of the doctors assisting with its training.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


The following 3 users Like Sciborg_S_Patel's post:
  • Brian, ersby, Typoz
Thanks Sci, that is a super list of dodgy AI!

When you consider that these were all high-profile projects, I think it tells you a lot about AI.

I was a bit staggered by the smoothness of ChatGPT, but I think at heart it simply parses text and reports the result - indeed, if you ask it, it says that is what it does. It is then given some way to exclude parts of the internet that are saying dissident things.

These programs are undoubtedly fascinating in what they tell you about human consciousness - both the parts that can be mimicked and the parts that can't.

I would recommend that anyone in doubt read "The Myth of Artificial Intelligence" by Erik J. Larson.

David
The following 2 users Like David001's post:
  • Brian, Sciborg_S_Patel
(2023-04-11, 10:53 PM)Sciborg_S_Patel Wrote: Epic AI Fails — A List of Failed Machine Learning Projects

Mohit Pandey

I don't understand the message you are trying to convey with this post. It seems to me that you are conflating failed commercial applications of AI with philosophical musings about AI not working as a whole. AI works well! However, the quality of training data will always be the limiting factor, much like the education of a human student in a specific discipline.
(2023-04-12, 08:15 PM)sbu Wrote: I don't understand the message you are trying to convey with this post. It seems to me that you are conflating failed commercial applications of AI with philosophical musings about AI not working as a whole. AI works well! However, the quality of training data will always be the limiting factor, much like the education of a human student in a specific discipline.

It works well but continuously fails?

It might work well in theory, but these fails are what happened in practice.

I don't see how it's like a human student at all, save by very loose analogies at best.


The following 1 user Likes Sciborg_S_Patel's post:
  • Brian
(2023-04-12, 08:25 PM)Sciborg_S_Patel Wrote: It works well but continuously fails?

It might work well in theory, but these fails are what happened in practice.

I don't see how it's like a human student at all, save by very loose analogies at best.

There are loads of successful AIs running commercial applications already. Have you ever had a video call over Microsoft Teams or Google Meet? The software that lets you replace the background is AI!

Researchers are making greater strides than ever in studying disease and developing new medicines with the emergence of AlphaFold, which can predict how a protein folds in 3D from its chain of amino acids.
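
For anyone who wants to poke at this, the AlphaFold predictions are published in a public database with a REST interface. Here's a minimal sketch - it assumes the alphafold.ebi.ac.uk /api/prediction route and its pdbUrl field are as documented, and uses human hemoglobin subunit alpha (UniProt accession P69905) purely as an example:

Code:
# Fetch AlphaFold's predicted 3D structure for one protein.
# Assumes the AlphaFold DB public REST API; P69905 (human hemoglobin
# subunit alpha) is used purely as an example accession.
import requests

UNIPROT_ID = "P69905"
resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}", timeout=30)
resp.raise_for_status()
entry = resp.json()[0]  # the API returns a list of model entries
print("model:", entry["entryId"])

# Download the predicted structure itself (PDB format).
pdb = requests.get(entry["pdbUrl"], timeout=30)
with open(f"{UNIPROT_ID}.pdb", "wb") as f:
    f.write(pdb.content)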
(2023-04-12, 08:35 PM)sbu Wrote: There are loads of successful AIs running commercial applications already. Have you ever had a video call over Microsoft Teams or Google Meet? The software that lets you replace the background is AI!

Researchers are making greater strides than ever in studying disease and developing new medicines with the emergence of AlphaFold, which can predict how a protein folds in 3D from its chain of amino acids.

Not sure why you're overselling machine "learning". These fails happened, regardless of other applications working.

While some applications work, I'm not convinced that every problem can be solved with more/better data - consider the issue of catastrophic forgetting, or even the question of what failure rate we should consider acceptable. A toy sketch of the forgetting problem is below.
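
To illustrate what catastrophic forgetting looks like, here is a toy sketch - the data and the deliberately tiny linear model are invented purely for illustration. Train on one task, then keep training on a second, and accuracy on the first collapses:

Code:
# Toy demonstration of catastrophic forgetting.
import numpy as np

rng = np.random.default_rng(0)

def make_task(label_dim, n=2000):
    # Random points in the plane; the label is the sign of one coordinate.
    X = rng.normal(size=(n, 2))
    return X, (X[:, label_dim] > 0).astype(float)

def sgd(w, X, y, lr=0.1, epochs=5):
    # Plain logistic-regression SGD (clipped to avoid overflow in exp).
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-np.clip(w @ xi, -30, 30)))
            w -= lr * (p - yi) * xi
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0).astype(float) == y).mean()

Xa, ya = make_task(0)  # task A: label = sign of the x-coordinate
Xb, yb = make_task(1)  # task B: label = sign of the y-coordinate

w = sgd(np.zeros(2), Xa, ya)
print(f"after A:  acc(A)={accuracy(w, Xa, ya):.2f}")

w = sgd(w, Xb, yb)     # keep training on task B only
print(f"after B:  acc(A)={accuracy(w, Xa, ya):.2f}  acc(B)={accuracy(w, Xb, yb):.2f}")

Accuracy on task A goes from near-perfect to roughly chance - the weights that solved it were simply overwritten, with no mechanism to notice or object.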

There are also problems like driving where it's becoming clear the technology isn't there because machine "learning" is the wrong answer.


The following 2 users Like Sciborg_S_Patel's post:
  • nbtruthman, Brian
(2023-04-12, 09:06 PM)Sciborg_S_Patel Wrote: There are also problems like driving where it's becoming clear the technology isn't there because machine "learning" is the wrong answer.

There’s no evidence for this claim. It’s your own beliefs surfacing here.
(2023-04-13, 05:43 AM)sbu Wrote: There’s no evidence for this claim. It’s your own beliefs surfacing here.

See this thread.

Also, as pseudoskeptics have continually reminded us, it's the person making the claim who has to give evidence. The null hypothesis is that machine "learning" can't do X - in this case, driving.

And even putting the shoe on my own foot: from the evidence, it seems they can't. If nothing else, they shouldn't be tested on public roads, as it's only a matter of time before driverless cars in their current state cause increasing fatalities.


The following 2 users Like Sciborg_S_Patel's post:
  • David001, Brian
(2023-04-13, 05:43 AM)sbu Wrote: There’s no evidence for this claim. It’s your own beliefs surfacing here.

I don't know if you are old enough to have experienced the first decade of AI hype, roughly 1980-1989. There are some interesting parallels to the current AI hype - but there are also differences.

Back then the main focus was on symbolic AI. There were articles warning of the imminent collapse of white-collar jobs, and there was the idea that Japan was far ahead of the rest of the world. It seemed as though one week AI was still being hyped, and the next, everyone wanted to research other things. It was really hard to produce anything of industrial strength.

This was followed by a period of rather more muted hype about the potential of networks of artificial neurons (very much simpler than the real things) - ANNs. These seem to have survived as the pattern matching that has become popular in modern AI.

There seems to be a certain parallel between ANNs and unconscious mental processing. For example, if you recognise someone's face it is almost impossible to explain how you do that task, whereas if you can differentiate an algebraic expression, or parse a natural-language sentence, you can also explain how you did it.
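
To make the contrast concrete, here is a toy rule-based differentiator (invented for illustration; real symbolic AI systems were far more elaborate). The point is that it can narrate every rule it applies - exactly the audit trail an ANN cannot give you:

Code:
# A tiny symbolic differentiator that explains each step.
# Expressions are nested tuples, e.g. ('+', ('*', 3, ('^', 'x', 2)), 'x')
# stands for 3*x**2 + x. Illustration only.

def d(e, var='x'):
    if isinstance(e, (int, float)):
        print(f"  constant rule: d({e}) = 0")
        return 0
    if e == var:
        print(f"  identity rule: d({var}) = 1")
        return 1
    op, a, b = e
    if op == '+':
        print(f"  sum rule on {e}")
        return ('+', d(a, var), d(b, var))
    if op == '*':
        print(f"  product rule on {e}")
        return ('+', ('*', d(a, var), b), ('*', a, d(b, var)))
    if op == '^' and isinstance(b, int):
        print(f"  power rule on {e}")
        return ('*', b, ('*', ('^', a, b - 1), d(a, var)))
    raise ValueError(f"no rule for {e}")

print(d(('+', ('*', 3, ('^', 'x', 2)), 'x')))  # d/dx of 3x^2 + x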

Ideally, you need a blend of the two types of inference, but symbolic AI seemed to hit insuperable barriers that were not fixable by throwing more resources at the problem. Conversely, ANNs can improve as more resources become available, but then you run into another problem: ANNs are trained on real-world patterns, so the answers to questions such as whether the object in front is a bicycle, and whether it is being ridden, come back as real numbers that depend on exactly which training patterns the network was exposed to. In activities such as driving, we probably use a lot of unconscious pattern matching, blended with a more symbolic type of thought as needed.

Continuing with that example, a human can probably apply some more abstract analysis to what they see ahead. For example, a person who is wobbling on a bike represents a more serious risk than one riding smoothly, and someone riding on the back wheel only is a severe risk even if they are performing smoothly. A crude sketch of that blend follows.
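
Here every class name, score and threshold is invented for illustration: a network-style perception stage hands back confidences, and a symbolic layer applies explicit, explainable risk rules on top:

Code:
# Toy hybrid: ANN-style confidences feeding symbolic risk rules.

def perception_stage(frame):
    # Stand-in for a neural network: per-object confidences in [0, 1].
    # (Hard-coded fake scores for a single imaginary frame.)
    return {"bicycle": 0.94, "rider_present": 0.88,
            "wobbling": 0.71, "wheelie": 0.05}

def risk_rules(s):
    # Symbolic layer: human-readable rules over the network's outputs.
    if s["bicycle"] < 0.5:
        return "low", "no bicycle detected"
    if s["wheelie"] > 0.5:
        return "severe", "rider on back wheel only"
    if s["wobbling"] > 0.6:
        return "high", "rider appears unsteady"
    return "moderate", "bicycle riding smoothly"

level, reason = risk_rules(perception_stage(frame=None))
print(f"risk={level} ({reason})")  # unlike the net, the rule layer can say why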

David
The following 3 users Like David001's post:
  • nbtruthman, Brian, Sciborg_S_Patel
I don't know if this is the right thread for this.

This Montreal newspaper tested ChatGPT on the provincial bar (law) exam. It scored 12%!

Article translated here:
https://www-lapresse-ca.translate.goog/a..._tr_pto=sc
Quote:Generalities, window dressing and “completely false” information: the chatbot failed a test conducted by La Presse at the École du Barreau
The following 4 users Like Ninshub's post:
  • nbtruthman, David001, Sciborg_S_Patel, Brian
