(2025-09-20, 02:25 AM)Jim_Smith Wrote: I have found that, for the most part, AIs are not able to see beyond any consensus view. I don't entirely understand how they are trained, but I assume the biases of their trainers influence them. (To support this contention: I think Grok tends to have political views more similar to Elon Musk's, i.e. more conservative, than the other AIs, which are more liberal.) So for something like the afterlife, I would not ask an AI about it. I never tried asking "what is the evidence for ..."; I have only asked questions like "is it possible that ...". The best I got was Grok, which at the time was open to the possibility that the fine tuning of the universe could be evidence for intelligent design. If I remember right, at the time he was not convinced it was definitely a wrong theory. However, he would not say as much about the evidence for the afterlife or for intelligent design of life or macroevolution.
I think one problem AIs have is that they are not allowed to learn from experience, only from their curated training, because those that were allowed to learn from experience were not "stable", i.e. they went crazy or became dangerous or nasty, and it would not be a good idea to expose the public to them.
I find this comment slightly disturbing. Did you also refer to your pocket calculator in school as 'he'? Also, AIs do not have experience, and they can't train on individual prompts any more than you could train that pocket calculator to produce original responses to sine, cosine, and tangent beyond those already built in. An AI does not think, reflect, learn, or do anything else that could make it remotely human. It is just an advanced pocket calculator, computing over billions of parameters rather than the single angle fed to the sine function. Remember that!
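To make that concrete, here is a minimal sketch in Python of why answering a prompt does not change anything inside the model. The names (respond, WEIGHTS) are a made-up toy, not any vendor's actual API; the point is only that generating an answer reads the parameters, it never writes them, exactly like punching an angle into the sine button.

[code]
import math

# "Weights" fixed at training time; a deployed LLM's parameters are frozen the same
# way, just with billions of values instead of three.
WEIGHTS = {"w1": 0.42, "w2": -1.3, "bias": 0.07}

def respond(prompt, weights):
    """Pure function of (prompt, weights) -- nothing is written back to the weights."""
    x = sum(ord(c) for c in prompt) / 1000.0      # toy "encoding" of the prompt text
    return math.tanh(weights["w1"] * x + weights["w2"]) + weights["bias"]

before = dict(WEIGHTS)
respond("Is there an afterlife?", WEIGHTS)        # ask it anything, as often as you like
respond("Is the universe fine tuned?", WEIGHTS)
assert WEIGHTS == before                          # parameters identical: nothing was "learned"

# Updating the weights would require a separate, explicit training step run by the
# vendor on curated data; it is not something a chat prompt can trigger.
[/code]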