AI megathread

195 Replies, 7746 Views

(2024-09-21, 08:44 PM)Sciborg_S_Patel Wrote:

I actually watched that video or one nearly like it.  Hyper-complex content, and I freely admit I "followed" it directionally, but it's way beyond my technical expertise to have any sort of nuanced understanding.  A bit frightening in many ways, as all prior computer software, while complex to varying degrees, was foundationally algorithm-based.  And, thus, reducible.
[-] The following 1 user Likes Silence's post:
  • Sciborg_S_Patel
(2024-09-23, 07:11 PM)Silence Wrote: I actually watched that video or one nearly like it.  Hyper-complex content, and I freely admit I "followed" it directionally, but it's way beyond my technical expertise to have any sort of nuanced understanding.  A bit frightening in many ways, as all prior computer software, while complex to varying degrees, was foundationally algorithm-based.  And, thus, reducible.

I thought that ALL computer software, including generative AIs, is by definition algorithmically based and therefore at least in principle reducible, because at the absolute base any possible digital processor (except for quantum computers) is just executing programmed machine code extremely rapidly. Though any computer performs this algorithmic process incredibly fast, it is really just manipulating binary 1s and 0s between various registers, using machine code to command things like "load register A from memory address B", "add register A to register B", "multiply register C by register D", divide, compute the 3rd root of the contents of memory address Y using an algorithm, etc., etc.

It seems to me that this incredibly intense and rapid process is what is really going on at the bottom, at the foundation, inside a generative AI's many processors when it is producing its apparently intelligent and creative responses through what is basically statistical processing of training data. So it very much seems that all computers are nothing but algorithmic processors at heart and are in principle utterly incapable of ever becoming conscious.
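The register-level operations described above can be sketched in a few lines of code. This is a toy register machine, purely for illustration: the instruction set, register names, and memory layout are invented, not those of any real CPU.

```python
# Toy register machine: a minimal sketch of the elementary operations
# described above ("load register A from memory address B", "add register
# A to register B", etc.). Invented instruction set, for illustration only.

def run(program, memory):
    """Execute a list of (opcode, *operands) tuples against a register file."""
    regs = {"A": 0, "B": 0, "C": 0, "D": 0}
    for instr in program:
        op = instr[0]
        if op == "LOAD":            # load a register from a memory address
            _, reg, addr = instr
            regs[reg] = memory[addr]
        elif op == "ADD":           # add one register into another
            _, dst, src = instr
            regs[dst] += regs[src]
        elif op == "MUL":           # multiply one register by another
            _, dst, src = instr
            regs[dst] *= regs[src]
        elif op == "STORE":         # write a register back to memory
            _, reg, addr = instr
            memory[addr] = regs[reg]
    return regs, memory

memory = {0: 6, 1: 7}
program = [("LOAD", "A", 0), ("LOAD", "B", 1),
           ("ADD", "A", "B"), ("STORE", "A", 2)]
regs, memory = run(program, memory)
print(memory[2])  # 13
```

However fast and however parallel, everything a conventional processor does reduces to sequences of steps of exactly this shape.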
(This post was last modified: 2024-09-23, 10:34 PM by nbtruthman. Edited 6 times in total.)
[-] The following 1 user Likes nbtruthman's post:
  • Sciborg_S_Patel
(2024-09-23, 09:49 PM)nbtruthman Wrote: I thought that ALL computer software, including generative AIs, is by definition algorithmically based and therefore at least in principle reducible, because at the absolute base any possible digital processor (except for quantum computers) is just executing programmed machine code extremely rapidly. Though any computer performs this algorithmic process incredibly fast, it is really just manipulating binary 1s and 0s between various registers, using machine code to command things like "load register A from memory address B", "add register A to register B", "multiply register C by register D", divide, compute the 3rd root of the contents of memory address Y using an algorithm, etc., etc.

It seems to me that this incredibly intense and rapid process is what is really going on at the bottom, at the foundation, inside a generative AI's many processors when it is producing its apparently intelligent and creative responses through what is basically statistical processing of training data. So it very much seems that all computers are nothing but algorithmic processors at heart and are in principle utterly incapable of ever becoming conscious.

I have a sense that there is some space in between, say, if-then and consciousness. At least that's how I've interpreted what I've read about how these LLMs are operating.  I don't know that a computer scientist can actually reproduce what these LLMs do.  I'd be curious if I understand this correctly or not.

As for consciousness, I'm in your camp and don't see it suddenly springing out of an NVIDIA processor spontaneously at some point.
[-] The following 1 user Likes Silence's post:
  • Sciborg_S_Patel
(2024-09-24, 04:24 PM)Silence Wrote: I have a sense that there is some space in between, say, if-then and consciousness. At least that's how I've interpreted what I've read about how these LLMs are operating.  I don't know that a computer scientist can actually reproduce what these LLMs do.  I'd be curious if I understand this correctly or not.

As for consciousness, I'm in your camp and don't see it suddenly springing out of an NVIDIA processor spontaneously at some point.

It seems to me that "if-then" logic is mainly just Boolean logic, another of the elementary types of data-processing algorithms that all non-quantum computers employ at the foundation of their processing. This logic is basically of the type where a logic or arithmetic computation is carried out and the result determines what path the computer will then take in executing memory instructions. For instance, for an elementary arithmetic algorithm: "if the A register value is > value Y in variable memory address Z, then jump to memory address W and execute that instruction. If it is < or =, execute the instruction at address V". Or the computation carried out may be, rather than arithmetic, a Boolean logic tree consisting of a chain or tree of elements like "if X is 1 (true) and Y is 0 (false), then go to location U and execute that next elementary decision logic kernel".

No matter how many processors are working together in concert, and however great the sophistication of the programming of a generative AI system, at any moment during the AI system's operation the sort of process just described is what is really going on in the foundational core of the computers executing the programs. This is a physical logical process, mechanized by microtransistors and diodes and involving algorithms, and it is in an entirely different existential realm from consciousness, which is immaterial. Therefore AIs simply can't develop consciousness.
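The conditional-jump behaviour described above can also be sketched concretely: "if-then" at the machine level is just a computation plus a change of the next instruction address. Again a toy machine with an invented instruction set, for illustration only.

```python
# Sketch of machine-level branching: a program counter (pc) holds the index
# of the next instruction, and a conditional jump sets it based on a
# comparison, exactly the "if A > Y then jump to W" pattern described above.

def run(program, regs):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":             # regs[r] = constant
            r, v = args
            regs[r] = v
            pc += 1
        elif op == "ADD":           # regs[r] += constant
            r, v = args
            regs[r] += v
            pc += 1
        elif op == "JGT":           # if regs[r] > v, jump to instruction t
            r, v, t = args
            pc = t if regs[r] > v else pc + 1
        elif op == "HALT":
            break
    return regs

# Count register A up to 5 using only arithmetic and conditional jumps.
program = [
    ("SET", "A", 0),      # 0: A = 0
    ("ADD", "A", 1),      # 1: A += 1
    ("JGT", "A", 4, 4),   # 2: if A > 4, jump to HALT
    ("JGT", "A", -1, 1),  # 3: A > -1 always, so jump back to step 1
    ("HALT",),            # 4
]
print(run(program, {"A": 0}))  # {'A': 5}
```

Loops, decision trees, and every higher-level control structure compile down to this one mechanism.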
(This post was last modified: 2024-09-24, 05:49 PM by nbtruthman. Edited 2 times in total.)
[-] The following 1 user Likes nbtruthman's post:
  • Sciborg_S_Patel
(2024-09-24, 04:24 PM)Silence Wrote: I have a sense that there is some space in between, say, if-then and consciousness. At least that's how I've interpreted what I've read about how these LLMs are operating.  I don't know that a computer scientist can actually reproduce what these LLMs do.  I'd be curious if I understand this correctly or not.

As for consciousness, I'm in your camp and don't see it suddenly springing out of an NVIDIA processor spontaneously at some point.

You should imagine LLMs as functions with billions of internal parameters, which they use to process input text, just like sine and cosine are functions that process an input angle.
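The sine analogy above can be made concrete with a toy example. The "model" below is a hypothetical three-parameter function, not any real LLM architecture; it only illustrates that, like sine, a trained network is a fixed, deterministic function of its inputs, just one with vastly more parameters.

```python
import math

def sine(angle):
    # a fixed function of one input, with no learned parameters
    return math.sin(angle)

def toy_model(params, features):
    # the basic shape of one neural unit: weighted sum plus squashing.
    # A real LLM composes billions of parameters this way, but is still
    # just a function from input text to output scores.
    z = sum(w * x for w, x in zip(params, features))
    return 1 / (1 + math.exp(-z))   # score between 0 and 1

params = [0.4, -1.2, 0.7]           # a real LLM has billions of these
score = toy_model(params, [1.0, 0.5, 2.0])
print(score)                        # deterministic, like sin(x)
```

Same inputs, same parameters, same output every time; the apparent open-endedness comes from the scale of the parameters and the sampling applied on top, not from anything beyond function evaluation.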
[-] The following 3 users Like sbu's post:
  • nbtruthman, Silence, Sciborg_S_Patel
Is the future of AI an omnipresent "voice in the head" that you can't get away from?

One apparent "AI expert" is confidently predicting what seems to me one of the most nightmarish AI futures yet envisioned by futurists, in which the circumstances of our society (social pressures and big corporate influences) force us to allow a malign generative AI influence to enter our lives and at least partially control us. Something that could most realistically be termed a parasitic and controlling entity.

I want absolutely no part of this - I'm opting out.

https://bigthink.com/the-future/the-whis...your-head/

Quote:"Within the next few years, an AI assistant will take up residence inside your head. It will do this by whispering guidance into your ears as you go about your daily routine, reminding you to pick up your dry cleaning as you walk down the street, helping you find your parked car in a stadium lot, and prompting you with the name of a coworker you pass in the hall. It may even coach you as you converse with friends and coworkers, giving you interesting things to say that make you seem smarter, funnier, and more charming than you are. These will feel like superpowers.
...........................................................................
Whatever we call this technology, it is coming soon and will mediate all aspects of our lives, assisting us at work, at school, or even when grabbing a late-night snack in the privacy of our own kitchen. If you are skeptical, you’ve not been tracking the massive investment and rapid progress made by Meta on this front and the arms race they are stoking with Apple, Google, Samsung, and other major players in the mobile market.
............................................................................
The first of these devices is already on the market — the AI-powered Ray-Bans from Meta.
............................................................................
Of course, everyone else will be “augmented” too, creating an arms race among the public to embrace the latest features and functions. This is the future of mobile computing. It will transform the bricks we carry around all day into body-worn devices that see and hear our surroundings and covertly whisper useful information and friendly reminders at every turn.

Most of these devices will be deployed as AI-powered glasses because they give the best vantage point for cameras to monitor our field of view, though camera-enabled earbuds will be available too. The other benefit of glasses is that they can be enhanced to display visual content, enabling the AI to provide silent assistance as text, images, and realistic immersive elements that are integrated spatially into our world.
(This post was last modified: 2024-10-09, 12:50 AM by nbtruthman. Edited 1 time in total.)
[-] The following 3 users Like nbtruthman's post:
  • Sciborg_S_Patel, Laird, Valmar
Yesterday I noticed a new icon in my Chromebook taskbar, and it turns out it is Google's AI app Gemini.  I discovered that there are privacy concerns because of all the data it has access to, including where you live.  I right-clicked the icon and noticed there is no uninstall option for this app.  I am hoping that if I don't use it, it can't do anything, and I do all my computer work and internet browsing in a Debian VM rather than using apps from Google Play.  The world is getting scary!
[-] The following 3 users Like Brian's post:
  • Laird, nbtruthman, Sciborg_S_Patel
AI feels like a technology whose development may have the largest "tail" dichotomies to date. Meaning: it will be either the most hugely helpful and benevolent technology ever developed by mankind, or the most harmful and malevolent.

I remain hopeful for the former, as I think anyone can see the potential good.  But I grow more wary of the latter, I'm afraid.
[-] The following 4 users Like Silence's post:
  • Brian, nbtruthman, Typoz, Laird
There was a Twitter thread asking the question,
"Can Large Language Models (LLMs) truly reason?"

which discussed this paper:
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

The parts which I read were about how solving a problem involving the four basic arithmetic operations could generate different (and incorrect) results when unrelated text was included in the problem statement, or how a simple change of proper names could cause similar errors.

Incidentally, I noticed some similar unexpected behaviour yesterday when using the DeepL app to translate some text. Since it would only process a limited quantity of text at a time (in the free version), I deleted some earlier parts of the text and was very surprised to see the next part generate a completely different rendering. Which version was best or most likely to be correct? I had to use my own judgement, since there was not always a clear way to tell unless one had existing knowledge of either the language itself or the expected result.
(This post was last modified: 2024-10-14, 02:42 PM by Typoz. Edited 1 time in total.)
[-] The following 3 users Like Typoz's post:
  • Laird, Sciborg_S_Patel, Brian
Michael Tymn recently conducted a very interesting experiment with a generative AI system. He conducted an "interview" where he asked the AI system a series of very important questions regarding the evidence for survival of death, and how it would explain some notable relevant experiments in this area conducted by a famous psychical researcher in the past.  To answer these questions well and thoroughly seemingly requires a lot of intelligent unbiased thinking, the ability to generalize, and the ability to filter out all the strident voices of scientism proclaiming the supposed truth of materialism and the impossibility of an afterlife.

http://whitecrowbooks.com/michaeltymn/en...fter_death

Frankly, I was impressed by the quality of a number of the answers. These answers I think generally made reasonable sense, seemed knowledgeable, and amazingly were quite even-handed and unbiased when discussing a matter so steeped in controversy. It was hard to believe this material was generated by a complex non-thinking "thing", by computer Internet data searches and execution of complicated algorithms. There was a strong impression of communication with a rational intelligent agent, I guess only showing how possible (though computationally difficult) it is to fool us. The apparently mostly unbiased, even-handed evaluations of the generative AI system are hard to explain, especially given that the Internet data it was utilizing contains so much negative and very biased material on the possibility of survival. As for instance Wiki.

The answers did become somewhat repetitive, for instance repeating comments to the effect that scientific acceptance requires repeated demonstration on demand. The AI did correctly note that this is impossible for the paranormal phenomena indicative of survival. Generally, most of the answers pointed out that the subject matter of survival and an afterlife has a certain body of evidence from paranormal phenomena like NDEs and reincarnation, but that this evidence is mostly considered anecdotal or unscientific. It never stated that this negative opinion ignores much of the data and its quality, but the AI repeated the comment that it is controversial and the data is much questioned by science.

The AI system never stated (as I think it would be expected to state) the general prevailing scientistic conviction and "party line" that the subject matter is wish-fulfilling superstition and imagination. You would think that this last answer would be automatically gleaned by the AI from the very extensive skeptical and closed-minded material on survival and an afterlife on the Internet, where paranormal proponents are a small minority.

It occurs to me that it is almost as if the creators of this system deliberately set up pragmatic rules for the AI so that it would answer questions on controversial subjects in such a way as not to take a stand one way or the other. So as not to upset too many people?

Also, one question revealed the dreaded AI "hallucination" phenomenon: a question deliberately citing a nonexistent past experiment was accepted as true, followed by a comment on this nonexistent experiment.

Notably, toward the end, in its conversational manner the AI even asked Tymn for his own opinions on the subject. 

Here are 3 sample questions and answers from the extensive "interview":


Quote:MET: Is there evidence that consciousness survives death?

AI:  “The question of life after death has intrigued humanity for centuries, but scientific evidence remains inconclusive.  Most scientific studies focus on phenomena like near-death experiences (NDEs), which some interpret as evidence of an afterlife.  However, these experiences can often be explained through neurological and psychological factors, such as brain activity during trauma or altered states of consciousness.
.............................................................

MET: I understand that the evidence does not provide absolute certainty, but isn’t there at least a preponderance of evidence resulting from psychical research and parapsychology?

AI: “Psychical research and parapsychology do present various studies and anecdotal evidence that some interpret as supporting the possibility of life after death, such as mediumship, reincarnation cases, and certain phenomena reported during NDEs. However, these fields face significant criticism regarding their methodologies, replicability, and scientific rigor.

“Many skeptics argue that the evidence is often anecdotal or circumstantial, lacking the rigorous standards required for scientific validation. For example, while some studies claim to document instances of reincarnation, critics point out issues like lack of control, potential for bias, and alternative explanations.

“Overall, while there are intriguing reports and some researchers advocate for further exploration of these topics, the majority of the scientific community remains cautious, emphasizing the need for more robust, replicable evidence. Thus, the preponderance of evidence is still debated and not widely accepted as definitive proof of life after death.”

..............................................................

MET:  Do you think we’ll ever have conclusive or definitive evidence amounting to scientific proof of consciousness surviving death?

AI: “The question of consciousness surviving death is a deeply complex and contentious one. While some researchers and practitioners in fields like parapsychology and near-death studies report anecdotal evidence suggesting continuity of consciousness after death, definitive scientific proof remains elusive.

“Scientific inquiry relies on repeatable, observable phenomena, and the nature of consciousness and experiences surrounding death often resist such straightforward investigation. Advances in neuroscience might deepen our understanding of consciousness itself, but whether this will lead to proof of its survival after death is uncertain.

“It’s a profound area of exploration that touches on philosophy, spirituality, and science. Many people have differing beliefs about it, influenced by personal experiences, cultural backgrounds, and scientific perspectives.”
(This post was last modified: 2024-11-05, 11:54 PM by nbtruthman. Edited 5 times in total.)
