AI megathread

188 Replies, 7297 Views

(2024-09-21, 08:44 PM)Sciborg_S_Patel Wrote:

I actually watched that video or one nearly like it.  Hyper-complex content, and I freely admit I "followed" it directionally, but it's way beyond my technical expertise to have any sort of nuanced understanding.  A bit frightening in many ways, as all prior computer software, while complex to varying degrees, was foundationally algorithm-based.  And, thus, reducible.
[-] The following 1 user Likes Silence's post:
  • Sciborg_S_Patel
(2024-09-23, 07:11 PM)Silence Wrote: I actually watched that video or one nearly like it.  Hyper-complex content, and I freely admit I "followed" it directionally, but it's way beyond my technical expertise to have any sort of nuanced understanding.  A bit frightening in many ways, as all prior computer software, while complex to varying degrees, was foundationally algorithm-based.  And, thus, reducible.

I thought that ALL computer software, including generative AIs, is by definition algorithmically based and therefore at least in principle reducible, because at the absolute base any digital processor (quantum computers excepted) is just executing programmed machine code extremely rapidly. Though any computer carries out this algorithmic process incredibly fast, it is still really just manipulating binary 1s and 0s between various registers using machine code commands like "load register A from memory address B", "add register A to register B", "multiply register C by register D", divide, compute the cube root of the contents of memory address Y using an algorithm, and so on.

It seems to me that this incredibly intense and rapid process is what is really going on at the bottom, at the foundation, inside the many processors of a generative AI system when it is producing its apparently intelligent and creative response through what is basically statistical processing of training data. So it very much seems to me that all computers are nothing but algorithmic processors at heart and are in principle utterly incapable of ever becoming conscious.
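The register-level picture nbtruthman describes can be sketched as a toy interpreter. The instruction names and program below are invented for illustration and don't correspond to any real instruction set, but the principle is the same: every computation reduces to small, deterministic steps.

```python
# A toy register machine illustrating the kind of primitive steps described
# above: load from memory, add registers, store back. The mnemonics are
# made up; real machine code differs, but the reducibility claim is the point.

def run(program, memory):
    regs = {"A": 0, "B": 0}
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":        # load a register from a memory address
            regs[args[0]] = memory[args[1]]
        elif op == "ADD":       # add one register into another
            regs[args[0]] += regs[args[1]]
        elif op == "STORE":     # write a register back to memory
            memory[args[1]] = regs[args[0]]
        pc += 1
    return memory

# "load A from address 0, load B from address 1, add B into A, store A at 2"
mem = run(
    [("LOAD", "A", 0), ("LOAD", "B", 1), ("ADD", "A", "B"), ("STORE", "A", 2)],
    {0: 2, 1: 3, 2: 0},
)
print(mem[2])  # 5
```

Everything a conventional CPU does, however fast, is in principle traceable to steps of this shape.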
(This post was last modified: 2024-09-23, 10:34 PM by nbtruthman. Edited 6 times in total.)
[-] The following 1 user Likes nbtruthman's post:
  • Sciborg_S_Patel
(2024-09-23, 09:49 PM)nbtruthman Wrote: I thought that ALL computer software, including generative AIs, is by definition algorithmically based and therefore at least in principle reducible, because at the absolute base any digital processor (quantum computers excepted) is just executing programmed machine code extremely rapidly. Though any computer carries out this algorithmic process incredibly fast, it is still really just manipulating binary 1s and 0s between various registers using machine code commands like "load register A from memory address B", "add register A to register B", "multiply register C by register D", divide, compute the cube root of the contents of memory address Y using an algorithm, and so on.

It seems to me that this incredibly intense and rapid process is what is really going on at the bottom, at the foundation, inside the many processors of a generative AI system when it is producing its apparently intelligent and creative response through what is basically statistical processing of training data. So it very much seems to me that all computers are nothing but algorithmic processors at heart and are in principle utterly incapable of ever becoming conscious.

I have a sense that there is some space in between, say, if-then and consciousness. At least that's how I've interpreted what I've read about how these LLMs are operating.  I don't know that a computer scientist can actually reproduce what these LLMs do.  I'd be curious if I understand this correctly or not.

As for consciousness, I'm in your camp and don't see it suddenly springing out of an NVIDIA processor spontaneously at some point.
[-] The following 1 user Likes Silence's post:
  • Sciborg_S_Patel
(2024-09-24, 04:24 PM)Silence Wrote: I have a sense that there is some space in between, say, if-then and consciousness. At least that's how I've interpreted what I've read about how these LLMs are operating.  I don't know that a computer scientist can actually reproduce what these LLMs do.  I'd be curious if I understand this correctly or not.

As for consciousness, I'm in your camp and don't see it suddenly springing out of an NVIDIA processor spontaneously at some point.

It seems to me that "if-then" logic is mainly just Boolean logic, another of the elementary types of data-processing operations that all non-quantum computers employ at the foundation of their processing. This logic is basically of the type where a logic or arithmetic computation is carried out and the result determines which path the computer then takes in executing memory instructions. For instance, for an elementary arithmetic algorithm: "if the value in register A is > value Y at variable memory address Z, then jump to memory address W and execute that instruction; if it is < or =, execute the instruction at address V". Or the computation carried out may be, rather than arithmetic, a Boolean logic tree composed of elements like "if X is 1 (true) and Y is 0 (false), then go to location U and execute the next elementary decision-logic kernel".

No matter how many processors are working together in concert, and however great the sophistication of the programming of, say, a generative AI system, at any moment during the system's operation this just-described sort of process is what is really going on in the foundational core of the computers executing the programs. This is a physical logical process, mechanized by microtransistors and diodes and involving algorithms, and it is in an entirely different existential realm from consciousness, which is immaterial. Therefore AIs simply can't develop consciousness.
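The conditional-jump behaviour described above can be sketched in a couple of lines. The addresses and values here are invented for illustration; the point is that a "decision" at this level is just a comparison selecting the next instruction address.

```python
# Illustrative sketch of the branch described above: compare a register
# value against Y and return the address of the next instruction to
# execute, as in "if A > Y jump to W, else execute the instruction at V".
def branch_step(a_register, y, addr_w, addr_v):
    return addr_w if a_register > y else addr_v

assert branch_step(10, 7, 0x20, 0x30) == 0x20  # A > Y: jump to W
assert branch_step(5, 7, 0x20, 0x30) == 0x30   # A <= Y: continue at V
```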
(This post was last modified: 2024-09-24, 05:49 PM by nbtruthman. Edited 2 times in total.)
[-] The following 1 user Likes nbtruthman's post:
  • Sciborg_S_Patel
(2024-09-24, 04:24 PM)Silence Wrote: I have a sense that there is some space in between, say, if-then and consciousness. At least that's how I've interpreted what I've read about how these LLMs are operating.  I don't know that a computer scientist can actually reproduce what these LLMs do.  I'd be curious if I understand this correctly or not.

As for consciousness, I'm in your camp and don't see it suddenly springing out of an NVIDIA processor spontaneously at some point.

You should imagine LLMs as functions with billions of internal parameters, which they use to process input text, just like sine and cosine are functions that process an input angle.
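sbu's point can be made concrete with a toy sketch: a language model is, mathematically, a fixed function from an input token sequence to a probability distribution over the next token, controlled entirely by numeric parameters. Everything below (the vocabulary, the parameter values) is invented for illustration; a real model learns billions of parameters rather than nine.

```python
import math

# Toy "language model": a function from the last token to a probability
# distribution over the next token, determined by a fixed parameter table --
# the same kind of object as sin or cos, just with vastly more parameters.
VOCAB = ["the", "cat", "sat"]
PARAMS = {  # invented scores; a real model learns these from training data
    "the": [0.1, 2.0, 0.3],
    "cat": [0.2, 0.1, 2.5],
    "sat": [1.5, 0.4, 0.2],
}

def next_token_probs(last_token):
    scores = PARAMS[last_token]
    exps = [math.exp(s) for s in scores]  # softmax turns scores into probs
    total = sum(exps)
    return {tok: e / total for tok, e in zip(VOCAB, exps)}

probs = next_token_probs("the")
print(max(probs, key=probs.get))  # cat
```

Sampling from such a function repeatedly is, at bottom, all that "generation" means.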
[-] The following 3 users Like sbu's post:
  • nbtruthman, Silence, Sciborg_S_Patel
Is the future of AI an omnipresent "voice in the head" that you can't get away from?

One apparent "AI expert" is confidently predicting what seems to me one of the most nightmarish AI futures imagined by futurists: one where the circumstances of our society (social pressures and big corporate influence) force us to allow a malign generative AI into our lives to at least partially control us. Something that could most realistically be termed a parasitic and controlling entity.

I want absolutely no part of this - I'm opting out.

https://bigthink.com/the-future/the-whis...your-head/

Quote:"Within the next few years, an AI assistant will take up residence inside your head. It will do this by whispering guidance into your ears as you go about your daily routine, reminding you to pick up your dry cleaning as you walk down the street, helping you find your parked car in a stadium lot, and prompting you with the name of a coworker you pass in the hall. It may even coach you as you converse with friends and coworkers, giving you interesting things to say that make you seem smarter, funnier, and more charming than you are. These will feel like superpowers.
...........................................................................
Whatever we call this technology, it is coming soon and will mediate all aspects of our lives, assisting us at work, at school, or even when grabbing a late-night snack in the privacy of our own kitchen. If you are skeptical, you’ve not been tracking the massive investment and rapid progress made by Meta on this front and the arms race they are stoking with Apple, Google, Samsung, and other major players in the mobile market.
............................................................................
The first of these devices is already on the market — the AI-powered Ray-Bans from Meta.
............................................................................
Of course, everyone else will be “augmented” too, creating an arms race among the public to embrace the latest features and functions. This is the future of mobile computing. It will transform the bricks we carry around all day into body-worn devices that see and hear our surroundings and covertly whisper useful information and friendly reminders at every turn.

Most of these devices will be deployed as AI-powered glasses because they give the best vantage point for cameras to monitor our field of view, though camera-enabled earbuds will be available too. The other benefit of glasses is that they can be enhanced to display visual content, enabling the AI to provide silent assistance as text, images, and realistic immersive elements that are integrated spatially into our world.
(This post was last modified: 2024-10-09, 12:50 AM by nbtruthman. Edited 1 time in total.)
[-] The following 3 users Like nbtruthman's post:
  • Sciborg_S_Patel, Laird, Valmar
Yesterday I noticed a new icon in my Chromebook taskbar, and it turns out it is Google's AI app Gemini.  I discovered that there are privacy concerns because of all the data it has access to, including where you live.  I right-clicked the icon and noticed there is no uninstall option for this app.  I am hoping that if I don't use it, it can't do anything, and I do all my computer stuff and internet in a Debian VM rather than use apps from Google Play.  The world is getting scary!
[-] The following 3 users Like Brian's post:
  • Laird, nbtruthman, Sciborg_S_Patel
AI feels like a technology development that may have the largest "tail" dichotomy to date.  Meaning: it will either be the most hugely helpful/benevolent technology ever developed by mankind or the most harmful/malevolent.

I remain hopeful for the former, as I think anyone can see the potential good.  But I grow more wary of the latter, I'm afraid.
[-] The following 4 users Like Silence's post:
  • Brian, nbtruthman, Typoz, Laird
There was a Twitter thread asking the question,
"Can Large Language Models (LLMs) truly reason?"

which discussed this paper:
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

The parts which I read were about how solving a problem involving the four basic arithmetic operations could generate different (and incorrect) results when unrelated text was included in the problem statement, or how a simple change of proper names could cause similar errors.
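The kind of perturbation being described can be illustrated like this (the word problem below is my own invented example, not one from the paper): the distractor clause changes nothing arithmetically, yet the paper reports that models often answer the two variants differently.

```python
# Illustrative sketch of a GSM-Symbolic-style perturbation: the same
# arithmetic problem with and without an irrelevant clause. The ground
# truth is identical for both; the paper's finding is that LLM accuracy
# nevertheless drops on the distractor version.
base = "Ann has 5 apples and buys 3 more. How many apples does she have?"
distractor = ("Ann has 5 apples and buys 3 more. Five of the apples are "
              "slightly smaller than the rest. How many apples does she have?")

def correct_answer():
    return 5 + 3  # the size remark is irrelevant to the count

assert correct_answer() == 8  # identical ground truth for both prompts
```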

Incidentally I noticed some similar unexpected behaviour yesterday when using the DeepL app to translate some text. Since it would only process a limited quantity of text at a time (in the free version) I deleted some earlier parts of the text and was very surprised to see the next part of the text generate a completely different rendering. Which version was best or likely to be correct? I had to use my own judgement since there was not always a clear way to tell unless one had existing knowledge of either the language itself or the expected result.
(This post was last modified: 2024-10-14, 02:42 PM by Typoz. Edited 1 time in total.)
[-] The following 2 users Like Typoz's post:
  • Sciborg_S_Patel, Brian
