AI megathread


The World Reacts to OpenAI's Unveiling of o3!

The most impressive result to me is that o3 achieved 25% on the FrontierMath problem set, a collection of problems apparently very hard even for Fields Medalists, some requiring hours or even days to solve. The previous record achieved by an AI was only 2%.
Here is a scientist who uses AI to count particular types of galaxies:

https://www.youtube.com/watch?v=EUrOxh_0leE

I think she makes an interesting point.

David
Here lies the internet, murdered by generative AI

Erik Hoel

Quote:Corruption everywhere, even in YouTube's kids content

Quote:This isn’t what everyone feared, which is AI replacing humans by being better—it’s replacing them because AI is so much cheaper. Sports Illustrated was not producing human-quality content with these methods, but it was still profitable.

Quote:Sadly, the people affected the most by generative AI are the ones who can’t defend themselves. Because they don’t even know what AI is. Yet we’ve abandoned them to swim in polluted information currents. I’m talking, unfortunately, about toddlers. Because let me introduce you to…

the hell that is AI-generated children’s YouTube content.

YouTube for kids is quickly becoming a stream of synthetic content. Much of it now consists of wooden digital characters interacting in short nonsensical clips without continuity or purpose. Toddlers are forced to sit and watch this runoff because no one is paying attention. And the toddlers themselves can’t discern that characters come and go and that the plots don’t make sense and that it’s all just incoherent dream-slop. The titles don’t match the actual content, and titles are all the parents likely check, because they grew up in a culture where, if a YouTube video said BABY LEARNING VIDEOS and had a million views, it was likely okay. Now, some of the nonsense AI-generated videos aimed at toddlers have tens of millions of views.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


Humans in the Loop

Lesley Stahl

Quote:The familiar narrative is that artificial intelligence will take away human jobs: machine-learning will let cars, computers and chatbots teach themselves - making us humans obsolete. 

Well, that's not very likely, and we're gonna tell you why. There's a growing global army of millions toiling to make AI run smoothly. They're called "humans in the loop:" people sorting, labeling, and sifting reams of data to train and improve AI for companies like Meta, OpenAI, Microsoft and Google.

Quote:The workers told us they were tricked into this work by ads like this that described these jobs as "call center agents" to "assist our clients' community and help resolve inquiries empathetically." 

Fasica: I was told I was going to do a translation job.

Lesley Stahl: Exactly what was the job you were doing?

Fasica: I was basically reviewing content which are very graphic, very disturbing contents. I was watching dismembered bodies or drone attack victims. You name it. You know, whenever I talk about this, I still have flashbacks.

Lesley Stahl: Are any of you a different person than you were before you had this job?

Fasica: Yeah. I find it hard now to even have conversations with people. It's just that I find it easier to cry than to speak.

Nathan: You continue isolating you-- yourself from people. You don't want to socialize with others. It's you and it's you alone.

Lesley Stahl: Are you a different person?


Shifting the discussion from the 3 Substances in 3 Environments thread into this more appropriate one:

(2024-12-23, 05:51 PM)Sciborg_S_Patel Wrote: I recall learning a lot of the tricks in my Computational Linguistics class. Once you get a sense of how the magic trick works it just becomes less impressive?

Or rather I appreciate the effort that goes into the trick, while not believing the magic?

(2024-12-23, 10:25 PM)Valmar Wrote: If you don't understand the basics of how LLMs are programmed, then it will appear unfathomable.

You need to think of LLMs as effectively a very fancy next-word predictor, because that is effectively what they boil down to, despite the apologism from those lost in the hype. Yes, the algorithms can be very fancy, but it's still nothing more than that ~ a fancy, dumb algorithm operating on inputs and going through a database.

Don't let the metaphors confuse you.
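For what it's worth, the "next-word predictor" loop itself really is that simple. A toy sketch below, where a hand-written probability table stands in for the billions of learned parameters (real models also condition on the whole context, not just the last word; none of this is any actual model's code):

```python
import random

# Toy "next-word predictor". The table below is an illustrative
# stand-in for a trained model's parameters.
NEXT_WORD = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.6), ("sat", 0.4)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def sample_next(word):
    """Sample the next word from the model's probability distribution."""
    candidates = NEXT_WORD.get(word)
    if candidates is None:
        return None  # no known continuation: stop generating
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_words=10):
    """The autoregressive loop: predict a word, append it, repeat."""
    output = [prompt]
    while len(output) < max_words:
        nxt = sample_next(output[-1])
        if nxt is None:
            break
        output.append(nxt)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat down"
```

The loop is trivial; whatever is surprising lives in the learned weights that this table is standing in for.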

As laymen all, I don't expect that my understanding of LLMs is any more impoverished than yours, but I still think that their performance on tasks which for humans would require deep understanding is very unexpected.

Consider in particular that, as I posted above, OpenAI's new, yet-to-be-released o3 model achieved 25% on the FrontierMath problem set: problems apparently very hard even for Fields Medalists, some requiring hours or even days to solve.

Think about that. The best mathematicians in the world have to deeply understand and think hard about how to solve these problems. This is not the sort of "trick" where an AI is paraphrasing and regurgitating insights that it's hoovered up from the web. Nor is it like Mathematica or MATLAB, where the algorithmic solutions to problems are preprogrammed by humans. This is an AI that has learnt, without task-specific training, to solve novel, unseen problems - a paradigmatic case of the sort we would expect to require deep understanding and real thought from the best and brightest humans - yet here is a machine, incapable of true understanding and thought, coming up with correct solutions anyhow.

Still don't find that hard to fathom? If so, I still find that hard to fathom. The best I can make of it is that AI of this capacity challenges some aspects of the mainstream views we hold on this forum, and it's easier to summarily dismiss it than to reflect on the nature of the challenge and on how to respond more thoughtfully.
(2024-12-30, 11:13 AM)Laird Wrote: Shifting the discussion from the 3 Substances in 3 Environments thread into this more appropriate one: ...

It’s impressive as a piece of programming, but I don’t know why I would think this was a sign that AI was conscious?

Just to note there are Materialists like Searle who deny AI could be conscious, whereas immaterialists like Chalmers, Hoffman, & Arvan among others do think AI can be conscious. So I don’t see this as necessarily being a metaphysical issue?

I think Theorem Provers are an incredible feat of human ingenuity. I don’t think the Provers are conscious though.

Can AI do math yet?

Xeno Press

Quote:So why make such a dataset? The problem is that grading solutions to “hundreds” of answers to “prove this theorem!” questions is expensive (one would not trust a machine to do grading at this level, at least in 2024, so one would have to pay human experts), whereas checking whether hundreds of numbers in one list correspond to hundreds of numbers in another list can be done in a fraction of a second by a computer. As Borcherds pointed out, mathematics researchers spend most of the time trying to come up with proofs or ideas, rather than numbers, however the FrontierMath dataset is still extremely valuable because the area of AI for mathematics is desperately short of hard datasets, and creating a dataset such as this is very hard work (or equivalently very expensive). This recent article by Frieder et al talks in a lot more depth about the shortcomings in datasets for AI in mathematics.
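The grading asymmetry described there is easy to make concrete. A minimal sketch (problem IDs, answers, and format all invented for illustration; this is not the actual FrontierMath data format):

```python
# Grading "find this number!" answers is an exact, instant comparison;
# grading a prose proof would need a paid human expert.
expected = {"p1": 42, "p2": 1729, "p3": 5}   # hidden answer key (made up)
submitted = {"p1": 42, "p2": 1728, "p3": 5}  # a model's answers (made up)

score = sum(submitted.get(pid) == ans for pid, ans in expected.items())
print(f"{score}/{len(expected)} correct")    # -> 2/3 correct, graded instantly
```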

Quote:However, as Borcherds points out, even if we ended up with a machine which was super-human at “find this number!” questions, it would still have limited applicability in many areas of research mathematics, where the key question of interest is usually how to “prove this theorem!”. In my mind, the biggest success story in 2024 is DeepMind’s AlphaProof, which solved four out of the six 2024 IMO (International Mathematics Olympiad) problems. These were either “prove this theorem!” or “find a number and furthermore prove that it’s the right number” questions and for three of them, the output of the machine was a fully formalized Lean proof. Lean is an interactive theorem prover with a solid mathematics library mathlib containing many of the techniques needed to solve IMO problems and a lot more besides; DeepMind’s system’s solutions were human-checked and verified to be “full marks” solutions. However, we are back at high school level again; whilst the questions are extremely hard, the solutions use only school-level techniques. In 2025 I’m sure we’ll see machines performing at gold level standard in the IMO. However this now forces us to open up the “grading” can of worms which I’ve already mentioned once, and I’ll finish this post by talking a little more about it.
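For the "prove this theorem!" side, the point of Lean is that the compiler itself does the grading: if a file type-checks, the proof is correct. A toy example, vastly simpler than any IMO formalization and using only core Lean 4 (no mathlib):

```lean
-- If this file compiles, both proofs below are machine-verified;
-- no human grader is needed.

-- Closing a goal by citing a library lemma:
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Using a tactic to rewrite with that lemma:
example (a b c : Nat) : a + (b + c) = a + (c + b) := by
  rw [Nat.add_comm b c]
```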


(2024-12-30, 06:32 PM)Sciborg_S_Patel Wrote: It’s impressive as a piece of programming, but I don’t know why I would think this was a sign that AI was conscious?

That wasn't where I was going; I was alluding more to the doubts this sort of performance raises about the arguments some of us wield against epiphenomenalism (such as the epiphenomenalism David Chalmers advocates as property dualism): that thinking and understanding are intimately tied up with phenomenal experience, and therefore that the phenomenal experience of thought and understanding cannot be a mere epiphenomenon. This level of performance by AI raises the possibility that something akin to thinking and understanding could occur entirely prior to consciousness, with the phenomenal experience associated with that thinking and understanding a mere epiphenomenal tack-on.

(2024-12-30, 06:32 PM)Sciborg_S_Patel Wrote: Can AI do math yet?

That was an interesting article, but at worst all it demonstrates is that the best current AIs can solve all or almost all undergraduate-level maths problems. Given that the models are still improving, this isn't such a damning critique.
(2024-12-30, 07:47 PM)Laird Wrote: That wasn't where I was going; I was alluding more to the doubts this sort of performance raises about the arguments some of us wield against epiphenomenalism ...

I feel the argument against epiphenomenalism is that we shouldn't expect epiphenomenal consciousness to reflect on its own self-reflexiveness?

I mean we've had Theorem Provers for years now, so it's unclear what "Find the Number" questions like those in the FrontierMath dataset have to add to this debate?

Admittedly my personal argument against epiphenomenalism is that the best accounts of how causation works involve Mind in some way. At the very least I think the gaps we have in explaining causation are a good reason to doubt epiphenomenalism?


(2024-12-30, 07:53 PM)Sciborg_S_Patel Wrote: I feel the argument against epiphenomenalism is that we shouldn't expect epiphenomenal consciousness to reflect on its own self-reflexiveness?

That's the knock-down argument which AI can't touch, yes. I was thinking more, though, of the sort of counterarguments EJ Lowe makes against "the hard problem", which you endorse.

(2024-12-30, 07:53 PM)Sciborg_S_Patel Wrote: I mean we've had Theorem Provers for years now, so it's unclear what "Find the Number" questions like those in the FrontierMath dataset have to add to this debate?

I haven't looked into theorem provers beyond those using the proof-tree (aka semantic tableau) method, but, having written a proof-tree tool myself, I know that those at least are very rule-based and not in any meaningful sense intelligent compared to LLMs. I don't know about theorem provers more generally.
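To make "rule-based" concrete, here's a rough sketch of the tableau idea for propositional logic (a toy encoding of my own for illustration, not the code of my actual tool or of any real prover): each connective has a fixed expansion rule, applied mechanically until every branch either closes on a contradiction or runs out of expansions.

```python
# Toy proof-tree (semantic tableau) prover for propositional logic.
# Formulas: an atom is a string; compounds are ("not", f),
# ("and", f, g), or ("or", f, g).

def satisfiable(branch):
    """True if the list of formulas `branch` has an open tableau branch."""
    for f in branch:
        if not isinstance(f, tuple):
            continue  # bare atom: nothing to expand
        rest = [g for g in branch if g is not f]
        if f[0] == "and":                       # A/\B: add both conjuncts
            return satisfiable(rest + [f[1], f[2]])
        if f[0] == "or":                        # A\/B: split into two branches
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if f[0] == "not" and isinstance(f[1], tuple):
            inner = f[1]
            if inner[0] == "not":               # not-not-A: add A
                return satisfiable(rest + [inner[1]])
            if inner[0] == "and":               # not(A/\B): split on not-A | not-B
                return (satisfiable(rest + [("not", inner[1])])
                        or satisfiable(rest + [("not", inner[2])]))
            if inner[0] == "or":                # not(A\/B): add not-A and not-B
                return satisfiable(rest + [("not", inner[1]), ("not", inner[2])])
        # ("not", atom) is a literal: leave it alone
    # Only literals remain: branch is open unless some atom appears both ways.
    atoms = {f for f in branch if isinstance(f, str)}
    negated = {f[1] for f in branch if isinstance(f, tuple)}
    return not (atoms & negated)

def valid(formula):
    """A formula is valid iff its negation has no open branch."""
    return not satisfiable([("not", formula)])

print(valid(("or", "A", ("not", "A"))))   # True: excluded middle
print(valid(("and", "A", ("not", "A"))))  # False: a contradiction isn't valid
```

No statistics, no learning - just mechanical pattern-matching on connectives, which is the sense in which such provers differ from LLMs.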

"Find the Number" is again to me a surprisingly dismissive phrase: a human would need deep understanding to perform the "search" for these numbers; the AI which is solving these problems, though fundamentally algorithmic, is not simple brute-force nor at a higher level even rule-based stuff. What it's doing seems quite remarkable to me.

(2024-12-30, 07:53 PM)Sciborg_S_Patel Wrote: Admittedly my personal argument against epiphenomenalism is that the best accounts of how causation works involve Mind in some way. At the very least I think the gaps we have in explaining causation are a good reason to doubt epiphenomenalism?

Ah, as you know, we see things somewhat differently here, and there's probably no point in rehashing our differences. (Not that I'm claiming to fully understand causation).
(2024-12-30, 08:22 PM)Laird Wrote: "Find the Number" is again to me a surprisingly dismissive phrase: a human would need deep understanding to perform the "search" for these numbers, and the AI which is solving these problems, though fundamentally algorithmic, is doing neither simple brute-force search nor, at a higher level, mere rule-based manipulation. What it's doing seems quite remarkable to me.

I'm not sure what you mean by saying the AI is algorithmic but not rule-based?

Quote:That's the knock-down argument which AI can't touch, yes. I was thinking more, though, of the sort of counterarguments EJ Lowe makes against "the hard problem", which you endorse.

I'm not sure what AI is doing that refutes EJ Lowe's argument?


