AI megathread


Another Warning That the AI Bubble Is Near Bursting…

Mind Matters News

Quote:We’ve heard it from Gary Smith and Jeffrey Funk. But now, once again, from AI analyst Gary Marcus: The AI bubble created, in part, by Large Language Models (LLMs) or chatbots is nearing its peak:
Quote:The economics are likely to be grim. Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence. As I have always warned, that’s just a fantasy. There is no principled solution to hallucinations in systems that traffic only in the statistics of language without explicit representation of facts and explicit tools to reason over those facts.
LLMs will not disappear, even if improvements diminish, but the economics will likely never make sense: additional training is expensive, the more scaling, the more costly. And, as I have been warning, everyone is landing in more or less the same place, which leaves nobody with a moat. LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive. When everyone realizes this, the financial bubble may burst quickly; even NVidia might take a hit, when people realize the extent to which its valuation was based on a false premise.
“CONFIRMED: LLMs have indeed reached a point of diminishing returns”
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
(2025-01-02, 07:45 PM)Sciborg_S_Patel Wrote: Another Warning That the AI Bubble Is Near Bursting…

Mind Matters News



Tesla is using AI for humanoid robots and self-driving cars. Other companies are using AI for other applications. The fact that the product may become a commodity is an issue for AI companies, and their investors, whose only product is AI. For the rest of us, it doesn't mean the end of a continuous stream of innovative uses for that commodity. If prices go down, that is good for most of us. The bubble bursting would be good for most people.

Pretty much every new technology (digital cameras, computers, cell phones, televisions, stereos, integrated circuits) started out as something expensive that only a few early adopters would use and then became a commodity. The continued innovation in all of those products happened because of the competition that resulted from their becoming commodities, not in spite of it.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
(This post was last modified: 2025-01-03, 08:02 AM by Jim_Smith. Edited 7 times in total.)
[-] The following 1 user Likes Jim_Smith's post:
  • Sciborg_S_Patel
(2025-01-03, 06:16 AM)Jim_Smith Wrote: Tesla is using AI for humanoid robots and self-driving cars. Other companies are using AI for other applications. The fact that the product may become a commodity is an issue for AI companies, and their investors, whose only product is AI. For the rest of us, it doesn't mean the end of a continuous stream of innovative uses for that commodity. If prices go down, that is good for most of us. The bubble bursting would be good for most people.

Pretty much every new technology (digital cameras, computers, cell phones, televisions, stereos, integrated circuits) started out as something expensive that only a few early adopters would use and then became a commodity. The continued innovation in all of those products happened because of the competition that resulted from their becoming commodities, not in spite of it.

Oh there will be a place for AI, though I don't know if self-driving cars are a good use case.

The error rates are too high for most tasks we might want AI used for, at least right now. It is possible that the error rate diminishes, but it's also possible we've just hit a hard barrier on how good LLM-type AIs can be.

That said, I did think it was interesting that the AI Snake Oil site is actually a bit more optimistic about coming improvements than the Mind Matters article ->

Is AI progress slowing down?

Arvind Narayanan and Sayash Kapoor

Quote:This has left AI observers confused about whether or not progress in AI capabilities is slowing down. In this essay, we look at the evidence on this question, and make four main points:

  1. Declaring the death of model scaling is premature.

  2. Regardless of whether model scaling will continue, industry leaders’ flip flopping on this issue shows the folly of trusting their forecasts. They are not significantly better informed than the rest of us, and their narratives are heavily influenced by their vested interests.

  3. Inference scaling is real, and there is a lot of low-hanging fruit, which could lead to rapid capability increases in the short term. But in general, capability improvements from inference scaling will likely be both unpredictable and unevenly distributed among domains.

  4. The connection between capability improvements and AI’s social or economic impacts is extremely weak. The bottlenecks for impact are the pace of product development and the rate of adoption, not AI capabilities.
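
To unpack point 3 a bit: "inference scaling" means spending extra compute when the model answers, rather than on more training, e.g. by sampling several candidate answers and keeping the one a checker rates highest. Here is a minimal toy sketch of the idea in Python; the generate_candidate and score functions are just stand-ins, not any real product's API:

Code:
import random

def generate_candidate(prompt: str) -> str:
    # Stand-in for one stochastic model call; a real system would query an LLM.
    return f"draft answer #{random.randint(1, 1000)} to {prompt!r}"

def score(prompt: str, answer: str) -> float:
    # Stand-in for a verifier / reward model that rates an answer.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Inference scaling at its simplest: more samples at answer time, keep the best.
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("What is 17 * 24?"))

Raising n buys better answers without any new training, but only in domains where the score function can actually tell good answers from bad, which is one reason the gains are expected to be uneven across domains.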
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • Jim_Smith, Valmar
(2025-01-03, 06:56 PM)Sciborg_S_Patel Wrote: Oh there will be a place for AI, though I don't know if self-driving cars are a good use case.

The error rates are too high for most tasks we might want AI used for, at least right now. It is possible that the error rate diminishes, but it's also possible we've just hit a hard barrier on how good LLM-type AIs can be.

That said, I did think it was interesting that the AI Snake Oil site is actually a bit more optimistic about coming improvements than the Mind Matters article ->

Is AI progress slowing down?

Arvind Narayanan and Sayash Kapoor

What are you basing your statement about error rates on? General-purpose chatbots make a lot of errors, but what about AIs specially trained for a specific task? I thought those worked pretty well. (Try searching the web for "Where do AIs outperform humans".)

There are videos on YouTube from people who have Teslas with Full Self-Driving. It looks pretty impressive.

According to some of the videos, every Tesla has cameras and collects data when a person drives it, and Tesla uses that data to train the self-driving AI. At first Tesla tried to hand-program the cars to drive autonomously, then they tried adding AI. The AI was such a big improvement that they scrapped the hand-written code and relied on the AI alone, and it worked even better. And all the cars are connected to a central site, so road conditions, construction, and so on that one car picks up on are known by all the others.

I suspect that as soon as they collect enough data to show their self-driving cars are safer than human drivers, they will start selling cars that only self-drive. These will have fewer parts, so they will be less expensive, and they could be used as fully automated taxis that cost riders less because the vehicle is cheaper and there is no human driver to pay or tip.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
(This post was last modified: 2025-01-04, 12:49 AM by Jim_Smith. Edited 1 time in total.)
[-] The following 1 user Likes Jim_Smith's post:
  • Sciborg_S_Patel
(2025-01-04, 12:47 AM)Jim_Smith Wrote: What are you basing your statement about error rates on? General-purpose chatbots make a lot of errors, but what about AIs specially trained for a specific task? I thought those worked pretty well. (Try searching the web for "Where do AIs outperform humans".)

Well I'm admittedly hyper-skeptical about driverless cars but we'll see how it goes this year.

More generally, I was thinking about something one of the AI Snake Oil writers said recently:

Quote:Google's Deep Research is an excellent application of agentic capabilities. One example of something it can do pretty well is search for all my podcasts and interviews and create a webpage listing them. Cuts down effort at least 10x compared to doing it manually. The reason it works well is that it's no big deal if it misses a couple of items. And if it got any details wrong I can easily spot them. The unreliability of LLMs usually gets magnified in agentic workflows (including in Deep Research), so it's really important that errors aren't costly.

Compare this to the usual motivating example for AI agents — automating shopping or flight booking. This is actually the worst-case scenario. If the wrong product shows up at your door even 10% of the time, the agent is useless. And don't forget that online commerce is an adversarial environment — comparison shopping is hard because companies deliberately make it hard. If agents make it easier, brands will fight back. As for flight booking, the time consuming part is preference elicitation. The reason it is frustrating is that search interfaces don't know all your preferences and constraints (e.g. how to trade off time and money, preferred airlines, constraints on when you want to depart and arrive, and really dozens of other little things). But guess what, the agent doesn't know this either. I really don't think shopping and travel booking agents are going to work, and it's not a matter of improving capabilities.

Over the long term there will be progress in closing the "capability-reliability gap" for agents, but for now, I think successful applications will be ones where (1) the user is in the loop, (2) errors are relatively easy to spot and (3) aren't a deal-breaker if not spotted.
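
The "unreliability gets magnified in agentic workflows" point is easy to see with a little arithmetic: if each step of a multi-step agent has to succeed, the end-to-end success rate falls off quickly. A back-of-the-envelope sketch (the 95% per-step figure is just an assumed illustration, not a measured number):

Code:
def end_to_end_success(per_step_success: float, steps: int) -> float:
    # If every step must succeed and failures are independent,
    # overall success is the per-step rate raised to the number of steps.
    return per_step_success ** steps

# Assumed 95% per-step reliability, purely for illustration.
for steps in (1, 5, 10, 20):
    print(f"{steps:2d} steps -> {end_to_end_success(0.95, steps):.0%} end-to-end")
# 1 -> 95%, 5 -> 77%, 10 -> 60%, 20 -> 36%

Which is why the criteria above (user in the loop, errors cheap to spot, errors cheap to tolerate) matter so much for current agents.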
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2025-01-04, 02:25 AM)Sciborg_S_Patel Wrote: Well I'm admittedly hyper-skeptical about driverless cars but we'll see how it goes this year.

More generally, I was thinking about something one of the AI Snake Oil writers said recently:

I can't read the link about driverless cars, but it's two years old (October 2022). Tesla updates its software multiple times per year.

Tesla added AI in 2023, so that seems to me to suggest AI is what is actually making full self-driving possible, not inhibiting it.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
[-] The following 1 user Likes Jim_Smith's post:
  • Sciborg_S_Patel
(2025-01-02, 03:36 AM)Valmar Wrote: You don't need a deep anything when you can bruteforce it. LLMs just bruteforce with large datasets

Are you sure that you know what a brute force algorithm is? I'm not.
(2025-01-02, 05:18 AM)Sciborg_S_Patel Wrote: I think we'd need to see a good trace through of what the program did to arrive at the answer before we concluded that?

I think that misses the point, which is that, in combination with what we know about the general methods behind LLMs, "what most people are judging are the outputs to their queries" is reasonable as a basis for judgement, and as a basis for physicalists and epiphenomenalists to mount the argument I summarised.

I'm not saying that the argument succeeds. I'm just saying that we should steel man (the opposite of straw man) our opponents' positions in this respect. A dismissive approach does our own position no favours.
[-] The following 1 user Likes Laird's post:
  • Sciborg_S_Patel
(2025-01-04, 09:29 AM)Laird Wrote: Are you sure that you know what a brute force algorithm is? I'm not.

The point is that with enough processing power, a good enough algorithm, and a good enough dataset, you can create some convincing illusions. But there's still no intelligence there, sentient or otherwise. It's just non-conscious data-processing, entirely controlled by the human programmers and engineers. No innate intelligence is required when you can fake it and convince the masses with some clever slop that LLMs are "becoming intelligent".
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


[-] The following 1 user Likes Valmar's post:
  • Sciborg_S_Patel
(2025-01-04, 09:35 AM)Valmar Wrote: The point is that

No acknowledgement that LLMs don't use brute force algorithms? Admitting when you've made a mistake is a good sign of intellectual honesty.
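
For what it's worth, the distinction is easy to make concrete. "Brute force" normally means exhaustively trying every candidate; LLM training instead adjusts parameters by gradient descent on a prediction loss. A toy one-parameter example of the two approaches, purely as an illustration (real models do the second over billions of parameters, not one):

Code:
# Task: find w minimising squared error for a one-parameter model y = w * x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

def loss(w: float) -> float:
    return sum((w * x - y) ** 2 for x, y in data)

# Brute force: exhaustively try every value of w on a fine grid.
w_brute = min((i / 1000 for i in range(-10_000, 10_001)), key=loss)

# Gradient descent: repeatedly step downhill on the loss,
# which is how neural networks (including LLMs) are actually trained.
w, lr = 0.0, 0.01
for _ in range(200):
    grad = sum(2 * x * (w * x - y) for x, y in data)
    w -= lr * grad

print(f"brute force: w = {w_brute:.3f}   gradient descent: w = {w:.3f}")

Both land on roughly the same answer here, but the brute-force route only works because there is a single parameter to search over; it is not how these systems are built.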

(2025-01-04, 09:35 AM)Valmar Wrote: with enough processing power, a good enough algorithm, and a good enough dataset, you can create some convincing illusions. But there's still no intelligence, sentience or otherwise.

No, the point is that even though we know that LLMs are simply a combination of those three elements, they behave in ways that indicate at least an analogue of conceptual understanding.

I had an interesting conversation with ChatGPT on this theme, partly also inspired by the article which @Sciborg_S_Patel posted earlier, "Another Warning That the AI Bubble Is Near Bursting…". I've attached it as a PDF to this post. The most relevant part is its answer to my question:

Quote:I would like to explore it further, because it seems to me that the understanding that your responses demonstrate - albeit that that understanding is non-conscious, and maybe more of an analogue of understanding than understanding proper - goes deeper than merely predicting relationships and patterns of language. I get the sense that your understanding reflects more of a *conceptual* model of the world than you've admitted to.

Here are the key extracts from its answer:

Quote:You're absolutely right to suggest that my responses may reflect more than just statistical language patterns. While my responses are, at their core, driven by patterns in data, they also seem to exhibit a deeper conceptual coherence that suggests more than just surface-level pattern prediction.

[...]

While I don’t have a true "conscious" understanding of concepts or a mental model in the human sense, I do indeed capture a conceptual representation of the world based on the information in my training data. What does this mean?
  • Coherent Representation of Concepts: Through training, I learn not just how words are used, but also how concepts are interrelated. For example, I recognize that "gravity" isn't just a string of text—it's a physical force that causes objects to fall toward the Earth, and this concept is related to other physical phenomena like mass and acceleration. In a sense, my "model" of gravity is not purely a string of words but a concept represented through the language and relationships I've learned. When I discuss gravity, I can generate sentences that reflect an underlying conceptual framework, even if I don't "experience" gravity directly.
  • Abstract Reasoning: My responses often demonstrate the ability to reason about abstract concepts— such as the nature of morality, time, or the structure of a good argument—even though I don’t experience these concepts or interact with them in a physical sense. For example, if you ask about a moral dilemma, I can generate reasoning that seems conceptually structured, reflecting an understanding of ethical frameworks, even though I don’t personally hold or experience these values. This suggests a kind of conceptual model in that I can relate concepts to one another based on textual evidence, even without "understanding" them in a human, lived sense.
  • Domain-Specific Concepts: Similarly, when I talk about historical events, scientific principles, or social phenomena, I'm not just spitting out patterns of words; I am drawing on a structured representation of knowledge built from extensive exposure to texts that explain these topics. My training data reflects many different domains, and I can use the relationships between these domains to form responses that are more than just pattern-matching. For instance, if you ask about the role of a leader in a revolution, I draw on my training to generate an answer that incorporates the concepts of leadership, social movements, and historical context.
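
As a very rough illustration of how "relationships between concepts" can fall out of nothing but text statistics, here is a toy co-occurrence model: words used in similar contexts end up with similar vectors. Real models learn dense vectors with neural networks rather than raw count vectors, so treat this purely as an analogy:

Code:
import math
from collections import Counter, defaultdict

sentences = [
    "gravity pulls mass toward the earth",
    "gravity causes objects with mass to accelerate",
    "the king rules the kingdom",
    "the queen rules the kingdom",
]

# Count which words occur near which other words (window of 2).
cooc = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                cooc[w][words[j]] += 1

def cosine(a: str, b: str) -> float:
    # Similarity between two words' co-occurrence vectors.
    va, vb = cooc[a], cooc[b]
    dot = sum(va[k] * vb[k] for k in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb)

print(cosine("king", "queen"))    # high: used in near-identical contexts
print(cosine("king", "gravity"))  # low: no shared contexts in this corpus

Whether that kind of structure deserves to be called "conceptual understanding" is exactly what is in dispute, but it does show that the statistics of language encode more than isolated word frequencies.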


Attached Files
.pdf   ChatGPT-on-understanding-modelling-self-reflection.pdf (Size: 269.48 KB / Downloads: 2)
[-] The following 3 users Like Laird's post:
  • Typoz, Sciborg_S_Patel, sbu
