AI machines aren’t ‘hallucinating’. But their makers are


(2023-05-11, 06:22 AM)sbu Wrote: It's definitely already starting. I have used ChatGPT-4 every day in my work since March, on some days increasing productivity by maybe up to 50%. Now, if everybody does this, will there still be enough work, or will people be made redundant? I think the latter.
That assumes that people need to work for some reason. In reality, the work-to-live model of society died in the 1920s once mass production started. But instead of embracing that in a way that benefited people, they introduced planned obsolescence and other schemes to make things purposefully lower-quality and temporary, just to keep the machine going. Now I don't think they have that option. People are more than their jobs, and at this stage, with the technology available, people don't really need money for the most part. People could use these innovations to remove the need for almost any work so they can focus on things they actually care about, which would certainly make things better when they don't need to do it for profit just to survive. But it's up to people to make that change in their own lives and create that world.
"The cure for bad information is more information."
[-] The following 3 users Like Mediochre's post:
  • Ninshub, Sciborg_S_Patel, Typoz
(2023-05-11, 06:31 AM)Mediochre Wrote: People could use these innovations to remove the need for almost any work so they can focus on things they actually care about, which would certainly make things better when they don't need to do it for profit just to survive. But it's up to people to make that change in their own lives and create that world.

I don't really know about that. The real problem will be keeping the masses occupied and content. People must engage in purposeful activities, or they go crazy. Many people today already struggle with purpose after retirement.
[-] The following 2 users Like sbu's post:
  • Ninshub, nbtruthman
(2023-05-11, 09:16 AM)sbu Wrote: Many people today already struggle with purpose after retirement.

But most don't AND we haven't taken a full societal look at what a non-working default society might look like.
[-] The following 2 users Like Silence's post:
  • Ninshub, Sciborg_S_Patel
(2023-05-11, 06:31 AM)Mediochre Wrote: That assumes that people need to work for some reason. In reality, the work-to-live model of society died in the 1920s once mass production started. But instead of embracing that in a way that benefited people, they introduced planned obsolescence and other schemes to make things purposefully lower-quality and temporary, just to keep the machine going. Now I don't think they have that option. People are more than their jobs, and at this stage, with the technology available, people don't really need money for the most part. People could use these innovations to remove the need for almost any work so they can focus on things they actually care about, which would certainly make things better when they don't need to do it for profit just to survive. But it's up to people to make that change in their own lives and create that world.

I agree this would be an ideal scenario, but sadly I agree with Klein that the ultimate net effect will be quite negative ->

Quote:Is all of this overly dramatic? A stuffy and reflexive resistance to exciting innovation? Why expect the worst? Altman reassures us: “Nobody wants to destroy the world.” Perhaps not. But as the ever-worsening climate and extinction crises show us every day, plenty of powerful people and institutions seem to be just fine knowing that they are helping to destroy the stability of the world’s life-support systems, so long as they can keep making record profits that they believe will protect them and their families from the worst effects. Altman, like many creatures of Silicon Valley, is himself a prepper: back in 2016, he boasted: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”

I’m pretty sure those facts say a lot more about what Altman actually believes about the future he is helping unleash than whatever flowery hallucinations he is choosing to share in press interviews.

That said I do feel Klein's focus on the doom scenario may mask the actual problems as noted by the writers of the AI Snake Oil substack.

[Image: https://substack-post-media.s3.ama...12x974.png]
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Ninshub
(2023-05-11, 09:16 AM)sbu Wrote: I don't really know about that. The real problem will be keeping the masses occupied and content. People must engage in purposeful activities, or they go crazy. Many people today already struggle with purpose after retirement.

I think this is a basic human problem rooted in basic human nature: the normal distribution of intellectual, artistic, and other capabilities across the population would seem to dictate that a great number will struggle this way, because of in-built fundamental limitations in capability.
(2023-05-11, 12:01 PM)Silence Wrote: But most don't AND we haven't taken a full societal look at what a non-working default society might look like.

I think we'll see civilization collapse due to some X-risk or another before we get to that point... but it would be nice to think a very different world would open up our Psi functioning, as we become more able to separate the Self from the evolutionary drives the "rat race" keeps us in.

That said, I actually think the backlash against a lot of AI is going to be huge, especially when driverless cars cause more fatalities.

There will be more humans-in-the-loop AI but IMO machine "learning" isn't going to give us across-the-board automation.
(This post was last modified: 2023-05-11, 04:55 PM by Sciborg_S_Patel.)
(2023-05-11, 04:54 PM)Sciborg_S_Patel Wrote: There will be more humans-in-the-loop AI but IMO machine "learning" isn't going to give us across-the-board automation.

Obviously, physical work isn't part of the scope we are discussing here. However, in principle, any work process that exhibits patterns can be learned by machine learning. That likely amounts to at least 90% of all desktop work.
(This post was last modified: 2023-05-12, 12:39 PM by sbu. Edited 1 time in total.)
(2023-05-12, 12:38 PM)sbu Wrote: Obviously, physical work isn't part of the scope we are discussing here. However, in principle, any work process that exhibits patterns can be learned by machine learning. That likely amounts to at least 90% of all desktop work.

Well, a lot of work, like writing clickbait articles or summaries based on other people's writing... that stuff is going to be automated, and arguably it can't come soon enough. In an ideal world, the flood of junk makes people sick of clicking on that kind of stuff.

Something like handling a legal case... I have more doubts there that AI will replace rather than assist lawyers. Reading X-rays and making other medical observations... I think liability will keep humans in the loop, as a single error can cost a hospital a lot of money.
(2023-05-12, 05:08 PM)Sciborg_S_Patel Wrote: Something like handling a legal case... I have more doubts there that AI will replace rather than assist lawyers. Reading X-rays and making other medical observations... I think liability will keep humans in the loop, as a single error can cost a hospital a lot of money.
Interestingly enough, near the end of the first decade of AI hype (1980-1989), the cry went up that we needed a concrete example of how AI could be useful. Someone came up with the idea that AI would be great for processing legal arguments.

I would argue that even there, there is a problem. Imagine a stack of bikes created by students attending a lecture. Someone moves a bike to extract their own. Is it valid to argue that this involves taking a bike without the owner's consent?

David
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
(2023-05-12, 07:04 PM)David001 Wrote: Interestingly enough, near the end of the first decade of AI hype (1980-1989), the cry went up that we needed a concrete example of how AI could be useful. Someone came up with the idea that AI would be great for processing legal arguments.

I would argue that even there, there is a problem. Imagine a stack of bikes created by students attending a lecture. Someone moves a bike to extract their own. Is it valid to argue that this involves taking a bike without the owner's consent?

David

Hmm... I'm not sure about the bike case being a huge problem for AI; I guess it would depend on the laws of the particular jurisdiction.

But yes, in general I think trying to have machine-"learning"-based lawyers and judges will be disastrous, if not just an embarrassing failure.
