Almost frightening progress in AI technology

17 Replies, 1123 Views

A new NY Times article (paywall, at https://www.nytimes.com/2022/08/24/techn...e4358ee2df) reveals astonishing new progress in AI-generated art, molecular biology, text, and other fields:

Quote:DALL-E 2 "art": It isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex A.I. process known as “diffusion,” which starts with a random series of pixels and refines it repeatedly until it matches a given text description. And it’s improving quickly — DALL-E 2’s images are four times as detailed as the images generated by the original DALL-E, which was introduced only last year.

...last year, DeepMind’s AlphaFold — an A.I. system descended from the Go-playing one — did something truly profound. Using a deep neural network trained to predict the three-dimensional structures of proteins from their one-dimensional amino acid sequences, it essentially solved what’s known as the “protein-folding problem,” which had vexed molecular biologists for decades.

This summer, DeepMind announced that AlphaFold had made predictions for nearly all of the 200 million proteins known to exist — producing a treasure trove of data that will help medical researchers develop new drugs and vaccines for years to come. Last year, the journal Science recognized AlphaFold’s importance, naming it the biggest scientific breakthrough of the year.

...now, large language models like OpenAI’s GPT-3 are being used to write screenplays, compose marketing emails and develop video games. (I even used GPT-3 to write a book review for this paper last year — and, had I not clued in my editors beforehand, I doubt they would have suspected anything.)

A.I. is writing code, too — more than a million people have signed up to use GitHub’s Copilot, a tool released last year that helps programmers work faster by automatically finishing their code snippets.

Finally, LaMDA and other state-of-the-art language models are becoming eerily good at having humanlike text conversations.


Quote:.........................
But skeptics say that models like GPT-3 and LaMDA are just glorified parrots, blindly regurgitating their training data, and that we’re still decades away if ever from creating true A.G.I. — artificial general intelligence — that is capable of “thinking” for itself.
........................
However, the practical ramifications are profound....the best A.I. systems are now so capable — and improving at such fast rates — that the conversation in Silicon Valley is starting to shift. Fewer experts are confidently predicting that we have years or even decades to prepare for a wave of world-changing A.I.; many now believe that major changes are right around the corner, for better or worse. One estimate, for example, has been that there is a 35% chance that AI will make all white collar knowledge jobs obsolete by 2036.
.................................
In just a few years, the vast majority of the photos, videos and text we encounter on the internet could be A.I.-generated. Our online interactions could become stranger and more fraught, as we struggle to figure out which of our conversational partners are human and which are convincing bots. And tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale, distorting the political process in ways we won’t see coming.
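As an aside, the "diffusion" process the article describes, starting from a random series of pixels and refining repeatedly, can be sketched in toy form. This is emphatically not the real DALL-E pipeline: the `target` list below is a made-up stand-in for whatever image would match the text prompt, and the simple difference from it stands in for the trained denoising network.

```python
import random

random.seed(0)

# Toy "target" image that stands in for whatever matches the text prompt.
SIZE = 64
target = [i / (SIZE - 1) for i in range(SIZE)]

# Start from a random series of pixels, as the article describes.
x = [random.gauss(0.0, 1.0) for _ in range(SIZE)]

# Each refinement step removes a little of the estimated "noise",
# nudging the pixels toward an image matching the description.  In a
# real diffusion model the noise estimate comes from a trained neural
# network; here the difference from the target stands in for it.
for _ in range(200):
    x = [xi - 0.05 * (xi - ti) for xi, ti in zip(x, target)]

error = max(abs(xi - ti) for xi, ti in zip(x, target))
print(f"max pixel error after refinement: {error:.6f}")
```

The point of the sketch is only the shape of the process: pure noise in, repeated small refinements, recognizable image out.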

Comment:
The bottom line, though, is that the most interesting aspect of all this to us is that, despite its transformative potential for our society, none of this new computer software technology is exhibiting the ghost of a chance of actually generating consciousness. That means the apocalyptic predictions of a downloading of human consciousness into advanced AI systems or cyborgs, or of some malevolent conscious AI system taking over the world and wiping out humanity, are still just as indefinitely far in the future, or, more likely, simply impossible, mostly because of the Hard Problem and the fact that human consciousness is fundamentally nonalgorithmic.
(This post was last modified: 2022-08-25, 03:18 PM by nbtruthman. Edited 2 times in total.)
[-] The following 3 users Like nbtruthman's post:
  • Brian, Sciborg_S_Patel, Ninshub
Interesting article. The two examples, the mobster taking a selfie and the knitted sailboat, are indeed impressive; I'm glad I saw those. The very first example, though, an image representing "infinite joy", left me completely unmoved. I didn't recognise either infinity or joy in that image; possibly sorrow or despair.

Perhaps abstract concepts are where such systems don't have any capabilities - at present.


Actually I had my own revelation a few years ago when I played around with a free photo-morphing program. I fed it two images of two of my friends, very different and distinctive individuals, and produced various composites which looked just like a real person, but did not exist anywhere in the real world. It was quite disturbing, yet the technology behind it was an easily-understood transformation of a set of triangles to a new set of triangles.
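The core of that transformation can be sketched in a few lines: corresponding triangle vertices from the two faces are linearly blended, producing an in-between set of triangles that belongs to neither person. The coordinates below are made up for illustration, and a real morphing program also cross-dissolves the pixel colours inside each warped triangle.

```python
def blend_triangles(tris_a, tris_b, t):
    """Blend two lists of triangles; t=0 gives face A, t=1 gives face B."""
    blended = []
    for tri_a, tri_b in zip(tris_a, tris_b):
        blended.append(tuple(
            ((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(tri_a, tri_b)
        ))
    return blended

# One triangle around, say, the left eye in each source photo
# (illustrative coordinates only).
face_a = [((10.0, 20.0), (30.0, 22.0), (20.0, 35.0))]
face_b = [((12.0, 18.0), (34.0, 20.0), (22.0, 40.0))]

halfway = blend_triangles(face_a, face_b, 0.5)
print(halfway[0])  # vertices midway between the two faces
```

An easily-understood transformation, as I said, yet the composite face it produces can be quite convincing.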
[-] The following 2 users Like Typoz's post:
  • Brian, Sciborg_S_Patel
(2022-08-25, 11:09 AM)nbtruthman Wrote: One estimate, for example, has been that there is a 35% chance that AI will make all white collar knowledge jobs obsolete by 2036.


I think my thoughts on AI -> consciousness are known (i.e., ain't happening in my view ;) ).

That said, I can clearly see how more sophisticated and powerful AI systems can have the type of directional impact quoted above.  I'm not sure about the 2036 prediction, but the notion that AI will be able to handle many (most?) white collar jobs doesn't seem far fetched to me.  I work in the investment management business, which has long attracted some very bright people; those generally motivated by money, of course.  Still, when I look at the work we do behind the scenes, the sausage making if you will, it is ripe for this type of AI replacement.  I do think a trusted human will be needed at the tip of the client-facing spear, mostly on the point of trust, but the world is going to look very, very different 10, 20 and 50 years from now.

This is part of the reason I have interest in concepts like fusion energy, 3D printers, robotics, and UBI.  A future that is not dystopian will feature many (all?) of these things.
[-] The following 4 users Like Silence's post:
  • stephenw, Brian, Sciborg_S_Patel, nbtruthman
I've found that for all the big claims about AI, it is still vulnerable to edge cases that can at times be willfully produced.

We'll see how effective AI is when people start to deliberately mess with the vulnerabilities of curve fitting and probability weighting. Recall that Tesla FSD almost hit a biker in the last year. In fact there is talk of an AI winter in the driverless car arena.

My feeling is that while we should definitely take the risks of AI into consideration, part of that risk assessment is looking critically at increasing over-reliance on, and over-hype of, tools that are not "intelligent".

In fact, I think that if lay people were given a better explanation of what is actually going on with machine "learning", the public would better understand the actual issues.
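In that spirit, here's a minimal illustration of the sort of thing that is actually going on: much of machine "learning" amounts to curve fitting, e.g. finding the line y = a*x + b that minimizes squared error over some data points. The data below is made up and noise-free, purely for illustration.

```python
# Made-up, noise-free data generated by y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares slope and intercept: the "learning" here is
# nothing more than minimizing squared error over the training points.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(a, b)  # recovers a=2.0, b=1.0 on this noise-free data
```

Deep networks fit vastly more complicated curves, but the basic character, interpolation over training data rather than "thinking", is the same, which is exactly why adversarial edge cases work.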
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2022-08-25, 05:04 PM by Sciborg_S_Patel. Edited 5 times in total.)
[-] The following 4 users Like Sciborg_S_Patel's post:
  • Brian, Ninshub, Silence, nbtruthman
(2022-08-25, 11:09 AM)nbtruthman Wrote: of some malevolent conscious AI system taking over the world and wiping out humanity
Unfortunately, AI doesn't need to be conscious to cause an apocalypse; it merely needs the hubris of programmers and the callousness of the average board of faceless execs.


In the more immediate term, I think the threat to human dignity is paramount: AI will soon replace the vast majority of art-based careers, robbing humanity of its ability for self-expression. People will still be able to create art at home and share it with others, but it will no longer be a viable career for the vast majority who could otherwise have expected to survive making art full time. The result will be a massive leveling off of people's ability to truly excel at art, and the gap will only widen between the average artist and an AI wielding the sum total of human achievement in the various artistic arenas, puppeteered by an ever-enriched set of elites.
[-] The following 4 users Like letseat's post:
  • Typoz, nbtruthman, stephenw, Ninshub
(2022-08-26, 09:20 AM)letseat Wrote: Unfortunately, AI doesn't need to be conscious to cause an apocalypse; it merely needs the hubris of programmers and the callousness of the average board of faceless execs.
I strongly agree.

Programming using advanced technology can produce very efficient simulations of bad human intent.  And while they only mimic mind and actual organic intent, these information objects can initiate and enforce actions that destroy.  Talk about "cold".  Strict safeguards must evolve.

Art is about meaning.  Computers can simulate art skills and find likely social memes to carry the craft --- BUT --- creative meanings will still be in people's hearts and communicate above and beyond AI.

My father-in-law was a draftsman extraordinaire, if you believe his co-workers in the satellite business.  As in your comments about art, his graphic skills are today mainly automated.  But there is still a level of creative communication that is subtle.  When his company (famous name) lost a gov bid, the (famous company) that won insisted that (let's call him Homer) do the drawings for their winning bid, because they had seen them on the competing bid.  He was leased out to the competitor for the job.  Fit-for-use data and specific, clear, clean meaning are different things.  Homer's vision of engineered parts was different, connected to focused minds and not just to data.

Homer was one of the folks called in to work away from home, 24/7, through the Apollo 13 crisis.
[-] The following 1 user Likes stephenw's post:
  • Ninshub
I'd recommend that people read this book:

https://www.amazon.co.uk/Myth-Artificial...B08TV31WJ3

The author works in AI and pinpoints exactly what AI lacks. He calls it abductive reasoning; I think I would call it open-ended reasoning.

Furthermore, I think back to the decade of AI hype in the 1980s and the subsequent AI winter. I'd believe in the modern AI hype if someone were to explain exactly why it should turn out differently, given how much it resembles the hype back then: the Japanese were supposedly leading the race, and after that all white-collar work would vanish.

We were promised driverless cars, and where are they? Moreover, since we can drive a car without extra radar scanners, always-on GPS, and assorted other sensors, why can't the automatic driving module fit into any modern car with a simple video camera to view what is ahead and what is in the mirrors?

To drive a car through cluttered roads shared by people, their children, and their pets, maybe some road works, or the odd pothole, you need open-ended reasoning.
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
I don't think the driverless car is the right hill to die on if you're trying to call out the limits of AI.  I think they'll get there one day.  Driving isn't particularly imaginative in my view.  It's largely mechanistic.  I bet everyone can attest to going into your own version of auto-pilot while driving on a highway.  Remarkable to me how I sometimes can't recall anything about the prior 5 minutes or so of driving.  I was just doing it.

That's a lot different, again in my view, than the highest order human experiences.

All that said your point still stands: I think the notion of artificial personhood is still just goofy.
[-] The following 1 user Likes Silence's post:
  • Ninshub
(2022-08-29, 01:59 PM)Silence Wrote: I don't think the driverless car is the right hill to die on if you're trying to call out the limits of AI.  I think they'll get there one day.  Driving isn't particularly imaginative in my view.  It's largely mechanistic.  I bet everyone can attest to going into your own version of auto-pilot while driving on a highway.  Remarkable to me how I sometimes can't recall anything about the prior 5 minutes or so of driving.  I was just doing it.

That's a lot different, again in my view, than the highest order human experiences.

All that said your point still stands: I think the notion of artificial personhood is still just goofy.

I think, also, that AI driverless-car technology will get there "one day", but that day will be long delayed by a series of nearly intractable problems. One of them is the question of just how good the system needs to be before public perception takes off and mass buying of the vehicles begins; that is, how low the accident rate has to be before the tradeoff of AI-caused accidents versus human-driver-caused accidents looks right. The first and main performance goal will be parity with the human-driver accident rate. That achievement should, logically, lead many buyers to decide the equation now favors driverless cars, since the outbalancing positive factor is the added convenience and utility of not having to drive the vehicle manually. All bets are off, however, on whether public psychology will go that way. People may insist on 100% reliability of the technology, and that probably will never be achievable.

The other main confounding factor, I think, will also be societal: in the absence of proven 100% reliable, zero-accident-rate driverless cars, there will inevitably be a litigation morass over accidents. Who or what was at fault, the car or the driver? If the car, was it the software design company or the car company? And how do you identify the actual failure in the software design, so as to be able to fix it? As has become apparent with the deep learning technology being used, often the programmers themselves can't determine how the AI system came up with its decision, good or bad, because the AI system itself developed the software design (which may be exceedingly complicated) over thousands or millions of iterations of its deep learning algorithms. That is a fundamental limitation of the computer AI technology and its learning algorithms. (Edit: The lawyers could even argue that the driver shares the fault even though the car AI was driving, because the owner/driver made the original decision to buy the AI driverless auto knowing that it might cause an accident.) Obviously, new laws covering these situations will have to be developed, and any existing ones greatly changed.
(This post was last modified: 2022-08-29, 07:59 PM by nbtruthman. Edited 9 times in total.)
[-] The following 3 users Like nbtruthman's post:
  • Silence, Laird, Sciborg_S_Patel
(2022-08-29, 02:54 PM)nbtruthman Wrote:  All bets are off, however, on whether public psychology will go that way. People may insist on 100% reliability of the technology, and that probably will never be achievable.

The other main confounding factor, I think, will also be societal: in the absence of proven 100% reliable, zero-accident-rate driverless cars, there will inevitably be a litigation morass over accidents, especially in determining who or what was at fault, whether it was the car or the driver that caused the accident; if it was the car, whether it was the software design company or the car company, etc.; and in coming up with the actual failure in software design that caused the accident, so as to be able to fix it.
I don't see any of that as a problem.  I live where the insurance is a "no fault" configuration already.  Robot cars don't have to be perfect - just better than the accident rates of humans.  The impersonal nature will make it easier to process claims.  Insurance data and rates can handle it.

While promoting human creativity over machines -- maybe less creative and aggressive driving could be a good thing.
