AI megathread

134 Replies, 2508 Views

(2024-02-17, 06:23 PM)Silence Wrote: I also see the potential for amazing things in:

Medicine
Commerce
Increases (perhaps geometric) in Standards of Living
Transparency, education, exchanges of (real, thoughtful) ideas, etc

As always, we're a messy species.  We screw things up.

That is why I will never trust AI to safely do the things that you think of as "amazing things."
(This post was last modified: 2024-02-18, 01:13 PM by Brian. Edited 1 time in total.)
[-] The following 2 users Like Brian's post:
  • Sciborg_S_Patel, nbtruthman
(2024-02-17, 06:23 PM)Silence Wrote: While I share concerns with the ongoing evolution of AI tech, I find myself wary of being alarmist.  I'm not sure this is fundamentally any different than many leaps of human technology/industry/engineering from our past.

I spend too much time on social media, especially of late.  While I try like hell to challenge my own biases, I feel like I'm holding on to my core belief in rationality.  That said, the social media world, especially Twitter/X, seems like a cesspool of grifters, anarchists, and a lot of the irrational.  That's why I said I'm probably spending too much time there.  It's really quite depressing to see so many peddling partisan bullshit (both left/right), and especially so many who hold positions of real influence in the world (e.g., Musk, others).

It's always darkest before the dawn, as the saying goes, so I expect some of these AI developments to create real societal stress while we adjust to this new paradigm.  However, I also see the potential for amazing things in:

Medicine
Commerce
Increases (perhaps geometric) in Standards of Living
Transparency, education, exchanges of (real, thoughtful) ideas, etc

As always, we're a messy species.  We screw things up.  But most people that I personally know, those in my broadest social network (i.e., family, friends, business, etc.), are overwhelmingly good and kind people.  You don't get this sense scanning through Twitter, but I think it's just the ugly, early stage of the digital landscape.

Time will tell I guess.

What I really like about this forum is that we are not discussing politics. I also like that there's a good mix of European and American posters. It would be great if there were users from Asia and maybe even Africa too (sorry if I'm missing some here). I hope more will join.
(This post was last modified: 2024-02-18, 03:27 PM by sbu. Edited 1 time in total.)
[-] The following 4 users Like sbu's post:
  • stephenw, Silence, Sciborg_S_Patel, Brian
Funny because true ->

'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2024-02-18, 01:13 PM)Brian Wrote: That is why I will never trust AI to safely do the things that you think of as "amazing things."

To each his own, but I find this sentiment to be akin to sticking one's head in the sand.

Time will tell, but I'd bet a small fortune you'll have a different viewpoint in a few years.  (Maybe even sooner)
You might get a laugh out of the latest example of junk "scientific" papers generated by lazy and irresponsible scientists using generative AI tools. It was actually peer reviewed. Some peers.

At https://www.vice.com/en/article/dy3jbz/s...g-incident:

Quote:"AI-generated images in a new academic paper included a rat with a gigantic penis; a peer reviewer who spoke to Motherboard said it wasn't their concern.

One figure in the paper is a diagram of a dissected rat penis, and although a textual description will not do it justice, it looks like the rat’s penis is more than double the size of its body and has all the hallmarks of janky AI generation, including garbled text. Labels in the diagram include “iollotte sserotgomar cell,” “testtomcels,” and “dck,” though the AI program at least got the label “rat” right. The paper credits the images to Midjourney, a popular generative AI tool.
...........................
[Image: the AI-generated rat anatomy figure from the paper]
...........................
The incident is the latest example of how generative AI has seeped into academia, a trend that is worrying to scientists and observers alike. On her personal blog, science integrity consultant Elisabeth Bik wrote that “the paper is actually a sad example of how scientific journals, editors, and peer reviewers can be naive—or possibly even in the loop—in terms of accepting and publishing AI-generated crap.”

“These figures are clearly not scientifically correct, but if such botched illustrations can pass peer review so easily, more realistic-looking AI-generated figures have likely already infiltrated the scientific literature. Generative AI will do serious harm to the quality, trustworthiness, and value of scientific papers,” Bik added.
...........................
Nature banned the use of generative AI for images and figures in articles last year, citing risks to integrity."
[-] The following 3 users Like nbtruthman's post:
  • Sciborg_S_Patel, Brian, Silence
US and China agree to map out framework for developing AI responsibly. Here's what you need to know

By Toby Mann for the ABC on 25 February, 2024

Quote:In November, the US and more than a dozen other countries, with the notable exception of China, unveiled a 20-page non-binding agreement carrying general recommendations on AI.

The agreement covered topics including monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

But it didn't mention things like the appropriate uses of AI, or how the data that feeds these models is gathered.

At a global AI safety summit in the UK in November, Wu Zhaohui, China's vice minister of science and technology, said Beijing was ready to increase collaboration on AI safety to help build an "international mechanism, broadening participation, and a governance framework based on wide consensus delivering benefits to the people".
[-] The following 2 users Like Laird's post:
  • stephenw, Sciborg_S_Patel
I remember using an AI image generator and discovering that paedophiles were using it to create child abuse images.  I tried reporting those that I found but nothing ever got done about it.

https://www.theguardian.com/society/2023...s-watchdog

Quote:Freely available artificial intelligence software is being used by paedophiles to create child sexual abuse material (CSAM), according to a safety watchdog, with offenders discussing how to manipulate photos of celebrity children or known victims to create new content.

The Internet Watch Foundation said online forums used by sex offenders were discussing using open source AI models to create fresh illegal material. The warning came as the chair of the government’s AI taskforce, Ian Hogarth, raised concerns about CSAM on Tuesday as he told peers that open source models were being used to create “some of the most heinous things out there”.
[-] The following 2 users Like Brian's post:
  • stephenw, Sciborg_S_Patel
(2024-03-03, 11:57 AM)Brian Wrote: I remember using an AI image generator and discovering that paedophiles were using it to create child abuse images.  I tried reporting those that I found but nothing ever got done about it.

https://www.theguardian.com/society/2023...s-watchdog

Honestly, while not accepting conspiracy theories, I am increasingly worried that there are pedophiles spread across varied institutions.

It doesn't mean that there is any network, but rather a disturbing amount of sympathy and tolerance. It's something I've talked about with people around the globe regarding cases where judges are bizarrely lenient with pedophiles.

Also in the past I've dismissed the connections between people in tech/science and Epstein, but I now worry I was actually too dismissive...

Really, it would not surprise me if a lot of the support for art generators came from varied unsavory corners.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2024-03-03, 07:50 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 3 users Like Sciborg_S_Patel's post:
  • stephenw, Silence, Brian
I thought we had a general AI thread but I can't find it.  Anyway...


Quote:Reported $60M Reddit deal signed to train AI models with user data

Quote:Reddit has reportedly signed a $60 million deal with an unnamed AI biz to hand over user conversations for model training. The deal comes as Reddit looks to boost interest in its upcoming IPO. Reddit reportedly told prospective investors about the $60 million contract earlier this year, and indicated that its execs may repeat this type of content-sharing-for-model-training deal in the future.

https://www.theregister.com/2024/02/20/r...t_ai_deal/
[-] The following 2 users Like Brian's post:
  • Laird, Silence
(2024-01-23, 01:15 PM)Laird Wrote: Much more recently, on 9 December, 2023, the European Union reached a deal on the world's first rules for artificial intelligence

[...]

The next steps are for the agreed text "to be formally adopted by both Parliament and Council to become EU law. Parliament’s Internal Market and Civil Liberties committees will vote on the agreement in a forthcoming meeting." I'm not sure when that meeting is/was scheduled for.

Re the bit quoted above about the forthcoming committee meeting and vote, that meeting and the vote were held about a week ago:

Europe’s world-first AI rules get final approval from lawmakers. Here’s what happens next

By Kelvin Chan for AP on March 13, 2024

Quote:Lawmakers in the European Parliament voted overwhelmingly in favor of the Artificial Intelligence Act, five years after regulations were first proposed.

Quote:The AI Act is expected to officially become law by May or June, after a few final formalities, including a blessing from EU member countries. Provisions will start taking effect in stages, with countries required to ban prohibited AI systems six months after the rules enter the lawbooks.

Rules for general purpose AI systems like chatbots will start applying a year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force.

When it comes to enforcement, each EU country will set up their own AI watchdog, where citizens can file a complaint if they think they’ve been the victim of a violation of the rules. Meanwhile, Brussels will create an AI Office tasked with enforcing and supervising the law for general purpose AI systems.
[-] The following 1 user Likes Laird's post:
  • Brian
