AI megathread

134 Replies, 2528 Views

(2023-11-26, 10:17 AM)sbu Wrote: I hope the example above gives an idea of how it works (for example it was able to analyze the code I copied in). It cannot (yet) autonomously write large pieces of software, but you can iteratively instruct it to refine its work until you get the output you want. The IDE-integrated versions are not as limited as the browser version, which only allows it to write limited pieces before it caps out, as in this example.

Thanks for going to all this trouble to produce this example, but I'm not sure it is quite what I thought I'd get, because the program presumably didn't contain a flaw, and all you asked it to do was to provide tests. (I'm not sure what you mean exactly by "unit tests" - does that just mean tests on independent portions (functions) of the code?)

Looking up Dijkstra's algorithm I see it is an algorithm to find the shortest distance between two points in a graph. (To make all this a bit clearer to everyone, the word 'graph' has two different meanings: it can refer to a plot of a function, e.g. sin(x), or it can refer to a set of nodes, such as telephone exchanges, and the connections between them.) Clearly from ChatGPT's output it recognises that algorithm, but perhaps the name of the function "DijkstraAlgo" gave it a clue!

Suppose you damage the code by replacing
int distance[6];
bool Tset[6];

with

int distance[5];
bool Tset[5];

That will probably crash in a rather messy sort of way because the arrays will get overfilled, and C++ does not normally perform array bounds checking. It is even conceivable that it will still work, but if you fiddle around with the array bounds you will soon get it to crash.

Maybe you should also rename anything that refers to Dijkstra, remove the assert statements (we don't want to make it easy for ChatGPT) and then ask for ChatGPT's help.

You would also, of course, need to tell the system what the code was meant to do - not just for that specific case, but in general - otherwise how can it tell if it is working correctly?

I hope this reply doesn't sound overly cynical, but I am wary of the claims of large corporations.

BTW, how do you feed ChatGPT with text you want to discuss?

David
(This post was last modified: 2023-11-26, 05:58 PM by David001. Edited 3 times in total.)
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
(2023-11-26, 05:11 PM)David001 Wrote: I hope this reply doesn't sound overly cynical, but I am wary of the claims of large corporations.

David

It's worth looking up the varied human written implementations of the algorithm...kinda shows how all this machine "learning" is stealing from something humans have produced over & over...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • David001
(2023-11-26, 05:11 PM)David001 Wrote: Thanks for going to all this trouble to produce this example, but I'm not sure it is quite what I thought I'd get, because the program presumably didn't contain a flaw, and all you asked it to do was to provide tests. (I'm not sure what you mean exactly by "unit tests" - does that just mean tests on independent portions (functions) of the code?)

Looking up Dijkstra's algorithm I see it is an algorithm to find the shortest distance between two points in a graph. (To make all this a bit clearer to everyone, the word 'graph' has two different meanings: it can refer to a plot of a function, e.g. sin(x), or it can refer to a set of nodes, such as telephone exchanges, and the connections between them.) Clearly from ChatGPT's output it recognises that algorithm, but perhaps the name of the function "DijkstraAlgo" gave it a clue!

Suppose you damage the code by replacing
int distance[6];
bool Tset[6];

with

int distance[5];
bool Tset[5];

That will probably crash in a rather messy sort of way because the arrays will get overfilled, and C++ does not normally perform array bounds checking. It is even conceivable that it will still work, but if you fiddle around with the array bounds you will soon get it to crash.

Maybe you should also rename anything that refers to Dijkstra, remove the assert statements (we don't want to make it easy for ChatGPT) and then ask for ChatGPT's help.

You would also, of course, need to tell the system what the code was meant to do - not just for that specific case, but in general - otherwise how can it tell if it is working correctly?

I hope this reply doesn't sound overly cynical, but I am wary of the claims of large corporations.

BTW, how do you feed ChatGPT with text you want to discuss?

David

I use ChatGPT professionally every day, mainly as a super-googler, but also for code-related tasks.

So it's important for me to stress that this thing in no way exhibits human intelligence. I don't think it would have recognized Dijkstra's algorithm without the naming clue. But this doesn't matter; the point, as demonstrated, was that you can provide it with some arbitrary code and it can expand on it in a meaningful way given a few quick instructions, as in the example.

Keep in mind, this is technology born in 2017, with 6 years of evolution behind it. The human brain has evolved for millions of years. There's absolutely no reason to believe we have reached the peak of AI technology. But it can already do incredible things that few would have believed possible 10 years ago.

Regarding how you feed ChatGPT, I'm not sure I understand the question. Recently at work we wanted to challenge it, so I took a picture of some university-level math exercises, uploaded the picture, and asked it to solve one of them. You can feed it text or pictures - it's easy to set up the context of what you want to discuss. Obviously if you want to discuss page 102 in 'Irreducible Mind' it can't do that (you can only provide a limited context for discussion, not a whole book).
(This post was last modified: 2023-11-26, 09:09 PM by sbu. Edited 2 times in total.)
(2023-11-26, 05:52 PM)Sciborg_S_Patel Wrote: It's worth looking up the varied human written implementations of the algorithm...kinda shows how all this machine "learning" is stealing from something humans have produced over & over...

Yes, the problem is that real programmers face problems in complicated contexts, which makes it hard to tell ChatGPT (or whatever) exactly what it needs to know. At the very least it is better to start with a program that does something more specific - such as a program that takes a string and checks whether it is well formed, in the sense that quotes and brackets are paired properly, bearing in mind that brackets inside quotes don't count, and possibly that quotes come in two flavours (' and "). That is not too complicated, but it still has room for some bugs to appear!

David
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
(2023-11-26, 09:10 PM)David001 Wrote: Yes, the problem is that real programmers face problems in complicated contexts, which makes it hard to tell ChatGPT (or whatever) exactly what it needs to know. At the very least it is better to start with a program that does something more specific - such as a program that takes a string and checks whether it is well formed, in the sense that quotes and brackets are paired properly, bearing in mind that brackets inside quotes don't count, and possibly that quotes come in two flavours (' and "). That is not too complicated, but it still has room for some bugs to appear!

David

I think Chat GPT can probably find the solution to the question of code lines being well formed in the terms you state, largely because humans have discussed such problems over & over online.

However, at work I use a functional programming language which doesn't have as much exposure online... which is why it wasn't surprising when Chat GPT gave us advice that was pure fiction: it told us to seek out and install additional programming libraries that don't exist...

This isn't to say Chat GPT is useless, just reports of its brilliance are exaggerated...


(This post was last modified: 2023-11-26, 09:15 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 2 users Like Sciborg_S_Patel's post:
  • Typoz, David001
(2023-11-26, 09:14 PM)Sciborg_S_Patel Wrote: I think Chat GPT can probably find the solution to the question of code lines being well formed in the terms you state, largely because humans have discussed such problems over & over online.

However, at work I use a functional programming language which doesn't have as much exposure online... which is why it wasn't surprising when Chat GPT gave us advice that was pure fiction: it told us to seek out and install additional programming libraries that don't exist...

This isn't to say Chat GPT is useless, just reports of its brilliance are exaggerated...

Certainly. Ask me a question in French and I'm also out of my depth.
(2023-11-26, 09:26 PM)sbu Wrote: Certainly. Ask me a question in French and I'm also out of my depth.

I think there's a difference between a human being out of their depth on a subject versus machine "learning" not producing correct answers because there weren't enough examples of human labor to steal from using web scraping...


(2023-11-26, 09:31 PM)Sciborg_S_Patel Wrote: I think there's a difference between a human being out of their depth on a subject versus machine "learning" not producing correct answers because there weren't enough examples of human labor to steal from using web scraping...

As I wrote in a previous post, this tech has evolved since 2017 - there's absolutely no reason to believe it is at its peak. The human brain had millions of years to evolve. The similarities in "learning" are striking; who knows where this will end.
(2023-11-26, 09:14 PM)Sciborg_S_Patel Wrote: I think Chat GPT can probably find the solution to the question of code lines being well formed in the terms you state, largely because humans have discussed such problems over & over online.

However, at work I use a functional programming language which doesn't have as much exposure online...which is why it wasn't surprising when we saw Chat GPT give us advice that was mere fiction where it told us to seek out and install additional programming libraries that don't exist...

This isn't to say Chat GPT is useless, just reports of its brilliance are exaggerated...

That strengthens my view that ChatGPT can take a large database of text and use it to answer questions - rather like a comprehension test.

That is obviously no mean feat, but the implied hype isn't helpful. Back in the early days of AI, Terry Winograd wrote a program (SHRDLU) that worked in "Blocks World". It simulated a set of toy bricks and took English commands to move the blocks about. It would only move blocks that were accessible (i.e. nothing on top of them) to surfaces that were also accessible.

This program made a big impression, and obviously people imagined all sorts of ways in which it could be extended. In the end Winograd abandoned his own project, writing that it was a dead end.

I suspect that modern AI will go the same way, except that because it can do something useful, it will stick around as a better way to access the internet.

When I saw your example was in C++, I thought WOW - if ChatGPT can debug an arbitrary C++ program that would be really something!

C, on which C++ is based, is almost at the assembler level, in that the mapping between assembler and C is easy to imagine. Other languages, like functional/logic programming languages, sacrifice a lot of the speed and flexibility of C and similar languages such as Fortran to create a sort of toy computer that can fool people. For example, they may fool them into thinking that AI (at least in its present form) is up to debugging computer code, when in fact it might only be able to debug toy computer code.

AI would be so much more interesting if its capabilities were described in a more modest way.

David
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
(2023-11-27, 07:28 AM)sbu Wrote: As I wrote in a previous post, this tech has evolved since 2017 - there's absolutely no reason to believe it is at its peak. The human brain had millions of years to evolve. The similarities in "learning" are striking; who knows where this will end.

I don't think we can tell how close we are to the peak.

People thought we'd have driverless cars by now, but obviously that hasn't happened.


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • David001


