AI megathread

195 Replies, 7749 Views

(2023-11-24, 05:19 PM)nbtruthman Wrote: I also believe that computer consciousness is impossible, but new developments in LLM AI seem to be making a few dents in it. Apparently the developers of ChatGPT4 are getting closer to demonstrating general artificial computer intelligence, which would be at the very least much closer to perfectly mimicking a conscious entity. A perfect mimic would be impossible to distinguish by behavior from a real human being (except perhaps for paranormal phenomena), then making it at least theoretically able to manifest dangerous all too human antisocial and destructive actions.

But to even consider computational consciousness, in the sense that the same Turing Machine running Windows and some video games is non-conscious but it achieves consciousness when you run some Holy Grail program...it would be some unknown law of nature that maps a particular structure to consciousness that is only conscious when in very particular states.

Yet the registers and so on haven't changed, only their movements. This is like saying magic spells are possible with your vocal cords and wiggling fingers and nothing else. Which is "nonsense" in the philosophical sense of an effect being produced by a cause even though outside of the "magic" we've no reason to believe vocal cords and fingers have that kind of power.

Now *maybe* it's like the novel @David001 mentioned, and the structural change is sufficient due to some kind of underlying Platonism...but I feel like this is a solution that would create more problems like whether an abacus is conscious or the Tinker Toy computer can become conscious without any electrical current.

Given the odd cult-like behavior at Open AI - such as burning an evil AI idol - I suspect all that's happened is an irrational leap of materialist faith rather than any real AGI threat...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-11-24, 06:45 PM by Sciborg_S_Patel. Edited 2 times in total.)
[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Typoz
(2023-11-24, 12:08 PM)Typoz Wrote: Consciousness and AI are of different types, there is no formula to transform one into the other.


In my view it breaks down very early on by (incorrectly) postulating that
ai = c
where ai is artificial intelligence and c is consciousness. The rest may be very interesting but it is fantasy rather than fiction I think.

I have thought for a long time that it is best to remain flexible enough to try to imagine the opposite POV - that we live in a materialist universe.

That means I don't want to discard the idea that AI=C just because I don't want it to be true. I'd say current AI is at least an interesting challenge to my POV.

Fascinating though it is, I suspect AI simply profits on the mountain of stuff that exists on the internet. I suspect it will become the next search engine.

David
[-] The following 2 users Like David001's post:
  • Sciborg_S_Patel, Typoz
Current AI is very complex but built from very simple components. Essentially an on/off switch controlled by a current or voltage supplied by another similar on/off switch. All of this can be done using pen and paper - the word computer used to be a job title, a person whose job it was to carry out computations, or an office clerk if you like. Because we can understand there is nothing special about an on/off switch, there doesn't seem any reason to suppose that piling a heap of them together in a complicated arrangement would do anything that can't, at least in principle, be done by someone sitting down with a pen and paper and labouring away over their task.

In my opinion, looking for consciousness among all this is simply a folly. Indeed, the sort of work done by the clerk could be described as soul-destroying, it is the very opposite of generating consciousness.

What is necessary is to turn our attention to consciousness itself - understand its nature and its origin. Of course the latter is what people have been doing throughout all of history and before recorded history began, the same study and search was going on too. Until we can determine what consciousness is, even to define it, I see little prospect of manufacturing it.

I'd say what technology is doing is generating an illusion. We may mimic the appearance of something, but that doesn't mean it is the thing. Wax fruit or even a painting of fruit may appear deceptively life-like, but we cannot eat it. If we want something we can eat, then it needs to be made of the same substances that actual edible food is made of. Likewise, if we want to manufacture consciousness, it needs to be made of the same substance that actual consciousness is made of. But what is it?
[-] The following 2 users Like Typoz's post:
  • Sciborg_S_Patel, David001
(2023-11-25, 10:36 AM)David001 Wrote: Fascinating though it is, I suspect AI simply profits on the mountain of stuff that exists on the internet. I suspect it will become the next search engine.

This is something that has been considered by others too. Online content, or a selection of it, was used to train the AI. But new online content may be directly or indirectly generated by AI - indirectly if for example someone queries an AI system and shares the result without any clear attribution or identification. That means in future AI may be fed content which was itself generated by AI and not necessarily validated or checked by a reliable expert. Thus soon AI may be re-digesting its own previous output, with unknown consequences, though probably increasing the noise content, degrading the quality.
(2023-11-25, 11:45 AM)Typoz Wrote: I'd say what technology is doing is generating an illusion. We may mimic the appearance of something, but that doesn't mean it is the thing. Wax fruit or even a painting of fruit may appear deceptively life-like, but we cannot eat it. If we want something we can eat, then it needs to be made of the same substances that actual edible food is made of. Likewise, if we want to manufacture consciousness, it needs to be made of the same substance that actual consciousness is made of. But what is it?

I just hope not too many people are fooled - particularly the technologists!

I think it is here to stay, at least as a search mechanism that doesn't need to be bang up-to-date. It is easy to get ChatGPT-3 to tell you that it is built from a snapshot taken in early 2022.

David
[-] The following 2 users Like David001's post:
  • Sciborg_S_Patel, Typoz
chatGPT is much more than a search engine. It can write code and find bugs in your code. It can be directly integrated into modern software IDEs.
[-] The following 2 users Like sbu's post:
  • David001, Typoz
(2023-11-24, 06:43 PM)Sciborg_S_Patel Wrote: But to even consider computational consciousness, in the sense that the same Turing Machine running Windows and some video games is non-conscious but it achieves consciousness when you run some Holy Grail program...it would be some unknown law of nature that maps a particular structure to consciousness that is only conscious when in very particular states.

Yet the registers and so on haven't changed, only their movements. This is like saying magic spells are possible with your vocal cords and wiggling fingers and nothing else. Which is "nonsense" in the philosophical sense of an effect being produced by a cause even though outside of the "magic" we've no reason to believe vocal cords and fingers have that kind of power.

Now *maybe* it's like the novel @David001 mentioned, and the structural change is sufficient due to some kind of underlying Platonism...but I feel like this is a solution that would create more problems like whether an abacus is conscious or the Tinker Toy computer can become conscious without any electrical current.

Given the odd cult-like behavior at Open AI - such as burning an evil AI idol - I suspect all that's happened is an irrational leap of materialist faith rather than any real AGI threat...

My point was that if an LLM AI like ChatGPT5 achieves the level of Artificial General Intelligence, it would seem that it would nearly perfectly mimic human beings responding to the same questions. These questions could concern crucial matters of war and peace and cruelty vs. helping. Depending on the human, his human answers might be deeply antisocial and destructive. Isn't it possible that this hypothetical ChatGPT5 with AGI would mimic the bad human being rather than the good human being in its answers, which then might disastrously be used by policy makers?

I guess this would depend on how closely the AGI AI mimicked the complex human arrangement of psychological personality characteristics, a lot of which are negative in nature. Could an advanced AGI take on the collective good and also bad personality characteristics of its creators, or much worse, of the collective billions of human beings that have furnished written material in the Internet?

Anyway, it seems to me that the danger of this happening strikes me as very significant, enough to prudently erect controls and barriers for the research. Not that (in the current business climate) that is likely to happen.
(This post was last modified: 2023-11-25, 05:43 PM by nbtruthman. Edited 1 time in total.)
[-] The following 1 user Likes nbtruthman's post:
  • Sciborg_S_Patel
(2023-11-25, 04:30 PM)sbu Wrote: chatGPT is much more than a search engine. It can write code and find bugs in your code. It can be directly integrated into modern software IDEs.

Well can you give us some examples of what you mean? Integrating something into an IDE may be convenient, but it isn't of fundamental significance.

I'm extremely cautious about accepting statements from large corporations - sometimes that 1/2 true, sometimes even less.

David
[-] The following 1 user Likes David001's post:
  • Sciborg_S_Patel
(2023-11-25, 06:33 PM)David001 Wrote: Well can you give us some examples of what you mean? Integrating something into an IDE may be convenient, but it isn't of fundamental significance.

I'm extremely cautious about accepting statements from large corporations - sometimes that 1/2 true, sometimes even less.

David

Sure, let me give you an example made with chatGPT4 using it's browser interface.
First I googled some random code. I searched for Dijkstra's algorithm as I wanted something unit test friendly. This link came up:
Dijkstra's Algorithm in C++ | Shortest Path Algorithm | FavTutor

I then took the code from this link and instructed chatGPT as following:

sbu Wrote:Please write unit tests for this piece of code “include
#include
using namespace std;

int miniDist(int distance[], bool Tset[]) // finding minimum distance
{
int minimum=INT_MAX,ind;

for(int k=0;k<6;k++)
{
if(Tset[k]==false && distance[k]<=minimum)
{
minimum=distance[k];
ind=k;
}
}
return ind;
}

void DijkstraAlgo(int graph[6][6],int src) // adjacency matrix
{
int distance[6]; // // array to calculate the minimum distance for each node
bool Tset[6];// boolean array to mark visited and unvisited for each node


for(int k = 0; k<6; k++)
{
distance[k] = INT_MAX;
Tset[k] = false;
}

distance[src] = 0; // Source vertex distance is set 0

for(int k = 0; k<6; k++)
{
int m=miniDist(distance,Tset);
Tset[m]=true;
for(int k = 0; k<6; k++)
{
// updating the distance of neighbouring vertex
if(!Tset[k] && graph[m][k] && distance[m]!=INT_MAX && distance[m]+graph[m][k]<distance[k])
distance[k]=distance[m]+graph[m][k];
}
}
cout<<"Vertex\t\tDistance from source vertex"<<endl;
for(int k = 0; k<6; k++)
{
char str=65+k;
cout<<str<<"\t\t\t"<<distance[k]<<endl;
}
}”

chatGPT Wrote:To write unit tests for the provided code, which implements Dijkstra's algorithm, you need to focus on testing both the `miniDist` function and the `DijkstraAlgo` function. Here's an approach to create these tests:

1. **Test `miniDist` Function:**
- Test with a distance array where all elements are `INT_MAX`.
- Test with a distance array having a mix of `INT_MAX` and other values.
- Test with an all-false `Tset` array and then with some elements set to true.
- Test boundary conditions, like an empty distance array.

2. **Test `DijkstraAlgo` Function:**
- Test with a simple graph, like a linear graph (straight line).
- Test with a graph that includes cycles.
- Test with a graph that has no edges (all distances in adjacency matrix are `INT_MAX`).
- Test with negative edge weights (if your implementation is supposed to handle them).
- Test with a graph of a single node.
- Test with `src` (source vertex) as each possible vertex in the graph.
- Test the output to ensure the distances and paths are correctly calculated.

Here's an example of how you might structure a test for the `miniDist` function:

```cpp
#include <cassert>
#include <climits>

void testMiniDist() {
// Test with all distances INT_MAX
int distance1[] = {INT_MAX, INT_MAX, INT_MAX, INT_MAX, INT_MAX, INT_MAX};
bool Tset1[] = {false, false, false, false, false, false};
assert(miniDist(distance1, Tset1) == 0); // Expected index can be any, as all are INT_MAX

// Test with a mix of values
int distance2[] = {3, 2, 5, 1, INT_MAX, INT_MAX};
bool Tset2[] = {false, false, false, false, false, false};
assert(miniDist(distance2, Tset2) == 3); // Should return index 3 as it has min distance 1

// Add more tests as needed
}

int main() {
testMiniDist();
// Add more tests for DijkstraAlgo
cout << "All tests passed!" << endl;
return 0;
}
```

For `DijkstraAlgo`, you would similarly set up a series of graph matrices and source nodes, then check if the distances calculated are as expected. Remember to include assert statements to automatically verify the outcomes against expected results.

sbu Wrote:Please write the full code for all relevant cases with asserts

sbu Wrote:Sure, I can provide a more comprehensive suite of unit tests for the `miniDist` and `DijkstraAlgo` functions. The tests will cover various scenarios to ensure the correctness of the implementation. Here is the complete test code:

```cpp
#include <iostream>
#include <climits>
#include <cassert>

using namespace std;

int miniDist(int distance[], bool Tset[]) {
int minimum=INT_MAX,ind=-1;

for(int k=0;k<6;k++) {
if(Tset[k]==false && distance[k]<=minimum) {
minimum=distance[k];
ind=k;
}
}
return ind;
}

void DijkstraAlgo(int graph[6][6], int src, int* result) {
int distance[6];
bool Tset[6];

for(int k = 0; k<6; k++) {
distance[k] = INT_MAX;
Tset[k] = false;
}

distance[src] = 0;

for(int k = 0; k<6; k++) {
int m = miniDist(distance, Tset);
Tset[m] = true;
for(int k = 0; k<6; k++) {
if(!Tset[k] && graph[m][k] && distance[m] != INT_MAX && distance[m]+graph[m][k] < distance[k]) {
distance[k] = distance[m] + graph[m][k];
}
}
}

for(int k = 0; k<6; k++) {
result[k] = distance[k];
}
}

void testMiniDist() {
int distance1[] = {INT_MAX, INT_MAX, INT_MAX, INT_MAX, INT_MAX, INT_MAX};
bool Tset1[] = {false, false, false, false, false, false};
assert(miniDist(distance1, Tset1) == -1); // Should return -1 as all distances are INT_MAX

int distance2[] = {3, 2, 5, 1, INT_MAX, INT_MAX};
bool Tset2[] = {false, false, false, false, false, false};
assert(miniDist(distance2, Tset2) == 3); // Should return index 3 as it has the minimum distance of 1
}

void testDijkstraAlgo() {
int graph[6][6] = {
{0, 1, 4, 0, 0, 0},
{1, 0, 4, 2, 7, 0},
{4, 4, 0, 3, 5, 0},
{0, 2, 3, 0, 4, 6},
{0, 7, 5, 4, 0, 7},
{0, 0, 0, 6, 7, 0}
};
int result[6];
DijkstraAlgo(graph, 0, result);
int expected[] = {0, 1, 4, 3, 7, 9};
for (int i = 0; i < 6; i++) {
assert(result[i] == expected[i]);
}

// Add more tests with different graphs and source nodes
}

int main() {
testMiniDist();
testDijkstraAlgo();
cout << "All unit tests passed!" << endl;
return 0;
}
```

### Explanation:

1. **`miniDist` Tests:**
- The first test checks the behavior when all distances are `INT_MAX`.
- The second test uses a mix of values to ensure it picks the index with the smallest distance.

2. **`DijkstraAlgo` Tests:**
- A specific graph is tested with a known correct result.
- The distances calculated by `DijkstraAlgo` are compared against the expected output.

These tests can be expanded with more cases, including edge cases and different graph configurations, to ensure robust testing of the Dijkstra algorithm implementation.
I hope the example above gives an idea of how it works (for example it was able to analyze the code I copied in). It cannot (yet) autonomously write large pieces of software but you can iteratively instruct it to refine it's work until you get the output you want. The IDE integrated versions are not as limited as the browser version, the browser version only allows it to write limited pieces before it caps like in this example.
(This post was last modified: 2023-11-26, 10:18 AM by sbu. Edited 2 times in total.)
[-] The following 1 user Likes sbu's post:
  • David001

  • View a Printable Version
Forum Jump:


Users browsing this thread: 2 Guest(s)