More thoughts on the simulation hypothesis


An MIT computer scientist, Rizwan Virk, has a new book out, The Simulation Hypothesis, making the case that we do not live in a “base reality” (a real world) but, more likely, in a computer simulation. Concerning the probability, he thinks "it’s somewhere between 50 and 100 percent. I think it’s more likely that we’re in simulation than not.”

There is an interesting interview with the author of the new book, at https://www.digitaltrends.com/cool-tech/...ypothesis/. Virk has apparently worked out the implications of the simulation hypothesis better than most thinkers on the subject.

He points out that the concept comes in (at least) two forms:

Quote:"The basic idea is that everything we see around us, including the Earth and the universe, is part of a very sophisticated MMORPG (a massively multiplayer online roleplaying game) and that we are players in this game. The hypothesis itself comes in different forms: in one version, we’re all A.I. within a simulation that’s running on somebody else’s computer. In another version, we are “player characters,” conscious things that exist outside the simulation and we inhabit characters, just like you might take on the character of an elf or dwarf in a fantasy RPG."

The first version, in which we are all AI "beings" generated entirely within the simulation, runs into serious theoretical problems, chiefly the unlikelihood that computation within a simulation could generate consciousness at all. As Michael Egnor pointed out,

Quote:"...meaning is precisely what computation lacks. The most fundamental human power — the power of thought to have meaning — is just what a computer simulation cannot do.
Computation is syntax, whereas thought is semantics. If we were living in a computer simulation, and our mind were computation, the one thing we couldn’t do is think.
We couldn’t ask the question “Are we living in a computer simulation?” if we were living in a computer simulation. The irony here is that, of all the possible fundamental truths of reality, the notion that we are living in such a simulation is the one we can rule out simply because it’s self-refuting.
If we are living (entirely as part of) a computer simulation, we couldn’t think to ask the question." 
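
To make the syntax/semantics contrast concrete, here is a purely illustrative sketch of my own (not Egnor's): a rewrite system that follows its rules in exactly the same way whether or not anyone assigns its symbols a meaning.

Code:
# Purely syntactic symbol manipulation: the rules map symbols to other symbols,
# and nothing in the program assigns those symbols any meaning.
RULES = {"A": "BC", "B": "A", "C": ""}  # arbitrary, uninterpreted symbols

def rewrite(s, steps):
    """Apply the rewrite rules to every symbol, for a fixed number of passes."""
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# Relabel "A" as "apple" or as noise: the behaviour is identical either way.
print(rewrite("AAB", 3))  # -> "BCBCA"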

In the other version, we are conscious beings (biological or otherwise) existing outside the simulation, and each of us controls a character. This is the world envisioned in the Matrix movie series. This seems to me by far the more likely scenario.

Virk has thought about some of the consequences this possibility would have for our lives. From the interview:

Quote:"If we are inside a video game that was set up say like in Fortnite, we would want to know what the goals of the game are and what our individual quests might be. One section of the book delves into Eastern mystical traditions, including Buddhism and karma, and Western religious traditions. It discusses that we might have scores that are being kept and desires being recorded in the same way we do in esports.

We all want to know what our individual quests are and what our achievements are. There are things each of us individually feels called to do, whether it’s to be a writer or a video game designer.

I don’t necessarily think we’re in a simulation that has one purpose, such as to see if we can handle climate change. Instead, just like in any multiplayer video game, every character has their own individual set of quests and the freedom of choice to decide what to do next.

(Interviewer): So, in that view, each of us may be a social experiment in its own right?

That’s right, and especially if each of us is a player character, which means there’s a part of us that is outside the game. We might have certain goals or experiences that we’re here to try to fulfill. As a video game designer, we think about what kinds of paths people can follow. We might give the illusion of free will or we might lay out a specific character."
(2019-04-17, 10:15 PM)nbtruthman Wrote: The first version, in which we are all AI "beings" generated entirely within the simulation, runs into serious theoretical problems, chiefly the unlikelihood that computation within a simulation could generate consciousness at all. As Michael Egnor pointed out,
All of this is about how symbols are grounded in the real-world meanings they are linked to. The simulation fantasy is one way to imagine information processing; I think there are more viable models, ones that enable deeper levels of analysis.

Have to give a shout-out to Franz:
Quote: Now, the next question. What is the mind? What is the human ability by which we ask the question “Am I living in a computer simulation?” What is it about a thought that distinguishes a thought from other things, like physical objects? Nineteenth-century German philosopher Franz Brentano gave an answer that seems decisive: Thoughts are always about something, whereas physical objects are never (intrinsically) about anything. He called this aboutness of thoughts “intentionality,” using a word derived from the scholastic philosophers’ theory of mind that dates back to Aristotle.
The notion that a computer can't process meaning, only syntax, is outdated in my opinion. The first computers processed linearly and sequentially so they are the "left brain" and they cannot process meaning. Meaning is something that arises out of the non-linear networked type of processing. We are working on the "right brain" for AI right now and making a lot of advances. It seems more and more clear to me that the non-linear networked type of processing - or something analogous to it - is behind the manifestation of physical reality.

The simulation hypothesis provides a way for modern mainstream thinkers to accept a kind of dualism between mind and body although it still only pushes the ultimate ontological questions back one layer. Nevertheless, I think it is a valid metaphor with a lot of explanatory power.
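
As a minimal sketch of what I mean by non-linear networked processing (just a toy example of my own, and whether this kind of distributed processing ever amounts to "meaning" is of course the open question): a tiny two-layer network learns XOR, a mapping that no purely linear, single-pass model can represent.

Code:
# Toy illustration of non-linear networked processing: a two-layer network
# trained by plain gradient descent to compute XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer: the "network"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                    # non-linear hidden activations
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # backpropagated error terms
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))                         # approaches [[0], [1], [1], [0]]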
(2019-04-18, 03:18 PM)Hurmanetar Wrote: The notion that a computer can't process meaning, only syntax, is outdated in my opinion. The first computers processed linearly and sequentially so they are the "left brain" and they cannot process meaning. Meaning is something that arises out of the non-linear networked type of processing. We are working on the "right brain" for AI right now and making a lot of advances. It seems more and more clear to me that the non-linear networked type of processing - or something analogous to it - is behind the manifestation of physical reality.

The simulation hypothesis provides a way for modern mainstream thinkers to accept a kind of dualism between mind and body although it still only pushes the ultimate ontological questions back one layer. Nevertheless, I think it is a valid metaphor with a lot of explanatory power.

It seems to me that the "aboutness" or meaning issue still applies no matter what type of single-processor or multiple-processor networked computation is involved. Data processing is still data processing. At base looking at it in detail it is still just adding and subtracting and logical processing and reprocessing of bits in a string according to some algorithm, with no inherent meaning. Meaning is a basic attribute of consciousness. Closely related to this is the famous "Hard Problem" of qualia.
(2019-04-18, 04:00 PM)nbtruthman Wrote: It seems to me that the "aboutness" or meaning issue still applies no matter what type of single-processor or multi-processor networked computation is involved. Data processing is still data processing. At base, examined in detail, it is still just adding, subtracting, and logically processing and reprocessing bits in a string according to some algorithm, with no inherent meaning. Meaning is a basic attribute of consciousness. Closely related to this is the famous "Hard Problem" of qualia.

It is difficult to put this into words so I made a sketch...

[Image: IMG-0730.jpg]

Basically... there is oneness and in order for there to be anything interesting this oneness begins fractal splitting... And the very first split is internal/external or subject/object...or another way to put it... the very first split means that everything that is not an experience is a computer.

There is linear processing which is useful for certain things, but you're right there is no symbol grounding in qualia. But the linear model is really just a segment of a circular highly networked parallel processing. In fact every node is networked to every other node so all nodes can be defined as Oneness. The networks of information transformation extending out from this single node are both generative and perceptive.
(2019-04-18, 06:14 PM)Hurmanetar Wrote: It is difficult to put this into words so I made a sketch...

[Image: IMG-0730.jpg]

Basically... there is oneness and in order for there to be anything interesting this oneness begins fractal splitting... And the very first split is internal/external or subject/object...or another way to put it... the very first split means that everything that is not an experience is a computer.

There is linear processing which is useful for certain things, but you're right there is no symbol grounding in qualia. But the linear model is really just a segment of a circular highly networked parallel processing. In fact every node is networked to every other node so all nodes can be defined as Oneness. The networks of information transformation extending out from this single node are both generative and perceptive.
Love the process flow!

I would not be on board with - "everything that is not an experience is a computer".

At the top, when you say "objects", do you mean physical objects or would informational objects like math entities and logic gates be allowed?
(2019-04-18, 07:31 PM)stephenw Wrote: Love the process flow!

Thanks :)

Quote:I would not be on board with - "everything that is not an experience is a computer".

Okay... I understand that this is quite the leap!

Quote:At the top, when you say "objects", do you mean physical objects or would informational objects like math entities and logic gates be allowed?

I meant any structure (which is itself a network of connections, boundaries, and spaces) with a low enough rate of change to be assigned a useful boundary and label. There is the instantiation of the object, which is just as transient as a moment of experience and is in actuality the flip side of the qualia; that is why I have it attached to the lightning bolt, which is correlated to the moment of qualia on the inside. Many similar objects (similarity as defined by the neural network) can arbitrarily be considered the same object if it is useful to do so. So the mug on my desk exists as an instantiation of a physical object at the moment I perceive it, and it is similar enough to the mug that was there 5 seconds ago that I can consider it to be the same mug. The low rate of change means the physical object of the mug on my desk exists as a kind of memory storing information about my experience of the mug, and I can access this information again later by looking at or touching it again. We can consider this low-rate-of-change property of an object to carry information forward in time.

But there is another way information is stored in the universe (I believe), because nothing ever really goes away; we only move away from it. It still exists in the same time and place in which it was instantiated, and everything to which it bears any degree of similarity is still connected to it. It is just that as we move in time away from that particular place, the number of connections thins out as similarity across all dimensions decreases.
(2019-04-18, 03:18 PM)Hurmanetar Wrote: The notion that a computer can't process meaning, only syntax, is outdated in my opinion. The first computers processed linearly and sequentially so they are the "left brain" and they cannot process meaning. Meaning is something that arises out of the non-linear networked type of processing. We are working on the "right brain" for AI right now and making a lot of advances. It seems more and more clear to me that the non-linear networked type of processing - or something analogous to it - is behind the manifestation of physical reality.

The simulation hypothesis provides a way for modern mainstream thinkers to accept a kind of dualism between mind and body although it still only pushes the ultimate ontological questions back one layer. Nevertheless, I think it is a valid metaphor with a lot of explanatory power.

From https://aeon.co/essays/your-brain-probab...that-means:

Quote:"Many have thought that computation must involve meaningful representation or information. Far from saying that we are meaningless machines, the claim that our brains are computers would require that our mental lives are rich with meaningful information. Yet if computation requires giving meaningful answers to meaningful problems then, in order to say what it would be for the brain to be a computer, one must also say what it would be for activity in the brain to be meaningful. One difficult question begging another. There are a number of dissenting and intermediate positions about whether computations must be meaningful."

One expert, Gualtiero Piccinini, weighs in on whether computation has meaning, from http://philosophyofbrains.com/2015/08/11...ation.aspx. He answers in the negative, and much prefers the mechanistic view of computation:

Quote:"Most of the philosophers who discuss computation are interested in computation because they are interested in the computational theory of cognition. Cognitive systems are typically assumed to represent things, and computation is supposed to help explain how they represent. So many philosophers conclude that computation is the manipulation of representations. Or perhaps computation is a specific kind of manipulation of a specific kind of representation. (People disagree about which representations and manipulations are needed for computation.)

This semantic view of computation, popular as it may be, doesn’t hold up under scrutiny. The main problem is that just about any paradigmatic example of computing system can be defined without positing representations. A digital computer can be programmed to alphabetize meaningless strings of letters. A Turing machine can be defined so as to manipulate meaningless symbols in meaningless ways. And so on.

The traditional alternative to the semantic view is the mapping view, according to which all there is to physical computation is a mapping between a physical and a computational description of a system. According to the mapping view, a physical system computes just in case there is a computational description that maps onto it. The main problem with the mapping view is that it leads to pancomputationalism–the view that everything computes.
[...]
The mechanistic account of computation attempts to give an adequate account of physical computation, which does justice to the practices of the computational sciences, without requiring that computations manipulate representations and without falling into pancomputationalism."   
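
Piccinini's point can be made concrete with a toy example of my own (not from his post): a Turing machine defined entirely over uninterpreted symbols. Nothing in its definition assigns the symbols any meaning, yet it is a perfectly good computation.

Code:
# A toy Turing machine over meaningless symbols "x" and "y".
# (state, symbol read) -> (symbol to write, head move, next state)
TABLE = {
    ("q0", "x"): ("y", 1, "q0"),
    ("q0", "y"): ("x", 1, "q0"),
    ("q0", "_"): ("_", 0, "halt"),
}

def run(tape):
    state, head = "q0", 0
    tape = list(tape) + ["_"]                 # blank-terminated tape
    while state != "halt":
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape)

print(run("xxyx"))  # -> "yyxy_"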
I think this question of a simulation gets associated too often with computationalist theories of mind.

A structural representation of the relevant aspects could be enough, in a metaphysically neutral sense, to include both us and the environment in a simulation.

An example of this would be Orch OR, where Hameroff has suggested that lattice structure could be utilized to preserve the resonances needed for consciousness. Of course, more evidence would be needed to decide one way or another whether that's sufficient, but it does seem possible to me that some other structure could suffice even if Turing machines could not.

[For example in Kastrup's Idealism the structure of the brain represents an alter from the One, and so it seems possible to emulate that structure.]
(2019-04-23, 01:22 PM)Sciborg_S_Patel Wrote: I think this question of a simulation gets associated too often with computationalist theories of mind.

A structural representation of the relevant aspects could be enough, in a metaphysically neutral sense, to include both us and the environment in a simulation.

An example of this would be Orch OR, where Hameroff has suggested that lattice structure could be utilized to preserve the resonances needed for consciousness. Of course, more evidence would be needed to decide one way or another whether that's sufficient, but it does seem possible to me that some other structure could suffice even if Turing machines could not.

[For example in Kastrup's Idealism the structure of the brain represents an alter from the One, and so it seems possible to emulate that structure.]

The notion that unique "relevant structural aspects" of the brain, including "lattice structure to preserve needed resonances", generate human consciousness clearly implies that mind = physical brain in some sophisticated sense, where a very special sort of biological data-processing mechanism is needed for consciousness to be generated. This is untenable, mainly because there is a large body of empirical paranormal evidence to the contrary, especially veridical NDEs and reincarnation cases. There are also theoretical difficulties, such as the "Hard Problem".

Of course, the Hameroff/Penrose Orch OR theory could be right in a dualist sense: that preexisting consciousness needs that precise kind of mechanism to manifest in the physical. Also, interactional dualist concepts assume that the soul still requires some sort of "spirit body" to manifest itself even in higher spiritual realms of existence. 

I take your point to be that, if this dualist view is correct, then perhaps preexisting human consciousness could manifest in the physical through some other very special sort of data-processing mechanism, one designed to host the massively multiplayer world simulation.