Reading over these ideas of Whitehead:
Quote:Whitehead thus sees God and the world as fulfilling one another. He sees entities in the world as fluent and changing things that yearn for a permanence which only God can provide by taking them into God's self, thereafter changing God and affecting the rest of the universe throughout time. On the other hand, he sees God as permanent but as deficient in actuality and change: alone, God is merely eternally unrealized possibilities, and requires the world to actualize them. God gives creatures permanence, while the creatures give God actuality and change.
It made me think of Marcus Arvan's Peer-to-Peer Hypothesis:
Quote:It's a curious fact that it seems to many of us that no matter how complete a physical explanation might be, such an explanation could never possibly account for consciousness (i.e. the "soul"). The P2P hypothesis predicts and explains this problem. Observers trapped in a P2P simulation would be convinced--just as many of us are--that there is something about their subjective point-of-view that cannot be captured in the physics of their world. And they would be right. The hardware upon which the simulation is running--the processing apparatus (viz. DVD laser apparatus/processor)--would comprise their subjective point-of-view, and be inaccessible to them within the simulation. More generally, the P2P model holds that a reality like ours is comprised by two fundamentally different types of things: (A) "hardware" (i.e. consciousness/measurement apparatus), and (B) "software" (i.e. physical information) interacting.
Quote:There are broadly two theories of time in philosophy, the "A-theory" which says that time passes (viz. a "moving spotlight"), and the "B-theory" which says that time is nothing more than an ordered series of events (viz. time just is some events ordered before/after others). Both theories seem to face problems. A-theories seem hopelessly mysterious. B-theories seem to face problems making sense of change (i.e. if an ordered series of events is all that time is, how does time pass?). The P2P Hypothesis provides a new answer: one that synthesizes both positions via a kind of mechanism/model that we already understand. When I go to play back a CD, the CD is an ordered series of information, and that information is experienced in real-time moving forward only insofar as a distinct observation-mechanism (the CD-player's processor) reads the information. This suggests that in order to make sense of time (i.e. its being ordered and moving), we need a dualist theory--and the P2P Hypothesis gives us a concrete example of how such a dualist theory works.
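Arvan's CD analogy can be put in a few lines of code. This is just my own minimal sketch, not anything from his paper - the names (`events`, `play`) are hypothetical. The point is that the ordered series (B-theory) and the reading mechanism (A-theory's "moving spotlight") are two distinct things:

```python
# The "disc": a fixed, ordered series of events. On its own this is
# pure B-theory data -- ordering, but no passage of time.
events = ["frame_0", "frame_1", "frame_2", "frame_3"]

def play(disc):
    """The 'reader': a mechanism distinct from the data that steps
    through the static series one moment at a time. The apparent
    'flow' of time lives here, not in the data itself."""
    experienced = []
    for position, event in enumerate(disc):  # the moving spotlight
        experienced.append((position, event))
    return experienced

print(play(events))
```

Nothing in `events` changes during playback; the "change" is entirely a feature of the reader's traversal, which is the dualism Arvan is pointing at.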
So God's Body is the Res Potentia Kauffman talks about in his quantum consciousness theory, the Sea of Possibility. Put another way, it is the set of potential paths we users can take in the Simulation. Whitehead presents this theologically - God gaining meaning from his creations' choices of paths. Arvan gives us an explanation of how particular possibilities are realized while also explaining superposition:
Quote:
- The location of any "object" within the simulation is a computational superposition, i.e. an object represented at position A on computer A, position B on computer B, position C on computer C, etc. will be coded, at the level of the whole simulation, as being simultaneously in positions A, B, C, etc. (superimposed in all of those locations at once).
- "The" location of any object or property in a P2P simulation is therefore also indeterminate, given that each computer on the network has its own representation of where "the" object or property is, and there is no dedicated server on the network to represent where the object or property "really" is (any object or property "really" is represented at many different positions on the network, thanks to slightly different representations on many computers all operating in parallel).
- Any measurement taken by any single measurement device on a P2P network also thereby affects the network as a whole (since what one computer measures will affect what other computers on the network are likely to measure at any given instant), giving rise to a massive measurement problem (one can only measure where an object is on the network by disturbing the entire network, thereby altering where other computers on the network will represent the particle as being).
- Because different machines on the network represent the same object in slightly different positions at any given instant (with some number n of machines representing a given object at position P, some other number n* of machines representing a given object at position P*, etc.) a dynamical description of where a given object/property probably is in the environment will have features of a wave (viz. an amplitude equivalent to the number of computers representing the object at a given instant, and wavelength equivalent to dynamical change of how many computers represent the object at a given point at the next instant).
- By a similar token, any particular measurement on any particular computer will result in the observation of the object as located at a specific point.
- Any particular measurement on any particular computer will result in the appearance of a “collapse” of wave-like dynamics of the simulation into a single, determinate measurement.
- It is also a natural result of a peer-to-peer network that single objects can "split in two", becoming entangled (in a peer-to-peer network multiple computers can, in a manner of speaking, get slightly out of phase, with one or more computers on the network coding for the particle passing through a boundary, while one or more other computers on the network code for the particle bouncing backwards - in which case, if the coding is right, all of the computers on the network will treat the "two" resulting objects as simply later continuants of what was previously a single object).
- All time measurements in a P2P simulation are relative to observers. Each measurement device on a P2P simulation (i.e. game console) has its own internal clock, and there is no universal clock or standard of time that all machines share.
- Because the quantized data comprising the physical information of a P2P simulation will have to be separated/non-continuous much as there are "spaces" between pits of data on a CD/DVD/Blu-Ray disc (see image below), there must be within any such simulation something akin to the Planck length, an absolute minimum length below which measurements of space-time cannot be taken in principle (a feature of our world for which, at present, "there is no proven physical significance").
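The superposition-and-collapse story in the list above can also be sketched in code. Again, this is my own toy illustration of the idea, not Arvan's actual model - the peer values, seed, and function names are all made up for the example:

```python
import random
from collections import Counter

random.seed(0)  # arbitrary seed, for a repeatable illustration

# Each peer independently represents the object's position, with small
# disagreements and no central server holding "the" true position.
peers = [round(10.0 + random.uniform(-0.5, 0.5), 1) for _ in range(100)]

# The wave-like description: the "amplitude" at each position is just
# how many peers currently represent the object there.
amplitude = Counter(peers)

def measure(network, observer_index):
    """A measurement on one peer reads that peer's local value and
    propagates it to every other peer -- disturbing the whole network
    and 'collapsing' the spread into one determinate position."""
    observed = network[observer_index]
    return observed, [observed] * len(network)

observed, peers_after = measure(peers, observer_index=7)

print(len(amplitude))       # many represented positions before measurement
print(set(peers_after))     # a single determinate position afterwards
```

Before measurement the object's position is genuinely indeterminate at the level of the whole network (many positions, each with its own peer-count "amplitude"); after one local measurement the network agrees on a single value, which is the appearance of collapse the quote describes.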
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
(This post was last modified: 2018-04-08, 04:37 PM by Sciborg_S_Patel.)