File talk:Relativistic P = NP Computation.png

From Wikimedia Commons, the free media repository

The P and NP observers work on the same problem, one that requires non-polynomial (enormous) time to calculate, possibly because of an N! term in the computation time. Observer P uses an ordinary algorithm, such as those known today, in which the calculation requires full execution of N! steps. Observer NP can be thought of as having a hypothetical short-cut that computes the same problem in polynomial time, for example scaling as only n^(3/2).

In the reference frame of a ground-based computer, Observer P sees a calculation time of tP. Observer NP leaves Observer P at t0 = 0 with his own computer, travelling at velocity vNP. Define tNP as the time required to calculate the problem in the high-velocity reference frame, i.e., the reference frame of the travelling observer, "NP". In the reference frame of the second observer, "P", the velocity of the ground-based computer is vP = 0. Observers P and NP reunite at tP and tNP, respectively, as the calculations finish in both time frames. As confirmed experimentally with ground-based and high-velocity atomic clocks, tNP < tP, consistent with Einstein's prediction of time dilation; this scenario is commonly referred to as the twin paradox. Introducing the Lorentz factor for relativistic time dilation:

tNP = tP (1 - vNP^2/c^2)^0.5 (Eq. 1)

A similar equation can be found at http://www.hotquanta.com/twinslit.html , which hosts a white paper entitled "Historic Background & The Twin Slit Example" by John K. N. Murphy.

As vNP approaches c, tNP = tP (1 - 1)^0.5 = 0, such that Observer NP sees an instantaneous calculation time. As an example, consider the Travelling Salesman Problem (TSP). Using Eq. 1, we can calculate the velocity required for Observer NP to reduce the computation time of an n-city problem to that of a trivial single operation. At vNP = 0.9999999999 c, vNP^2/c^2 = 0.9999999998, so tNP/tP = (2 x 10^-10)^0.5 = 1.4 x 10^-5, which is in the range of about 8 to 9 cities (1/8! = 2.5 x 10^-5; 1/9! = 2.8 x 10^-6).

While this problem has been cast in the rather unrealistic terms of the twin paradox, any calculation that could be achieved within an interacting and unconventional computation using a particle beam should enjoy the same advantage of time dilation.

In terms of P and NP algorithms applied to the same problem, consider the pseudoinverse of an overdetermined matrix computed by conventional Singular Value Decomposition (SVD) versus the special case of a "scroll matrix", where the former requires almost infinite time to calculate and the latter is quick. The conventional algorithms can be found in Numerical Recipes in C or by using the PseudoInverse function of Mathematica. The quick method, possibly a special case of P versus NP, can be found at: http://youvan.com/Examples/Example_18._Rapid_PseudoInverse_for_Scroll_Matrices.html . I now digress to explain the "Youvan Inverse" (YI), an algorithm that operates on a special class of matrices, defined as scroll matrices, D, where w denotes the word length and a the alphabet size. In a discrete-mathematics definition, we compute the scroll matrix D (in Mathematica syntax):

D = Partition[Flatten[Tuples[Reverse[IdentityMatrix[a]],w]],(a*w)] (Eq. 2)
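For readers without Mathematica, Eq. 2 can be sketched in Python with NumPy; this is my own port (function name `scroll_matrix` is mine), assuming the usual Mathematica semantics of Tuples, Flatten, and Partition:

```python
import numpy as np
from itertools import product

def scroll_matrix(a: int, w: int) -> np.ndarray:
    """Build the scroll matrix D of Eq. 2: each row concatenates w rows
    drawn (with repetition, in Tuples order) from the reversed a x a
    identity matrix, giving an a**w by a*w binary matrix."""
    rows = np.eye(a, dtype=int)[::-1]      # Reverse[IdentityMatrix[a]]
    tuples = product(rows, repeat=w)       # Tuples[..., w]
    return np.array([np.concatenate(t) for t in tuples])

D = scroll_matrix(3, 2)
print(D.shape)   # (9, 6): a**w rows, a*w columns
```

Each row contains exactly w ones, one in each block of a columns, which is the structure the rapid pseudoinverse exploits.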

We can show that the pseudoinverse of matrix D can be found in rapid computation time by simply scaling the elements of the transpose of the scroll matrix. The YI formula is given at:

		http://youvan.com/Examples/Example_18._Rapid_PseudoInverse_for_Scroll_Matrices.html 

The canonical Singular Value Decomposition (SVD) derived pseudoinverse of matrix D, "PI", scales very poorly in computation time as a and w increase. Again, using Mathematica syntax, with equivalence denoted "==":

YI == PI, for all values of a, w > 0
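The YI formula itself is only given at the linked page and is not reproduced here, but the PI side of the equivalence can be sketched: NumPy's np.linalg.pinv computes the SVD-based pseudoinverse, and any candidate YI must satisfy the same four Moore-Penrose conditions checked below (the scroll-matrix helper is my own port of Eq. 2):

```python
import numpy as np
from itertools import product

def scroll_matrix(a, w):
    """Python port of Eq. 2 (name is mine)."""
    rows = np.eye(a)[::-1]
    return np.array([np.concatenate(t) for t in product(rows, repeat=w)])

D = scroll_matrix(3, 2)
PI = np.linalg.pinv(D)   # canonical SVD-based pseudoinverse

# Moore-Penrose conditions; a fast YI candidate can be validated
# against exactly the same four tests.
assert np.allclose(D @ PI @ D, D)
assert np.allclose(PI @ D @ PI, PI)
assert np.allclose((D @ PI).T, D @ PI)
assert np.allclose((PI @ D).T, PI @ D)
```

Because the Moore-Penrose pseudoinverse is unique, passing these four checks is sufficient to confirm YI == PI for any tested (a, w).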

At the largest values of a and w we computed, i.e., (a, w) = (5, 5), the PI solution required 59,000 seconds of CPU time (on a 2010-style computer), whereas the YI solution was found in less than 62 milliseconds, approximately one million times faster.

Now, consider starting the green smiley face and the red smiley face with the same 50 x 500 overdetermined matrix to solve. Both calculate the same pseudoinverse to this matrix using these two different methods, but in almost-infinite versus almost-zero computational time, respectively. (This is extrapolated from smaller matrices that can actually be tested.) One would not expect such "special solutions" to be generally applicable to the P versus NP problem within static frames of reference. However, P != NP is currently without verified proof. If, in fact, P != NP for static (non-moving) algorithms and methods (Turing machines), then the relativistic approach (such as "photon computers") is important to implement. The current literature indicates that SVD is NP-hard, but such proofs are extremely difficult to verify.

One might consider an interference pattern to be a real-time NP calculation. In analogy with the twin paradox (above), and by stating Eq. 1 as its reciprocal, we find that tP / tNP approaches infinity as vNP approaches c, which is consistent with the quasi-instantaneous and unconventional calculation of the Fourier transform that is formed in the interference pattern. To account for light travel time from the slits to the illuminated surface (distance d), the actual computation time is:

tNP = d / c (Eq. 4)

This assumes a photon flux sufficient to register an image that exceeds any significant photon shot noise in the detector. We can also ask: is the mathematics that so well fits this 2-slit experiment simply an abstract description of the physics, or is the math an integral part of the interference phenomenon, taking the form of a "photon computer"?
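The scalings in Eq. 1 and Eq. 4 can be checked numerically; a minimal sketch in Python, using the velocity from the TSP example above (the 1 m slit-to-screen distance is an assumed illustrative value):

```python
import math

C = 299_792_458.0          # speed of light, m/s

def dilation_ratio(beta):
    """tNP / tP from Eq. 1, where beta = vNP / c."""
    return math.sqrt(1.0 - beta * beta)

beta = 0.9999999999        # vNP = 0.9999999999 c, as in the TSP example
ratio = dilation_ratio(beta)
print(f"tNP/tP = {ratio:.3e}")   # roughly 1.4e-05

# Compare with the 1/n! speed-up needed to reduce an n-city
# brute-force TSP to a single operation:
for n in (8, 9, 10):
    print(n, f"{1.0 / math.factorial(n):.3e}")

# Eq. 4: light travel time over an assumed d = 1 m from slits to screen
print(f"tNP = {1.0 / C:.3e} s")
```

The dilation ratio at this velocity falls between 1/8! and 1/9!, i.e., in the range of an 8- to 9-city brute-force TSP.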
One could argue that the abstract form of the math is the version written with symbols, whereas the actual math is an integral part of the physical reality of the experiment. If the latter is true, then, ironically, the wave-particle duality would become more of a wave-particle-math trinity.

In terms of analogue devices, would we create a better dataset for solving the TSP if a computer-generated set of Fourier transforms of a "plate of holes" were simulated at wavelengths ranging from the closest distance between cities to the farthest distance between cities? One would then have a stack of interference images wherein light itself helps us to understand the relationships between the cities, as represented by aperture pin holes. Either in simulation or experiment, one would use coherent and coplanar light to illuminate the plate. For an actual experiment, one might set the distances between cities (holes) on the order of optical wavelengths, from the UV to the IR, so as to scan over a large wavelength range in which imaging is feasible with a single 2-D sensor (camera). Micro-lithography would be used to fabricate the specific plate that mimics the arrangement of cities.

As a practical problem in computation, the question is not whether P = NP, P != NP, or P → NP (via the Lorentz factor). The problem is how to achieve an actual computational result that presently seems to require almost infinite computation time using our current algorithms and computational devices. That would seem to require a completely different mechanism for computation, using (near) light-speed particles with zero mass rather than a static, electronics-embodied Turing machine. Interferometry might be the right place to start. One might then ask: is there an optical device that produces an effect, image, or sequence of images that would require on the order of N! operations to simulate?
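The "plate of holes" simulation can be sketched with an FFT: in the Fraunhofer (far-field) regime, the interference intensity is the squared magnitude of the 2-D Fourier transform of the aperture. The city coordinates below are invented purely for illustration:

```python
import numpy as np

N = 256
plate = np.zeros((N, N))

# Hypothetical "cities" as pinholes (coordinates are made up)
cities = [(60, 80), (100, 200), (180, 50), (200, 180), (130, 130)]
for y, x in cities:
    plate[y, x] = 1.0

# Far-field diffraction: intensity is |FFT(aperture)|^2,
# fftshift moves the zero-frequency (DC) peak to the center
field = np.fft.fftshift(np.fft.fft2(plate))
intensity = np.abs(field) ** 2

print(intensity.shape)   # (256, 256)
```

Repeating this at a range of simulated wavelengths (equivalently, rescaling the spatial frequencies) would produce the image stack described above; the optical bench performs the same transform at the speed of Eq. 4.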
For such devices (or experiments), one might consider the physical embodiment of the problem to be an NP calculator. Thus a "photon computer" exists and is known to us, but we do not ordinarily think of such a device as a computer performing an NP calculation in (near) real time. In aerodynamics, the analogy would be between a computer simulation and an actual wind tunnel. The simulation that we would really like to run involves so many particles that we face almost infinite calculation time (NP), whereas the wind tunnel performs this "calculation" in real time (P). Again, we do not usually think of a wind tunnel as a calculator because it is non-algorithmic yet follows the laws of physics, somewhat of an oxymoron. I would predict, therefore, that mathematicians and scientists are much more likely to accept the results of a light-based experiment (e.g., interference patterns) as being akin to a computer computation (e.g., calculating a Fourier transform). In both the physically embodied experiment and the computer calculation of this light-based problem, it is easier for us to see an underlying algorithm; the question, again, is whether the math is actually part of the experiment or simply descriptive of it.
