My torsion field warp drive-stargate time travel equations.
  • Jack Sarfatti On Oct 7, 2013, at 6:42 PM, jacksarfatti wrote:


    On Oct 7, 2013, at 5:51 PM, Paul Zelinsky <yksnilez@gmail.com> wrote:

    Thus by 1920 Einstein had understood that the g_uv were dynamical properties of a physical vacuum that are not fully determined by matter stress-energy. 

    It's the curvature R that is dynamical (also possibly torsion K in Einstein-Cartan).


    That is, the transverse curl part of the spin connection describes disclination defects, aka curvature.

    The exact part of the spin connection 1-form

    Sexact = df

    f = 0-form

    (actually a set of 0-forms fIJ, where I,J are the LIF indices; the forms are really SIJ and RIJ, but KI and eI)

    corresponds to artificial Newtonian gravity fields in Minkowski space.

    Technically, GR in a nutshell:

    e is the set of four tetrad Cartan 1-forms

    S is the spin connection 1-form

    The affine metric connection in general is

    A = S + K

    K = De = de + S/e

    (here "/" denotes the wedge product and D is the covariant exterior derivative)

    = torsion 2-form - corresponding to dislocation defects in Kleinert's world crystal lattice

    R = DS = dS + S/S 
    = curvature 2-form
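As a sanity check on the two structure equations above, here is a small sympy computation on the 2-sphere, reading "/" as the wedge product. The sphere example (radius a, dyad e^1 = a dtheta, e^2 = a sin(theta) dphi) is my illustration, not part of the original text: the torsion K = de + S/e vanishes, and R = dS reproduces the Gaussian curvature 1/a^2.

```python
# Toy check of Cartan's structure equations on a 2-sphere of radius a,
# in the notation K = de + S/e (torsion) and R = dS + S/S (curvature),
# with "/" the wedge product. On S^2 the torsion vanishes (the Einstein
# 1916 limit K = 0) and R gives the Gaussian curvature 1/a^2.
import sympy as sp

theta, phi, a = sp.symbols('theta phi a', positive=True)

# 1-forms are pairs (P, Q) meaning P*dtheta + Q*dphi;
# 2-forms are a single coefficient of dtheta ^ dphi.
def d(one_form):                 # exterior derivative of a 1-form
    P, Q = one_form
    return sp.simplify(sp.diff(Q, theta) - sp.diff(P, phi))

def wedge(f1, f2):               # wedge product of two 1-forms
    P1, Q1 = f1
    P2, Q2 = f2
    return sp.simplify(P1*Q2 - Q1*P2)

# Orthonormal dyad e^1 = a dtheta, e^2 = a sin(theta) dphi
e1 = (a, 0)
e2 = (0, a*sp.sin(theta))
# Spin connection S^1_2 = -cos(theta) dphi (only independent component)
S12 = (0, -sp.cos(theta))

# Torsion 2-forms K^I = de^I + S^I_J / e^J  (S^2_1 = -S^1_2)
K1 = sp.simplify(d(e1) + wedge(S12, e2))
K2 = sp.simplify(d(e2) + wedge((-S12[0], -S12[1]), e1))
print(K1, K2)                    # both 0: torsion-free

# Curvature 2-form R^1_2 = dS^1_2 (the S/S term has no 1,2 part in 2D)
R12 = d(S12)
Kgauss = sp.simplify(R12 / wedge(e1, e2))   # R^1_2 = Kgauss e^1/e^2
print(Kgauss)                    # 1/a**2
```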

    Einstein's 1916 GR is the limit

    K = 0

    which gives Levi-Civita connection LC = 0 in a local inertial frame (LIF), i.e. the Einstein Equivalence Principle (EEP).


    D*R = 0 Bianchi identity

    *R + A^-1e/e/e = k*T (the Einstein field equation; here A^-1 is the cosmological constant, not the affine connection above)

    * = Hodge duality operator

    D*(T - A^-1e/e/e) = 0 is local conservation of stress-energy current densities

    Note if there is torsion De = K =/= 0 then we have a direct coupling between matter fields T and the geometrodynamic field K - for warp drive & stargate engineering?

    The Einstein-Hilbert action density including the cosmological constant A^-1 (a 0-form) is the 4-form

    *R/e/e + *A^-1e/e/e/e

    A = area-entropy of our dark energy future cosmological event horizon bounding our causal diamond.

    Gauge transformations (corresponding to general coordinate transformations) are

    S -> S' = S + df'

    R = DS --> R' = DS'

    R' = dS' + S'/S'

    = dS + d^2f' + (S + df')/(S + df')

    = dS + S/S + S/df' + df'/S + df'/df'

    Now d^2 = 0, and / is antisymmetric on 1-forms, so S/df' + df'/S = 0 and

    df'/df' = 0

    (analogous to AxA = 0 in 3-vector analysis cross-product)

    Therefore R' = dS + S/S = R: the curvature is gauge invariant.
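A minimal abelian sketch of this gauge argument, dropping the nonabelian S/S cross terms so that only d^2 = 0 is being tested; the coordinates and function names are my choices, not the author's:

```python
# Abelian sketch: for a 1-form S with curvature components
# R_uv = d_u S_v - d_v S_u, the shift S -> S + df leaves R unchanged
# because mixed partial derivatives commute (d^2 = 0).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
f = sp.Function('f')(t, x, y, z)
S = [sp.Function('S%d' % i)(t, x, y, z) for i in range(4)]

def curvature(A):
    return [[sp.diff(A[v], X[u]) - sp.diff(A[u], X[v])
             for v in range(4)] for u in range(4)]

R = curvature(S)
S_prime = [S[u] + sp.diff(f, X[u]) for u in range(4)]   # S' = S + df
R_prime = curvature(S_prime)

diff = sp.simplify(sp.Matrix(R) - sp.Matrix(R_prime))
print(diff)    # zero matrix: the curvature is gauge invariant
```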


    Physically, the GR gauge transformations are

    LNIF(Alice) <---> LNIF(Bob)

    where Alice and Bob are "coincident" i.e. separations small compared to radii of curvature.

    Zielinski wrote:

    He tried to call this new ether "Machian", but it is hard to see what is Machian about it, other than that the g_uv field is at least partially determined by T_uv. But that is an action-reaction principle, not a Machian relativity of inertia principle. So if this new ether is at all
    "Machian", it is only in the very weak sense that the spacetime geodesics depend on the distribution of matter according to the GR field equations (plus boundary conditions).


    On 10/7/2013 2:46 PM, Jack quoted Harvey Brown et al.:
    "The growing recognition, on Einstein’s part, of the tension between the field equations in GR and his 1918 version of Mach’s Principle led him, as we have seen, to effectively assign genuine degrees of freedom to the metric field in the general case (not for the Einstein universe). This development finds a clear expression in a 1920 paper,62 where Einstein speaks of the electromagnetic and the gravitational “ether” of GR as in principle different from the ether conceptions of Newton, Hertz, and Lorentz. The new, generally relativistic or “Machian ether”, Einstein says, differs from its predecessors in that it interacts (bedingt und wird bedingt) both with matter and with the state of the ether at neighbouring points.63 There can be little doubt that the discovery of the partial dynamical autonomy of the metric field was an unwelcome surprise for Einstein; that as a devotee of Mach he had been reluctant to accept that the metric field was not, in the end, “conditioned and determined” by the mass-energy-momentum Tμν of matter."



Making Star Trek Real

Jack Sarfatti
  • Jack Sarfatti Preface

    I adopt as a working hypothesis that the flying saucers are real and that they get here through stargates that are shortcut tunnels in Einstein’s spacetime continuum. The task is then to see what modern physics has to say about such a scenario even if it’s not true. Whether or not it’s true is beside the point, and I will not discuss the actual UFO evidence, good, bad, and bogus, in this book. I will also write about quantum theory and its relation to computing, consciousness, cosmology, and the hologram universe, ending in a scenario for Stephen Hawking’s “Mind of God.” That Hawking thinks God is not necessary is again beside the point. A good layman’s background reference here is Enrico Rodrigo’s “The Physics of Stargates: Parallel Universes, Time Travel and the Enigma of Wormhole Physics.” If you have the patience, Leonard Susskind’s Stanford University online video lectures in physics are also worth the effort for the serious student.
  • Jack Sarfatti Chapter 1 Einstein’s Theory of Relativity in a Nutshell

    Here I follow “Gravitation and Inertia” by Ignazio Ciufolini and John Archibald Wheeler, which is a more up-to-date sequel to the Misner, Thorne, Wheeler classic book “Gravitation.”

    “Gravity is not a foreign and physical force transmitted through space and time. It is a manifestation of the curvature of spacetime.” Albert Einstein

    “First, there was the idea of Riemann that space, telling mass how to move, must itself – by the principle of action and reaction – be affected by mass. It cannot be an ideal Euclidean perfection, standing in high mightiness above the battles of matter and energy. Space geometry must be a participant in the world of physics.” John Archibald Wheeler (aka JAW) 

    “Second, there was the contention of Ernst Mach that the ‘acceleration relative to absolute space’ of Newton is only properly understood when it is viewed as acceleration relative to the sole significant mass there really is.” JAW

    The above statement is now obsolete since ordinary matter in the form of baryons, electrons, photons etc. is now known to be no more than approximately 5% of all the gravitating stuff that we can see in the past light cones of our telescopes. About 70% is large-scale anti-gravitating dark energy accelerating the expansion speed of 3D space. Random quantum vacuum zero point virtual photons and other spin 1 and spin 2 quanta in quantum field theory have negative pressure three times greater than their positive energy density and may be dark energy. The remaining approximately 25% is clumped shorter-scale gravitating dark matter that holds galaxies together. Random quantum vacuum zero point virtual electron-positron and other spin ½ quanta have positive pressure three times greater than their negative energy density causing attractive gravity like dark matter. If dark matter is this quantum vacuum effect dictated by local Lorentz covariance and Einstein’s Equivalence Principle (aka EEP), then none of the attempts to measure real on-mass-shell particles whizzing through space to explain dark matter will succeed. There are, however, “f(R)” MOND variations of Einstein’s general relativity that attempt to explain both dark matter and dark energy.
  • Jack Sarfatti “According to this ‘Mach Principle,’ inertia here arises from mass there.” JAW

    This is summarized in Einstein’s 1915 local tensor field equation relating the source stress-energy current densities of matter fields to the curvature of spacetime locally coincident with matter currents. However, when we solve those local field equations we have to impose global boundary/initial conditions and use the method of Green’s function propagators to see how matter currents here change spacetime curvature there. The “inertia” in Wheeler’s statement above refers to the pattern of force-free timelike geodesic paths of test particles whose mass is small enough to neglect their distortion of the local curvature gravity field. The word “inertia” in the context of Mach’s principle above does not refer at all to the actual rest masses of the test particles. Indeed, the test particle rest masses cancel out of the timelike geodesic equations of motion that correspond to Newton’s first law of motion. Galileo first understood this though he did not have the modern mathematical concepts I am using here.
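The cancellation of the test-particle rest mass can be illustrated with a crude Newtonian free-fall integration; the setup and constants (approximate SI values) are mine, not the author's:

```python
# The test-particle mass cancels out of the equation of motion: the
# force is proportional to m, so the acceleration F/m = -GM/r^2 is
# independent of m and two dropped masses fall identically.
G, M = 6.674e-11, 5.972e24    # gravitational constant, Earth mass (SI)
R0 = 6.371e6                   # Earth radius: drop from 100 m above it

def fall(m, dt=0.01, steps=400):      # ~4 s Euler integration
    r, v = R0 + 100.0, 0.0
    for _ in range(steps):
        F = -G * M * m / r**2   # force IS proportional to m ...
        v += (F / m) * dt       # ... so the acceleration F/m is not
        r += v * dt
    return r

print(fall(1.0), fall(1000.0))  # the two radii agree (to rounding)
```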

    “Third was that great insight of Einstein that … ‘free fall is free float’: the equivalence principle, one of the best tested principles of physics, from the inclined tables of Galilei and the pendulum experiments of Galilei, Huygens, and Newton to the highly accurate torsion balance measurements of the twentieth century, and the Lunar Laser Ranging experiment … With these three clues vibrating in his head, the magic of mind opened to Einstein what remains one of mankind’s most precious insights: gravity is manifestation of spacetime curvature.”

    What should we mean by the word “inertia” and what is its relation to gravity? Wheeler means: “The local equivalence of ‘gravitation’ and ‘inertia,’ or the local cancellation of the gravitational field by local inertial frames … A gravitational field is affected by mass-energy distributions and currents, as are the local inertial frames. Gravitational field and local inertial frames are both characterized by the spacetime metric, which is determined by the mass-energy distributions and currents.”
  • Jack Sarfatti The same term “gravitational field” is used in several different meanings depending on context. When Wheeler talks about the “cancellation of the gravitational field by local inertial frames” he means Newton’s universally attracting radial 1/r^2 field from a spherically symmetric source mass. In the tensor calculus language of Einstein’s 1916 general theory of relativity of gravitation, Newton’s gravity field is a piece of the Levi-Civita connection terms in the directional covariant derivative of the linear four-momentum of a test particle with respect to the proper clock time along its path or world line in four-dimensional spacetime. The second meaning of “gravitational field” is the tensor curvature, which is the rotational covariant partial derivative “curl” of the Levi-Civita connection with respect to itself. Einstein’s theory is a local classical field theory whose measurable properties or “observables” must be tensors and spinors.

    The local geometrodynamic field moves massive test particles in force-free inertial motion on timelike geodesics, but the test particles do not back-react on the geometrodynamic field. We distinguish test particles from source masses, which generate the geometrodynamic field in a similar way to how electric charges generate the electromagnetic field.
  • Jack Sarfatti Contrary to popular misconceptions, although the local laws of classical physics have the same “tensor” and/or “spinor” form for all motions of detectors measuring all the observables possessed by the “test particles”, there are privileged dynamical motions of the test particles in Einstein’s two theories of relativity, special (1905) and general (1916). This was in Einstein’s words “My happiest thought.” These privileged motions are called “geodesic” motions or “world lines.” Test particles are distinguished from “source particles.” It is an approximation that test particles do not significantly modify the fields acting on them. They are, strictly speaking, a useful contradiction of the metaphysical principle of no action of Alice on Bob without a direct “back-reaction” of Bob on Alice. Massless point test particles in what physicists call the “classical limit” move on “null” or “lightlike” geodesics. Test particles with mass m move on timelike geodesics that are inside the “light cone” formed by all the light rays that might be emitted from that test particle if it were electrically charged and if it were really accelerating. The latter is a “counter-factual” statement. Look that up on Google. The key point is that Alice is weightless when traveling on a timelike geodesic inside her two local light cones, past and future. There are no real forces F acting on Alice. On the contrary, Bob who is measuring Alice with a detector (aka “measuring apparatus”) need not be on another timelike geodesic. He can be off-geodesic because real forces can be acting on him, causing him to feel weight. The real forces acting on Bob appear as “fictitious” “inertial pseudo-forces” acting on Alice from Bob’s frame of reference. The only real forces in nature that we know about in 2013 are the electromagnetic, the weak, and the strong. Gravity is not a real force in Einstein’s theory. Gravity is one of the fictitious forces described above.
Real forces on test particles, unlike fictitious forces, are not universal. Fictitious inertial pseudo-forces, which appear to act on the observed test particles but are not really acting on them, all depend on the mass m of the test particle.
  • Jack Sarfatti The operational litmus test to distinguish a real force from a fictitious inertial pseudo-force is what an accelerometer rigidly clamped to the observed test particle measures. I repeat, because many engineers and even some physicists get muddled on what should be an elementary physics idea: Einstein’s “happiest thought” that led to his general theory of relativity in the first place, was his epiphany that an accelerometer clamped to a freely falling object on a timelike geodesic path (i.e., world line) would not register any g-force (i.e., any weight). The apparent kinematical acceleration of a freely falling test particle seen in the gravitational field of the surface of Earth is because the surface of rigid Earth at every point on it has radially outward proper tensor acceleration whilst the test particle itself has zero proper tensor acceleration. The accelerometer on the test particle registers zero. The accelerometer at a point on the surface of Earth registers the “weight” of an object of rest mass m clamped to it. That every point on a rigid sphere is accelerating radially outward is hard for common sense engineers and laymen to comprehend. It seems crazy to common sense, but that is precisely the counter-intuitive Alice in Wonderland reality of Einstein’s curved spacetime that is battle-tested by very accurate experiments. Consequently, if Alice and Eve are each on separate timelike geodesics very close to each other, and if Bob is not on a timelike geodesic of his own due to real forces acting on him, then Alice and Eve will have the same kinematical acceleration relative to Bob and they will both feel weightless though Bob feels weight, also called “g-force.” This causes a lot of confusion, especially to aerospace missile engineers and high-energy particle physicists, because Newton did consider gravity to be a real force, but Einstein did not. Gravity is not a force. Gravity is the curvature tensor of four-dimensional space-time.
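A hedged numerical sketch of the accelerometer argument, using the textbook static ("hovering") proper acceleration in the Schwarzschild field, a = (GM/r^2)/sqrt(1 - rs/r); the specific masses and radii below are illustrative choices of mine:

```python
# What the clamped accelerometer reads: a static observer in the
# Schwarzschild field has outward proper acceleration
# a = (GM/r^2)/sqrt(1 - rs/r), while a freely falling observer reads 0.
# At the Earth's surface this is the familiar ~9.8 m/s^2; near the
# horizon r -> rs it grows without bound.
import math

G, c = 6.674e-11, 2.998e8

def proper_acceleration(M, r):
    rs = 2 * G * M / c**2              # Schwarzschild radius
    return (G * M / r**2) / math.sqrt(1 - rs / r)

M_earth, R_earth = 5.972e24, 6.371e6
print(proper_acceleration(M_earth, R_earth))   # ~9.8 m/s^2

M_sun = 1.989e30                       # a solar-mass black hole
rs_sun = 2 * G * M_sun / c**2
for r in (10 * rs_sun, 2 * rs_sun, 1.0001 * rs_sun):
    print(r / rs_sun, proper_acceleration(M_sun, r))  # diverges at rs
```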
What Newton thought of as a real gravity force is demoted to a fictitious inertial pseudo-force in Einstein’s theory. In the language of the late John Archibald Wheeler, gravity is a “force without Force”. The best way to think about gravity in an objective, local-frame-invariant way is the pattern of both lightlike and timelike geodesics whose source is the “stress-energy density tensor field” Tuv of matter. By matter we mean spin 1/2 leptons, quarks, and the spin 1 electromagnetic-weak-strong gauge bosons as well as the spin 0 Higgs vacuum superconductor field that formed only when our observable piece of the multiverse called the “causal diamond” popped out of the false vacuum about 13.7 billion years ago.
  • Jack Sarfatti http://en.wikipedia.org/wiki/Flying_saucer 
    “For years it was thought that the Schwarzschild spacetime did in fact exhibit some sort of radial singularity at r = 2GM/c^2. Eventually physicists came to realize that it was not Schwarzschild spacetime th…
  • Jack Sarfatti A firewall is a hypothetical phenomenon where an observer that falls into an old black hole encounters high-energy quanta at (or near) the event horizon. The "firewall" phenomenon was proposed in 2012 by Almheiri, Marolf, Polchinski, and Sully [1] as a possible solution to an apparent inconsistency in black hole complementarity. The proposal is often referred to as the "AMPS" firewall, an acronym for the names of the authors of the 2012 paper. However, the occurrence of this phenomenon was proposed eleven years earlier by Friedwardt Winterberg,[2] and is very different from Hawking radiation.
    The firewall hypothesis, like black hole complementarity, is quantum gravitational. It arises (in part) from the conjecture that once an old black hole has emitted a sufficiently large amount of Hawking radiation, the mixed quantum state of the black hole is highly entangled with the state of the Hawking radiation thus far emitted. Firewalls are a dramatic change from the usual assumption that quantum gravity is unimportant except in regions of spacetime where the radius of spacetime curvature is on the order of the Planck length; large black holes have low curvature near the event horizon.
    However, according to Winterberg,[2] a correct theory of quantum gravity cannot ignore the zero point vacuum energy. Because it must be cut off at the Planck energy, Lorentz invariance is violated at high energies, creating a preferred reference system in which the zero-point energy is at rest. In approaching and crossing the event horizon at the velocity of light in the preferred reference system, an elliptic differential equation holding matter in a stable equilibrium goes over in a hyperbolic differential equation where there is no such equilibrium, with all matter disintegrating into gamma rays without loss of information or violation of unitarity, as it has been observed in cosmic gamma ray bursters.
    The firewall idea seems to be related to the "energetic curtain" around a black hole, proposed by Braunstein,[3] but it depends on the unproven conjecture that black hole entropy is entirely entropy of entanglement. http://en.wikipedia.org/wiki/Firewall_(physics)
  • Jack Sarfatti “What is it that breathes fire into the equations and makes a universe for them to describe? … However, if we discover a complete theory, it should in time be understandable by everyone, not just by a few scientists. Then we shall all, philosophers, scientists and just ordinary people, be able to take part in the discussion of the question of why it is that we and the universe exist. If we find the answer to that, it would be the ultimate triumph of human reason -- for then we should know the mind of God.” (P.193), A Brief History of Time.
    Rodrigo shows that the classical energy conditions and chronology protection arguments against time travel to the past as well as the quantum inequality restrictions on negative energy balanced by positive energy are not likely to be fatal barriers against stargate technology.
    Wikipedia has now become quite reliable for physics/math articles after a rocky start of several years, especially on biographies of living movers and shakers. Rather than repeat standard content on technical jargon that is prerequisite to understanding this book I give URLs to Wikipedia and, at times, other explanations.
    The same idea appears in quantum theory in David Bohm’s interpretation. Orthodox quantum theory violates Wheeler’s philosophical principle of action and reaction. The quantum information field Q acts on the classical particles and fields without any direct reaction of the latter on the former. Then, and only then, is it impossible to use entanglement as a stand-alone communication channel not requiring a classical signal key to decrypt the message at only one end of the entangled whole. In other words, “background independence” in Einstein’s 1916 general relativity is equivalent to entanglement signal nonlocality violating orthodox quantum theory. The non-dynamical spacetime background of Einstein’s 1905 special relativity is equivalent to the “no signaling” circular arguments of Abner Shimony’s “passion at a distance.”
    http://arxiv.org/pdf/1302.6165v1.pdf http://en.wikipedia.org/wiki/Vacuum_state
  • Jack Sarfatti http://en.wikipedia.org/wiki/Riemann_curvature_tensor 
    Newton’s particle mechanics and Einstein’s 1905 special theory of rel…
"One can construct the (compact form of the) E8 group as the automorphism group of the corresponding e8 Lie algebra. This algebra has a 120-dimensional subalgebra so(16) generated by Jij as well as 128 new generators Qa that transform as a Weyl–Majorana spinor of spin(16). These statements determine the commutators

[J_ij, J_kl] = δ_jk J_il + δ_il J_jk − δ_jl J_ik − δ_ik J_jl

as well as

[J_ij, Q_a] = (1/4)(γ_i γ_j − γ_j γ_i)_ab Q_b

while the remaining commutator (not anticommutator!) is defined as

[Q_a, Q_b] = γ^[i_ac γ^j]_cb J_ij

It is then possible to check that the Jacobi identity is satisfied.
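The so(N) part of these relations can be checked by brute force for small N. A toy numerical check for N = 4 (so(16) works the same way, just more slowly; the spinor generators Q_a are not attempted here):

```python
# Check the so(N) commutation relations and the Jacobi identity for
# N = 4, with generators (J_ij)_ab = delta_ia delta_jb - delta_ja delta_ib.
import numpy as np
from itertools import combinations

N = 4

def J(i, j):
    m = np.zeros((N, N))
    m[i, j] += 1.0               # += so that J(i, i) = 0
    m[j, i] -= 1.0
    return m

def comm(a, b):
    return a @ b - b @ a

def delta(a, b):
    return 1.0 if a == b else 0.0

pairs = list(combinations(range(N), 2))
# [J_ij, J_kl] = d_jk J_il + d_il J_jk - d_jl J_ik - d_ik J_jl
for (i, j) in pairs:
    for (k, l) in pairs:
        lhs = comm(J(i, j), J(k, l))
        rhs = (delta(j, k)*J(i, l) + delta(i, l)*J(j, k)
               - delta(j, l)*J(i, k) - delta(i, k)*J(j, l))
        assert np.allclose(lhs, rhs)

# Jacobi identity over all triples of the 6 generators of so(4)
gens = [J(i, j) for (i, j) in pairs]
for a in gens:
    for b in gens:
        for c in gens:
            jac = comm(a, comm(b, c)) + comm(b, comm(c, a)) + comm(c, comm(a, b))
            assert np.allclose(jac, 0)
print("so(%d) relations and Jacobi identity verified" % N)
```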
Geometry

The compact real form of E8 is the isometry group of the 128-dimensional exceptional compact Riemannian symmetric space EVIII (in Cartan's classification). It is known informally as the "octooctonionic projective plane" because it can be built using an algebra that is the tensor product of the octonions with themselves, and is also known as a Rosenfeld projective plane, though it does not obey the usual axioms of a projective plane. This can be seen systematically using a construction known as the magic square, due to Hans Freudenthal and Jacques Tits (Landsberg & Manivel 2001).
Applications

The E8 Lie group has applications in theoretical physics, in particular in string theory and supergravity. E8×E8 is the gauge group of one of the two types of heterotic string and is one of two anomaly-free gauge groups that can be coupled to the N = 1 supergravity in 10 dimensions. E8 is the U-duality group of supergravity on an eight-torus (in its split form).
One way to incorporate the standard model of particle physics into heterotic string theory is the symmetry breaking of E8 to its maximal subalgebra SU(3)×E6.
In 1982, Michael Freedman used the E8 lattice to construct an example of a topological 4-manifold, the E8 manifold, which has no smooth structure.
Antony Garrett Lisi's incomplete theory "An Exceptionally Simple Theory of Everything" attempts to describe all known fundamental interactions in physics as part of the E8 Lie algebra.[6][7]
R. Coldea, D. A. Tennant, and E. M. Wheeler et al. (2010) reported that in an experiment with a cobalt-niobium crystal, under certain physical conditions the electron spins in it exhibited two of the 8 peaks related to E8 predicted by Zamolodchikov (1989).[8][9]

ask Saul-Paul Sirag & Tony Smith

The EM U1, weak SU2 & strong SU3 are LOCALLY gauged internal symmetry groups in the fiber space whose base is 4D space-time.

In contrast T4 is a symmetry group for the base 4D space-time.

In order to get a consistent gravity gauge theory we need to localize the Poincare symmetry group of Einstein's 1905 SR, and this gives both curvature and torsion as independent dynamical fields. Einstein's 1915 plain vanilla GR constrains the torsion, ad hoc, to be zero.

Hagen Kleinert showed that torsion is a dislocation defect field in a world crystal lattice.

Curvature is the disclination defect field.

Hammond et al. say that quantum spin generates dynamical torsion.

There is controversy over propagating torsion waves (e.g. Gennady Shipov in Moscow) as well as coupling of torsion with orbital angular momentum of matter fields as well as their quantum spin.

On Sep 18, 2013, at 6:21 PM, David Mathes <davidmathes8@yahoo.com> wrote:

T4 == U(1) X SU(2) X SU(3) for flat space...?

Extended with Higgs plus, one might even simplify it to

T4 = E8?


From: JACK SARFATTI <jacksarfatti@icloud.com>
To: art wagner <wagnerart@hotmail.com> 

Sent: Wednesday, September 18, 2013 6:12 PM
Subject: Re: Chromogravity & An Important Experiment

"Gravitational field is the manifestation of space-time translational (T4) gauge symmetry, which enables gravitational interaction to be unified with the strong and the electroweak interactions. Such a total-unified model is based on a generalized Yang-Mills framework in flat space-time."

I have said this for many years now.

On Sep 18, 2013, at 6:00 PM, art wagner <wagnerart@hotmail.com> wrote:

E8 (mathematics) - Wikipedia, the free encyclopedia
In mathematics, E8 is any of several closely related exceptional simple Lie groups, linear algebraic groups or Lie algebras of dimension 248; the same notation is used for the corresponding root lattic

 Jack Sarfatti said...

Yes I still have my copy of Quantum Theory and Beyond. I was at the 1974 Cambridge ANPA meeting hosted by Ted Bastin where I first met Brian Josephson and Bernard Carr, as well as Dennis Bardens of the BBC and, allegedly, the British Secret Service.

On Sep 18, 2013, at 6:44 PM, nick herbert <quanta@cruzio.com> wrote:
On Sep 18, 2013, at 11:12 AM, Dean Radin <dradin@noetic.org> wrote:

This article on that same website is also very good. Apparently no one dares propose the possibility that nature as we observe it is literally shaped by our expectations:


I interpret the results of psi research as pointing toward the same possibility.

best wishes,

Chief Scientist, Institute of Noetic Sciences
Co-Editor-in-Chief, Explore: The Journal of Science and Healing
Author, Supernormal and other books

On Wed, Sep 18, 2013 at 8:08 AM, nick herbert <quanta@cruzio.com> wrote:
Thanks, Gaby.
This is not only a marvelous discovery
that I had never heard of
(I live in the woods after all)
but a beautifully written article
describing the discovery
and its possible implications.


On Sep 18, 2013, at 2:48 AM, Jungle Girl wrote:

Complications in Physics Lend Support to Multiverse Hypothesis | Simons Foundation
Decades of confounding experiments have physicists considering a startling possibility: The universe might not make sense.
  • Jack Sarfatti Jim should have done more elementary calculations of simple cases in his book. I will not make the same pedagogical mistake in my book.

    Jim's Sciama vector theory of gravity, which I soundly reject as beyond the fringe of plausibility, as well as Einstein's tried and true battle-tested tensor theory of gravity, which I accept as The Word made Flesh from GOD(D) herself, are BOTH classical field theories. Feynman diagrams are for quantum field theory and beyond, e.g. supergravity and string theory.

    Of course, if one used classical field perturbation theory some remnant of Feynman's technique should survive. The effects of off-mass-shell virtual particles will be ignorable, i.e. internal lines smeared over into a glob. However, the idea of using the amplituhedron to compute solutions of nonlinear classical field theory might not be completely stupid? Jim's vector theory of gravity is relatively Mickey Mouse and does not need all of this fancy Dan math.

    MY VERSION of Jim's theory is very simple and does not need all his mumbo jumbo about fictitious forces etc.

    One simply postulates in a Popper falsifiable manner:

    observed inertia = (Nonlocal Mach screening factor)(Local inertia)

    Local rest mass comes from several sources at different levels

    1) Higgs vacuum field for leptons, quarks, W bosons

    2) quantum chromodynamics for hadrons (confined ZPE of the quarks)

    3) standard low energy nuclear, atomic, solid state, chemical bond binding energy physics

    Finally we have the split

    (Nonlocal Mach screening factor) = Aharonov Destiny + Aharonov History

    The BACK FROM THE FUTURE DESTINY piece is the Wheeler-Feynman Hoyle-Narlikar-Cramer ADVANCED FUTURE LIGHT CONE INFLUENCE FUNCTIONAL constrained by our future dark energy TOTAL ABSORBER de Sitter event horizon (a hologram quantum computer).

    Similarly, for the RETARDED past light cone part constrained by our past particle horizon.

    OK now use plain vanilla Einstein GR

    Newton's 2nd law of TEST PARTICLE mechanics is

    DP/ds = F

    P = (Nonlocal Mach factor)(Local Inertia)V = (Phi)mV

    V = tensor 4 velocity of test particle

    D/ds = d/ds + (Levi-Civita DETECTOR terms)

    ds = proper time of test particle differential along its CLASSICAL world line

    (Levi-Civita DETECTOR terms) ~ 0 when the detector is on a timelike geodesic and is not rotating.

    d(Phi mV)/ds = (dPhi/ds)mV + Phi(dm/ds)V + (Phi m)dV/ds

    This is only for timelike test particles NOT for PHOTONS!
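The Leibniz expansion above can be confirmed symbolically (treating Phi, m, and V as scalar functions of the proper time s, a simplification of the tensor version):

```python
# Check d(Phi m V)/ds = (dPhi/ds) m V + Phi (dm/ds) V + (Phi m) dV/ds
import sympy as sp

s = sp.Symbol('s')
Phi, m, V = (sp.Function(n)(s) for n in ('Phi', 'm', 'V'))

lhs = sp.diff(Phi * m * V, s)
rhs = sp.diff(Phi, s)*m*V + Phi*sp.diff(m, s)*V + Phi*m*sp.diff(V, s)
assert sp.simplify(lhs - rhs) == 0
print("product rule expansion verified")
```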



    In addition, there need to be some classical field equations (from an action) for Phi, but this Phi does not at all correspond to

    g00 = 1 + 2phi/c^2

    BTW, on Jim's speed of light RED HERRING!

    For light, classically ds = 0 and that's all one can really say correctly.

    In the general metric corresponding to an arbitrary timelike LNIF set of detectors

    ds^2 = g00c^2dt^2 + g0icdtdx^i + gijdx^idx^j

    for a classical optics light ray this is

    0 = g00c^2dt^2 + g0icdtdx^i + gijdx^idx^j

    i,j = 1,2,3

    If we define the PROPER LENGTH dL by

    dL^2 = - gijdx^idx^j

    and the PROPER TIME dT by

    dT^2 = g00dt^2

    (signature +,-,-,-: g00 > 0 outside a horizon, the spatial gij negative)

    then the light ray equation is

    0 = - c^2dT^2 - g0icdtdx^i + dL^2

    = - c^2dT^2 - g0ig00^-1/2cdTdx^i + dL^2

    You can always choose a local triad where gij = 0 if i =/= j and not change the dynamical physics.

    Define, like Ray Chiao, Ai = - g0i (the overall sign is a convention).

    Therefore, the light ray null geodesic equation is

    0 = - c^2dT^2 + g00^-1/2cdTA.dL + dL^2

    DEFINE c' = dL/dT

    Therefore, JIM IS WRONG! 

    Dividing by dT^2 and using A.c' = Ac'cos(A,c'):

    0 = - c^2 + g00^-1/2cAcos(A,c')c' + c'^2

    This is a SIMPLE quadratic equation for the coordinate speed of light c', with two roots in general.

    Also note the HORIZON SINGULARITY at g00 = 0:

    c' = {- cAcos(A,c')g00^-1/2 +/- [c^2A^2cos^2(A,c')/g00 + 4c^2]^1/2}/2

    = c{- Acos(A,c')/g00^1/2 +/- [A^2cos^2(A,c')/g00 + 4]^1/2}/2

    In the limit A -> 0, c' -> +/- c.

    When A =/= 0, at a horizon (g00 -> 0) the two roots for c' go to 0 and infinity!
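The quadratic for c' and its two limits, A -> 0 and g00 -> 0 at a horizon, can be checked with sympy (the variable names are mine):

```python
# Quadratic 0 = -c^2 + g00^(-1/2) c A cos(A,c') c' + c'^2 for the
# coordinate light speed c'. As A -> 0 the roots go to +/- c; as
# g00 -> 0+ with A != 0, one root runs to 0 and the other diverges.
import sympy as sp

c, A, g00 = sp.symbols('c A g00', positive=True)
cp, cosang = sp.symbols('cprime cosang')    # cosang = cos(A, c')

eq = sp.Eq(-c**2 + c*A*cosang*cp/sp.sqrt(g00) + cp**2, 0)
roots = sp.solve(eq, cp)                    # two roots of the quadratic

# Limit A -> 0: the roots are +c and -c
limits_A0 = [sp.simplify(r.subs(A, 0)) for r in roots]
print(limits_A0)

# Horizon limit g00 -> 0+ (with c = A = cosang = 1):
# one root -> 0, the other diverges
horizon = [sp.limit(r.subs({c: 1, A: 1, cosang: 1}), g00, 0, '+')
           for r in roots]
print(horizon)
```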


You know when you are walking in the street and suddenly you see a strikingly beautiful woman and you are stopped in your tracks. That happened literally today and metaphorically with the 154-page paper from the Princeton, Harvard, MIT creme de la creme. I now give excerpts and perhaps some Sarfatti Commentaries by the bye.
Like ·  · Share
  • Neil Bates likes this.
  • Jack Sarfatti "We establish a direct connection between scattering amplitudes in planar four-dimensional theories and a remarkable mathematical structure known as the positive Grassmannian. The central physical idea is to focus on on-shell diagrams as objects of fundamental importance to scattering amplitudes."
  • Jack Sarfatti "The traditional formulation of quantum field theory, encoded in its very name, is built on the two pillars of locality and unitarity [1]. The standard apparatus of Lagrangians and path integrals allows us to make these two fundamental principles manifest. This approach, however, requires the introduction of a large amount of unphysical redundancy in our description of physics. Even for the simplest case of scalar field theories, there is the freedom to perform field-redefinitions. Starting with massless particles of spin-one or higher, we are forced to introduce even larger gauge redundancies [1].
    Over the past few decades, there has been a growing realization that these redundancies hide amazing physical and mathematical structures lurking within the heart of quantum field theory. This has been seen dramatically at strong coupling in gauge/gauge (see, e.g., [2-4]) and gauge/gravity dualities [5]. The past decade has uncovered further remarkable new structures in field theory even at weak coupling, seen in the properties of scattering amplitudes in gauge theories and gravity (for reviews, see [6-11]). The study of scattering amplitudes is fundamental to our understanding of field theory, and fueled its early development in the hands of Feynman, Dyson and Schwinger among others. It is therefore surprising to see that even here, by committing so strongly to particular, gauge-redundant descriptions of the physics, the usual formalism is completely blind to astonishingly simple and beautiful properties of the gauge-invariant physical observables of the theory."
  • Jack Sarfatti "All of these developments have made it completely clear that there are powerful new mathematical structures underlying the extraordinary properties of scattering amplitudes in gauge theories. If history is any guide, formulating and understanding a physics in a way that makes the symmetries manifest should play a central role in the story. The Grassmannian picture does this, but up to this point, there has been little understanding for why this formulation exists, exactly how it works, and
    where it comes from physically. Our primary goal in this note is to resolve this unsatisfactory state of affairs.
    We will derive the connection between scattering amplitudes and the Grassmannian, starting physically from first principles. This will lead us into direct contact with several beautiful and active areas of current research in mathematics [32-40].
    The past few decades have seen vigorous interactions between physics and mathematics in a wide variety of areas, but what is going on here involves new areas of mathematics that have only very recently played any role in physics which involve simple but deep ideas ranging from combinatorics to algebraic geometry. It is both startling and exciting that such elementary mathematical notions are found at the heart of the physics of scattering amplitudes.
    This new way of thinking about scattering amplitudes involves many novel physical and mathematical ideas. Our presentation will be systematic, and we have endeavored to make it self-contained and completely accessible to physicists. While we will discuss a number of mathematical results, some of them new, we will usually be content with the physicist's level of rigor. The essential ideas here are all very simple; however, they are tightly interlocking and range over a wide variety of areas, most of which are unfamiliar to most physicists. Thus, before jumping into the detailed exposition, as a guide to the reader we end this introductory section by giving a big picture roadmap of the logical structure and content of the paper."
  • Jack Sarfatti "In section 2, we introduce the central physical idea motivating our work, which is to focus on on-shell diagrams, obtained by gluing together fundamental 3-particle amplitudes and integrating over the on-shell phase space of internal particles. These objects are of central importance to the understanding of scattering amplitudes ... In this picture, "virtual particles" make no appearance at all. We should emphasize that we are not merely using on-shell information to determine scattering amplitudes, but rather seeing that the amplitudes can be directly computed in terms of fully on-shell processes. The off-shell, virtual particles familiar from Feynman diagrams are replaced by internal, on-shell particles (with generally complex momenta). In our study of on-shell diagrams, we will see that different diagrams related by certain elementary moves can be physically equivalent, leading to the natural question of how to invariantly characterize the physical content of an on-shell graph. Remarkably, the invariant content of on-shell diagrams turns out to be characterized by permutations. We discuss this in detail in section 3 where we show how a long known and beautiful connection between permutations and scattering amplitudes in integrable (1+1)-dimensional theories generalizes to realistic theories in (3+1) dimensions." My comment: note the artful dodge COMPLEX 4-momenta. Real particles have REAL 4-momenta. Therefore, the virtual particles are still there in the imaginary parts of the 4-momenta.
  • Jack Sarfatti "Theoretical explorations in field theory have been greatly advanced by focusing on interesting classes of observables, from local correlation functions and scattering amplitudes, to Wilson and 't Hooft loops, surface operators and line defects, to partition functions on various manifolds (see e.g. [56, 57]). The central physical idea of our work is to study on-shell scattering processes as a new set of objects of fundamental interest."
  • Jack Sarfatti "Theories with maximal supersymmetry have the wonderful feature that particles of all helicities can be unified into a single super-multiplet, [62-66]. For N = 4 SYM, we can group all the helicity states into a single Grassmann coherent state ... The fundamental building blocks for all on-shell scattering processes are the three particle amplitudes, which are completely determined (up to an overall coupling constant) by Poincare invariance ... the amplitude is non-singular in the limit where the momenta are taken real ... It is remarkable that three-particle amplitudes are totally fixed by Poincare symmetry; they carry all the essential information about the particle content and obvious symmetries of the physical theory. It is natural to "glue" these elementary building blocks together to generate more complicated objects we will call on-shell diagrams. ... Note that on-shell diagrams such as those of (2.13) are not Feynman diagrams! There are no "virtual" or "off-shell" internal particles involved: all the lines in these pictures are on-shell (meaning that their momenta are null). Each internal line represents a sum over all possible particles which can be exchanged in the theory, with (often complex) momenta constrained by momentum conservation at each vertex, integrating over the on-shell phase space of each. ... In general, we have some number of integration variables corresponding to the (on-shell) internal momenta, and delta-functions enforcing momentum-conservation at each vertex. We may have just enough delta-functions to fully localize all the internal momenta; in this case the on-shell diagram becomes an ordinary function of the external data, which has historically been called a "leading singularity" in the literature [11, 69]. If there are more delta-functions than necessary to fix the internal momenta, the left-over constraints will impose conditions on the external momenta; such an
    object is said to be a singularity or to have "singular support". If there are fewer delta-functions than necessary to fix the internal momenta, there will be some degrees of freedom left over; the on-shell diagram then leaves us with some differential form on these extra degrees of freedom which we are free to integrate over any contour we please. But there is no fundamental distinction between these cases; and so we will generally think of an on-shell diagram as providing us with an "on-shell form", a differential form defined on the space of external and internal on-shell momenta."
  • Jack Sarfatti "Putting all the 3-particle amplitudes in an on-shell diagram together gives rise to a (typically high-dimensional) differential form on the space of external and internal momenta. The on-shell form associated with a diagram is then obtained by taking residues of this high-dimensional form on the support of all the delta-function constraints (thought of, holomorphically, as representing poles which enforce their arguments to vanish); this produces a lower-dimensional form defined on the support of any remaining delta-functions.
    Individual Feynman diagrams are not gauge invariant and thus don't have any physical meaning. By contrast, each on-shell diagram is physically meaningful and corresponds to some particular on-shell scattering process."
  • Jack Sarfatti On Sep 18, 2013, at 6:14 PM, JACK SARFATTI wrote:

    On Sep 18, 2013, at 6:08 PM, Saul-Paul and Mary-Minn Sirag <sirag@mindspring.com> wrote:

    Paul et alia:

    This is an E-mail I sent to Nick (and many others) in Sept. 2010.

    I mentioned the Top Quark mass prediction by Polchinski et al (1983) in a paragraph that read:

    There were many comments to this Polchinski review: and they are contained the blog post. Most interesting to me was #23 by Tony Smith, who pointed out that in 1983 Polchinski, Wise, and Alvarez-Gaume (Nuc. Phys. B221, 495-523) in the context of supergravity theory, actually predicted a surprisingly large mass for the top quark (125 - 195 GeV). This was when the top quark was believed to have a mass of around 40 GeV (as was indeed claimed as an experimental measurement by Carlo Rubbia at CERN). When the top quark was finally detected at Fermilab in 1995, it was (very surprisingly) measured to be within the range predicted in 1983 by Polchinski et al.! (See also p. 369 of Polchinski's book, "String Theory" (Vol. II), 1998, Cambridge.)

    All for now;-)
    Begin forwarded message:

    From: Saul-Paul and Mary-Minn Sirag <sirag@mindspring.com>
    Date: September 21, 2010 10:16:35 AM PDT
    To: nick herbert <quanta@cruzio.com>

    Subject: Re: String Theory

    Hi Nick,

    The original string theory of 1970 was an attempt to model the miraculous Veneziano amplitude function (1968), which obeyed 6 of the 7 principles of the S-Matrix approach to the strong nuclear force. This was in the context of the fierce struggle between the S-Matrix school, led by Chew (at Berkeley), and the Quark school, led by Gell-Mann (at Caltech). By 1973, when asymptotic freedom was discovered by Wilczek, Gross, and Politzer, establishing quantum chromodynamics (the QCD of the Quark school), the string model went into nearly total eclipse. However, QCD has not been able to calculate many mass spectra. These are mainly experimental measurements using QCD as background theory.

    There were three big embarrassments for the 1970 string model:
    1. It only worked in 26 dimensions.
    2. It could only model bosons, and therefore only mesons (as a string vibrating between charges at its ends). So fermions such as protons and neutrons could not be accounted for.
    3. It contained a massless spin-2 particle as a necessary ingredient.

    By 1971, supersymmetry was invoked in order to bring fermions into the string picture. This superstring theory had the effect of cutting the space-time dimensionality to 10 -- with hopes of reduction to 4-d. 

    In 1974, after QCD took over strong-force modeling, a couple of physicists woke up and said, "We have known for a long time what a massless spin-2 particle is: the graviton. So superstring theory should be regarded primarily as a quantum gravity theory."

    Hardly anyone paid attention to this superstring-gravity idea; but several physicists developed point-particle supersymmetry models of quantum gravity. This is called supergravity, and the main breakthrough calculations occurred in 1976. 
    [Nick, you and I heard about this from your friend Heinz Pagels even before the papers came out.]

    Supergravity can be modeled in dimensions ranging up to 11-d supergravity. Note: 11-d supergravity entails the symmetries of E7 Lie algebra. I got involved with the structure of E7 by way of the McKay correspondence, which makes a profound duality between the Octahedral Double group and E7. 
    See details in my 1993 paper at http://williamjames.com/Theory/Consciousness.pdf

    In 1984 John Schwarz and Michael Green, through anomaly cancellation (via certain gauge groups), showed that superstring theory is a viable quantum gravity theory. This revived string theory from its 10-year eclipse! Soon there were 5 competing superstring theories. The heterotic string theory created by the Princeton String Quartet was especially favored since its gauge group, E8 x E8, provided a way to embed all the gauge theories of the standard model of particle physics and also the grand unified theory gauge group SU(5).

    Meanwhile several British physicists were toying with the possibility of generalizing the string (1-d) to membranes (2-d to 9-d). Also various dualities were discovered between the 5 competing string theories.

    In 1995 Ed Witten put all these ideas and some of his own together to propose an overarching M-theory as a profound unification of all the string theories. In effect we then would have one string theory. A further consequence of M-theory is that its lower energy limit theory is the old (1970s) 11-d supergravity (a point particle theory). Of course, given my interest in E7 (entailed in 11-d supergravity), I am very happy with this M-theory development.

    Incidentally, in July of this year, Ed Witten was given the Newton prize in London. His hour long lecture (for a general audience) was videotaped and is available on the Web at: 


    The miraculous twists and turns of the "Long Strange Trip" of string theory are well described by Witten in this lecture.
  • Jack Sarfatti On the general issue of string theory predictions, I suggest that you should read Joe Polchinski's review of the books by Lee Smolin and Peter Woit. This review was put on the web as a blog. I will attach it here:

    < JoePolchinski-ReviewsSmolin&Woit-7Dec06.pdf>

    There were many comments to this Polchinski review: and they are contained the blog post. Most interesting to me was #23 by Tony Smith, who pointed out that in 1983 Polchinski, Wise, and Alvarez-Gaume (Nuc. Phys. B221, 495-523) in the context of supergravity theory, actually predicted a surprisingly large mass for the top quark (125 - 195 GeV). This was when the top quark was believed to have a mass of around 40 GeV (as was indeed claimed as an experimental measurement by Carlo Rubbia at CERN). When the top quark was finally detected at Fermilab in 1995, it was (very surprisingly) measured to be within the range predicted in 1983 by Polchinski et al.! (See also p. 369 of Polchinski's book, "String Theory" (Vol. II), 1998, Cambridge.)

    Witten's lecture emphasizes that the grand vistas of mathematics and physics afforded by the many twists and turns of string theory are far from the complete picture -- which is still opaque to us.

    I believe that string theory should be viewed as a structure within the vastness of the full set of A-D-E Coxeter graph classifications and correspondences (which by now entail more than 20 mathematical structures of great importance in physics). Accordingly, I have been advocating what I call ADEX-theory, which I define as the study and applications of all the A-D-E correspondences. See my short paper, "ADEX dimensions", which I will attach here:
    <SPS-ADEXdimensions-20Jun08 .pdf>

    As I have said before: On the extra dimensions of space-time (forced on us by string theory): these have been an embarrassment to most physicists; but to those open to the paranormal, these extra dimensions should be viewed as an embarrassment of riches.

    All for now;)

  • Jack Sarfatti On Sep 20, 2010, at 10:30 PM, nick herbert wrote:

    The original goal of string theory
    was to explain the various masses
    of elementary particles as excitations 
    of a string oscillating in many dimensions.

    How many mass spectra
    have been published by battalions
    of string theorists?

    On Sep 20, 2010, at 10:04 PM, JACK SARFATTI wrote:

    There is no pathetic lack of data. I gave you the references on Libet, Radin, Beirman, PEAR, Global Consciousness Project - all done on a shoestring. Of course, more effort is needed and much of what you suggest below is good. If all the $ wasted on the salaries of string theorists in major universities were put into consciousness research, then, perhaps there would be more progress? 

    Also, remember Einstein did General Relativity by pure reasoning. In a similar way, that signal nonlocality is needed to understand consciousness is quite obvious - to my mind at least, if not to yours. However, I am quite happy that you do not agree with Josephson and me on this. History will decide.

    Follow the $ Nick. How much has been spent on consciousness research compared to other areas in mainstream science? Also consider the fact, that physicists who dare to come out of the closet on this are smeared as lunatics and crackpots even if they have a Nobel Prize in physics. 

    While you are at it Nick, where are your NINE TESTS for string theory?
  • Jack Sarfatti On Sep 20, 2010, at 9:50 PM, nick herbert wrote:


    (inspired by a recent paper on consciousness and quantum physics by Yu Shan & Danko Nikolic)

    "Physicists must crawl before they can walk."--Jack Sarfatti, excusing the pathetic lack of experimental tests of vague quantum consciousness hypotheses proposed by him and a few others.

    Two choices are involved in every quantum measurement--the Heisenberg choice and the Dirac choice. The Heisenberg choice is the PHYSICAL choice (made by man or nature) to deploy a macroscopic object in a particular way. It is called the Heisenberg choice because deploying an object in a particular way (to select out which path info) completely precludes deploying it in a complementary way (to select out which interference sub-class info). The Dirac choice is the irreversible quantum jump that occurs somewhere in the apparatus and brings the experiment to a close. This choice is considered by most physicists to be utterly random--an act of God--and since Dirac was the closest thing to deity on this planet, his name is attached to this choice.--Nick Herbert

    The motto of the Royal Society of London, "Nullius in Verba", loosely translated means "Talk is cheap." Theorists are honored not for their colorful phrases, personalities or press releases but for their successful predictions of natural phenomena, such as Einstein's bold prediction of the three classic consequences of General Relativity. The field of quantum consciousness is still awaiting a Big Mind imaginative enough to tackle the Hard Problem of consciousness and deliver big results. Despite much recent attention to the problem of consciousness by physicists, in my opinion we are not even at the crawling stage of consciousness research, let alone walking upright or preparing to ascend some difficult peaks. By the tough standards we have learned to expect from conventional physics, consciousness physicists are still on their knees, praying for inspiration for the right direction to crawl. The biggest breakthrough in consciousness in the past century was not an idea but the invention of LSD, which permits not only monks and meditators a glimpse of the uncharted realms of inner space, but ordinary people as well. In the words of Terence McKenna, now that even bad people can see God, what does this gift impel us to do?

    As a big problem, consciousness calls for correspondingly big standards for success. I know not what might satisfy others but, off the top of my head, I propose Nick's Nine Tests for a Real Theory of Consciousness. These tests do not explain already existing phenomena but call for brand-new 
    experiences that might be expected to follow upon a successful (presumably quantum) explanation of the origin and nature of subjective experience.

    1. A Heisenberg choice (actual deployment of well-specified physical matter that selects which quantum possibilities are viable) that permits Nick Herbert and his friends to experience a brand-new color.

    2. A Heisenberg choice that mimics the effects of LSD. Physics is a more fundamental science than chemistry and should be able to prove it by fundamentally altering human experience in direct ways that don't involve chemistry.

    3. A Heisenberg choice that reliably magnifies the power of Puthoff & Targ's Remote Viewing by orders of magnitude in the tradition of physics-based microscopes and telescopes which immensely increased the powers of our physical vision.

    4. A Heisenberg choice that magnifies the power of the Radin Effect (also known as Autonomic Presentiment) by orders of magnitude--a development 
    that would expand human perception (for short distances) backwards in time.

    5. A Heisenberg choice that would produce a purely quantum anesthesia. If consciousness is really a quantum effect, we should be able to quench it by 
    purely quantum-physical means. Professor Hameroff--set your phaser to stun.

    6. A Heisenberg choice that would directly link two human minds, verifying James T. Culbertson's Conjecture that the separation that human minds ordinarily experience is a mere biological accident.

    7. A Heisenberg choice that would link human minds to the many minds in nature, realizing the Quantum Tantra dream of a radically new and more intimate kind of Quantum Measurement.

    8. A Heisenberg choice that puts human minds in touch with the Mind of God--in one dramatic stroke physics could drive all churches out of business,
    plus all the atheists as well.

    9. Show me something brand new. A real theory of consciousness will necessarily be full of surprises.

    The reason it's called the Hard Problem of Consciousness is the same reason that the North Face (of the Eiger) is called a difficult ascent. If your theory 
    can rise above "Talk is cheap" and manages to pass even one of Nick's Nine Tests, you will be honored forever as one of this world's intellectual giants.
  • Jack Sarfatti On Sep 18, 2010, at 12:47 PM, JACK SARFATTI wrote:

    Valentini does propose an experiment on the CMB to look for imprints of primordial signal nonlocality.

    The presponse experiments Libet --> Radin ---> Bierman I say are simply explained by signal nonlocality - indeed Brian Josephson independently suggested the role of signal nonlocality in living matter in his paper with Pallikari.

    I have made a very definite Popper falsifiable prediction about dark matter - no real dark matter particles whizzing through space. Looking for dark matter particles is like looking for the motion of the Earth through the aether using a Michelson-Morley interferometer.

    I predicted supersolid helium in 1969 


    Also Ray Chiao will confirm that my 1967 paper on self-trapped laser filaments was helpful to him on his early experiments with them.

    Finally, I point out that the dark energy must be a hologram screen effect not from our past but from our future.

    On Sep 18, 2010, at 11:11 AM, JACK SARFATTI wrote:

    I agree with Nick's argument below. However, it was not Fred Alan Wolf who first suggested that consciousness collapses the wave function; I think it started with Fritz London, John von Neumann, and Eugene Wigner, on to Henry Stapp & Roger Penrose (with his "orch" modification, which is vague, but I think is equivalent to "signal nonlocality" violating "no-cloning", "unitarity", et al., i.e. P =/= |Psi|^2 aka "sub-quantal non-equilibrium").
    David Deutsch also argues no consciousness in orthodox quantum theory - Penrose's of course is not "orthodox".
    However, none of these people, in my opinion, have asked the right question in regard to the physical nature of consciousness. For that we must go to P.W. Anderson's "More is different" aka spontaneous symmetry breakdown of the ground state e.g. Vitiello's mind model.

    The conscious mind field is a macro-quantum coherent ground state order parameter of the living body in which some kind of either quasi-particle or collective mode (poles of single-particle & pair propagators respectively) of an underlying dynamics, e.g. ions, dipoles in microtubules, et al., are in effect Bose-Einstein condensed, i.e. Penrose-Onsager ODLRO (macroscopic eigenvalues of low order density matrices).

    The effective c-number field order parameter dynamics is non-unitary, nonlinear (Landau-Ginzburg) with signal nonlocality - living systems are not in thermal equilibrium - the effective low energy Bohm macro-quantum coherent potential is local in ordinary 3D space though it has the nonlocal influence from the boundary conditions discussed e.g. at beginning of Bohm & Hiley's Undivided Universe for the single particle problem - intensity independence, context dependence etc.

    On Sep 18, 2010, at 10:49 AM, nick herbert wrote:

    Hi Danko--

    The conventional wisdom asserts that
    all the various quantum realities
    are non-testable--each gives 
    the same experimental result.

    This may or may not be true
    as imaginative experiments 
    of the type you are looking 
    for might show.

    For a time I thought that the
    Bedford & Wang thought experiment
    (a crude variation on your own proposal)
    might do the trick (see Consciousness Post)
    but after much thought and discussion 
    with other reality fans I concluded that 
    ordinary quantum calculations showed
    that no test for conscious collapse was possible
    with the B&W setup and its variants.

    Successful variants of your experiment might exist.
    Someone on this list might be inspired to think of one.

    Your physical setup is a very clever variation on the double-slit 
    experiment with "which-path" observations cleverly and naturally built-in.
    And more important, it is a real experiment that can easily be done.

    One sad fact about the quantum/consciousness connection is that 
    (like the quantum/gravity connection) despite tons of groundless 
    speculation and opinion (see the work of Fred Alan Wolf) 
    there exists not a single experiment that successfully connects consciousness 
    with quantum mechanics. (pace Rosenblum & Kuttner and Dean Radin). Perhaps 
    this situation will change for the better due to discussions triggered by your recent paper.

    Good luck in your work
    Nick Herbert 

    On Sep 18, 2010, at 4:59 AM, Danko Nikolic wrote:

    My criticism is that you choose a system such that quantum mechanics predicts no interference no matter 
    what you do to the entangled photons, even including leaving them forever unobserved, so that your 
    consciousness postulate is sure to be falsified. 

    Yes, but is there another system for which this does not hold? Can one make it different such that the hypothesis is not sure to be falsified? We could not think of another experimental setup that would not produce the same outcome.

    I had discussions with several experimental quantum physicists from Vienna--hoping to design and conduct an experiment. We could not think of a setup. If we could, we would probably already have run it.

    So, perhaps our point is that an experiment of your likings (and ours), to the best of our knowledge, CANNOT BE DESIGNED.

    If someone can be more creative and prove us wrong, great! Let us go then and run the experiment. I would gladly be a part of it.

    With best regards,

    Danko Nikolic
Error on p. 33 of Jim Woodward's Making Starships book (Springer-Verlag)
  • Jack Sarfatti Our future universe is dominated by the de Sitter metric.
    Using Jim's notation:

    g00 = 1 + phi/c^2

    phi = - 2r^2/A

    A is the area of our future horizon

    we are located at r = 0


    Jim's equation:

    |phi|/c^2 = 1, i.e. g00 = 0, is the classical equation for an infinitely thin cosmic future event horizon with advanced, back-from-the-future Wheeler-Feynman Hawking radiation whose peak black-body wavelength is ~ A^1/2. This contradicts his "vector" approximation to Einstein's tensor GR, which requires

    phi/c^2 << 1
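A minimal numeric sketch of the point (hypothetical units with c = 1 and A chosen so the horizon sits at r = 1; these are illustrative values, not Jim's numbers):

```python
# Metric component in the notation above: g00 = 1 + phi/c^2 with phi = -2 r^2 / A
# (A = area of the future horizon; units with c = 1)
def g00(r, A):
    return 1.0 - 2.0 * r * r / A

A = 2.0                      # puts the horizon g00 = 0 at r = sqrt(A/2) = 1
print(g00(0.0, A))           # 1.0 at the observer, r = 0
print(g00(1.0, A))           # 0.0 at the future event horizon

# The weak-field ("vector") approximation needs |phi|/c^2 << 1,
# which holds near r = 0 but fails completely as r -> horizon:
for r in (0.01, 0.5, 1.0):
    print(r, abs(-2.0 * r * r / A))
```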
    Begin forwarded message:

    From: Jonathan Post <jvospost3@gmail.com>
    Subject: Re: Serious conceptual error Jim makes on p. 33 of his book - in my opinion
    Date: September 12, 2013 7:07:33 PM PDT

    In any case, the universe is NOT flat:
    Evidence left over after the Big Bang may suggest a saddle-shaped universe.
    Originally published: 
    Sep 11 2013 - 5:00pm
    Charles Q. Choi, ISNS Contributor
(ISNS) -- The shape of the universe may be dramatically different than previously thought, a group of researchers now says.

On Sep 8, 2013, at 11:05 AM, JACK SARFATTI <jacksarfatti@icloud.com> wrote:

"Radin draws attention to the similarities between psi phenomena, where events separated in space and time appear to have a connection which can't be explained by known means of communication, and the entanglement of particles resulting in correlations measured at space-like intervals in quantum mechanics, and speculates that there may be a kind of macroscopic form of entanglement in which the mind is able to perceive information in a shared consciousness field (for lack of a better term) as well as through the senses."
I distinguish two levels of entanglement, "weak" and "strong". The former is consistent with the "no-signal" arguments of mainstream "orthodox" quantum theory. A small minority of "fringe physicists" (including me) think these arguments are circular. With weak entanglement, a third party Eve can in hindsight see patterns of parallel behavior in Alice and Bob, although neither Alice nor Bob is directly aware of what the other is thinking, etc. With strong entanglement (aka "signal nonlocality", A. Valentini) we have what most people think of as telepathy and precognition. Alice knows directly and instantly what Bob is thinking. Indeed, Alice may know ahead of time what Bob will think, but hasn't yet.

On Sep 8, 2013, at 10:19 AM, JACK SARFATTI <jacksarfatti@icloud.com> wrote:


"Parapsychology is small science.  There are only about 50 people in the entire world doing serious laboratory experiments in the field today, and the entire funding for parapsychology research in its first 130 years is about what present-day cancer research expends in about 43 seconds.  Some may say “What has parapsychology produced in all that time?”, but then one might ask the same of much cancer research.

Of the fifty or so people actively involved in parapsychology research, I have had the privilege to meet at least eight, including the author of the work reviewed infra, and I have found them all to be hard-headed scientists who approach the curious phenomena they study as carefully as physical scientists in any other field.  Their grasp of statistical methods is often much better than their more respectable peers in the mainstream publishing papers in the soft sciences.  Publications in parapsychology routinely use double-blind and randomisation procedures which are the exception in clinical trials of drugs.

The effect sizes in parapsychology experiments are small, but they are larger, and their probability of being due to chance is smaller, than those of the medical experiments which endorsed prescribing aspirin to prevent heart attacks and banning silicone breast implants.  What is interesting is that the effect size in parapsychology experiments of all kinds appears to converge upon a level which, while small, is so far above chance as to indicate “something is going on”.

Before you reject this out of hand, I'd encourage you to read the book or view the videos linked below.  Many people who do this research started out to dismiss such nonsense and were enthralled when they discovered there appeared to be something there."

see also

On Sep 7, 2013, at 8:27 PM, nick herbert <quanta@cruzio.com> wrote:


An exploration of mind merge
using physics not chemistry
in less than 1000 words.


The theory of relativity deals with the geometric
structure of a four-dimensional spacetime. Quantum mechanics
describes properties of matter. Combining these
two theoretical edifices is a difficult proposition. For example,
there is no way of defining a relativistic proper
time for a quantum system which is spread all over
space. A proper time can in principle be defined for a
massive apparatus (‘‘observer’’) whose Compton wavelength
is so small that its center of mass has classical
coordinates and follows a continuous world line. However,
when there is more than one apparatus, there is no
role for the private proper times that might be attached
to the observers’ world lines. Therefore a physical situation
involving several observers in relative motion cannot
be described by a wave function with a relativistic
transformation law (Aharonov and Albert, 1981; Peres,
1995, and references therein). This should not be surprising
because a wave function is not a physical object.
It is only a tool for computing the probabilities of objective
macroscopic events.
Einstein’s [special] principle of relativity asserts that there are
no privileged inertial frames. 
[Comment #3: Einstein's general principle of relativity is that there are no privileged local accelerating frames (AKA LNIFs). In addition, Einstein's equivalence principle is that one can always find a local inertial frame (LIF) coincident with an LNIF (over a small enough region of 4D space-time) in which, to a good approximation, Newton's 1/r^2 force is negligible ("Einstein's happiest thought"). Therefore, Newton's universal "gravity force" is a purely inertial, fictitious pseudo-force, exactly like the Coriolis, centrifugal and Euler forces that are artifacts of the proper acceleration of the detector, having no real effect on the test particle being measured by the detector. The latter assumes no rigid constraint between detector and test particle. For example, a test particle clamped at radius r on a uniformly slowly rotating disk will experience a real EM force of constraint equal to m ω × (ω × r).]
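As an illustrative aside (the numbers below are my own, not from the correspondence), the rotating-disk example can be checked numerically: in the co-rotating frame the clamped particle is at rest, so the Coriolis term vanishes and the real force of constraint must balance the centrifugal pseudo-force m ω × (ω × r).

```python
import numpy as np

# Sketch of the fictitious forces seen in a uniformly rotating frame, for a
# test particle clamped at radius r.  Values are illustrative.

m = 1.0                          # kg, test particle mass (assumed)
w = np.array([0.0, 0.0, 2.0])    # rad/s, rotation about z (assumed)
r = np.array([0.5, 0.0, 0.0])    # m, particle clamped at the rim (assumed)

# Centrifugal pseudo-force -m w x (w x r): points outward, magnitude m|w|^2|r|
centrifugal = -m * np.cross(w, np.cross(w, r))
# The real (e.g. EM) force of constraint must cancel it, pointing inward
constraint = -centrifugal

print(centrifugal)   # [2. 0. 0.]  ->  m * |w|^2 * |r| = 1 * 4 * 0.5
print(constraint)    # [-2. 0. 0.]
```

The magnitude reproduces the familiar m ω² r of circular motion.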
This does not imply the
necessity or even the possibility of using manifestly symmetric
four-dimensional notations. This is not a peculiarity
of relativistic quantum mechanics. Likewise, in classical
canonical theories, time has a special role in the
equations of motion.
The relativity principle is extraordinarily restrictive.
For example, in ordinary classical mechanics with a finite
number of degrees of freedom, the requirement that
the canonical coordinates have the meaning of positions,
so that particle trajectories q(t) transform like
four-dimensional world lines, implies that these lines
consist of straight segments. Long-range interactions are
forbidden; there can be only contact interactions between
point particles (Currie, Jordan, and Sudarshan,
1963; Leutwyler, 1965). Nontrivial relativistic dynamics
requires an infinite number of degrees of freedom,
which are labeled by the spacetime coordinates (this is
called a field theory).
Combining relativity and quantum theory is not only a
difficult technical question on how to formulate dynamical
laws. The ontologies of these theories are radically
different. Classical theory asserts that fields, velocities,
etc., transform in a definite way and that the equations
of motion of particles and fields behave covariantly. …
For example, if the expression for the Lorentz force is written
...in one frame, the same expression is valid
in any other frame. These symbols …. have objective
values. They represent entities that really exist, according
to the theory. On the other hand, wave functions
are not defined in spacetime, but in a multidimensional
Hilbert space. They do not transform covariantly when
there are interventions by external agents, as will be
seen in Sec. III. Only the classical parameters attached
to each intervention transform covariantly. Yet, in spite
of the noncovariance of ρ, the final results of the calculations
(the probabilities of specified sets of events) must
be Lorentz invariant.
As a simple example, consider our two observers, conventionally
called Alice and Bob,4 holding a pair of spin-1/2
particles in a singlet state. Alice measures σ_z and finds
+1, say. This tells her what the state of Bob’s particle is,
namely, the probabilities that Bob would obtain +1 or -1 if he
measures (or has measured, or will measure) σ along
any direction he chooses. This is purely counterfactual
information: nothing changes at Bob’s location until he
performs the experiment himself, or receives a message
from Alice telling him the result that she found. In particular,
no experiment performed by Bob can tell him
whether Alice has measured (or will measure) her half
of the singlet.
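The singlet example above is easy to sketch numerically (a minimal illustration; the analyzer angle is my arbitrary choice): Alice's result fixes only her conditional probabilities for Bob, while Bob's own marginal statistics stay 50/50 whatever she does.

```python
import numpy as np

def spin_proj(theta, outcome):
    """Projector onto spin `outcome` (+1/-1) along a direction at angle
    theta from z, in the x-z plane."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    n_op = np.sin(theta) * sx + np.cos(theta) * sz
    return (np.eye(2) + outcome * n_op) / 2

# Singlet state (|01> - |10>)/sqrt(2) in the z basis
psi = np.zeros(4, dtype=complex)
psi[1], psi[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)

theta = np.pi / 3   # Bob's analyzer angle (arbitrary)

# Joint probability: Alice gets +1 along z AND Bob gets +1 along theta
P = np.kron(spin_proj(0.0, +1), spin_proj(theta, +1))
p_joint = np.real(psi.conj() @ P @ psi)

# Bob's marginal probability, irrespective of anything Alice does
PB = np.kron(np.eye(2), spin_proj(theta, +1))
p_bob = np.real(psi.conj() @ PB @ psi)

print(round(p_joint, 4))   # (1 - cos(theta))/4 = 0.125 for theta = pi/3
print(round(p_bob, 4))     # 0.5 for ANY theta: no signal in Bob's marginals
```

The conditional probability p_joint / 0.5 is exactly the "purely counterfactual information" Alice gains about Bob's outcomes.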
A seemingly paradoxical way of presenting these results
is to ask the following naive question. Suppose that
Alice finds that σ_z = +1 while Bob does nothing. When
does the state of Bob’s particle, far away, become the
one for which σ_z = -1 with certainty? Although this
question is meaningless, it may be given a definite answer:
Bob’s particle state changes instantaneously. In
which Lorentz frame is this instantaneous? In any
frame! Whatever frame is chosen for defining simultaneity,
the experimentally observable result is the same, as
can be shown in a formal way (Peres, 2000b). Einstein
himself was puzzled by what seemed to be the instantaneous
transmission of quantum information. In his autobiography,
he used the words ‘‘telepathically’’ and
‘‘spook’’ (Einstein, 1949). …
In the laboratory, any experiment
has to be repeated many times in order to infer a
law; in a theoretical discussion, we may imagine an infinite
number of replicas of our gedanken experiment, so
as to have a genuine statistical ensemble. Yet the validity
of the statistical nature of quantum theory is not restricted
to situations in which there are a large number
of similar systems. Statistical predictions do apply to
single events. When we are told that the probability of
precipitation tomorrow is 35%, there is only one tomorrow.
This tells us that it may be advisable to carry an
umbrella. Probability theory is simply the quantitative
formulation of how to make rational decisions in the
face of uncertainty (Fuchs and Peres, 2000). A lucid
analysis of how probabilistic concepts are incorporated
into physical theories is given by Emch and Liu (2002).
[My comment #4: Peres is correct, but there is no conflict with Bohm's ontological
interpretation here. The Born probability rule is not fundamental to quantum reality
in Bohm's view, but is a limiting case when the beables are in thermal equilibrium.]
Some trends in modern quantum information theory
may be traced to security problems in quantum communication.
A very early contribution was Wiesner’s seminal
paper ‘‘Conjugate Coding,’’ which was submitted
circa 1970 to IEEE Transactions on Information Theory
and promptly rejected because it was written in a jargon
incomprehensible to computer scientists (this was actually
a paper about physics, but it had been submitted to
a computer science journal). Wiesner’s article was finally
published (Wiesner, 1983) in the newsletter of ACM
SIGACT (Association for Computing Machinery, Special
Interest Group in Algorithms and Computation
Theory). That article tacitly assumed that exact duplication
of an unknown quantum state was impossible, well
before the no-cloning theorem (Dieks, 1982; Wootters
and Zurek, 1982) became common knowledge. Another
early article, ‘‘Unforgeable Subway Tokens’’ (Bennett
et al., 1983) also tacitly assumed the same.
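The no-cloning theorem tacitly assumed in those papers follows from unitarity preserving inner products. A minimal numeric sketch (the states are my arbitrary choices): a unitary that cloned both |a> and |b> would have to satisfy <a|b> = <a|b>^2, impossible for non-orthogonal, non-identical states.

```python
import numpy as np

a = np.array([1.0, 0.0])                   # |a> = |0>
b = np.array([1.0, 1.0]) / np.sqrt(2)      # |b> = |+>, non-orthogonal to |a>
blank = np.array([1.0, 0.0])               # blank register to copy into

before = np.kron(a, blank) @ np.kron(b, blank)   # <a|b> of the inputs
after = np.kron(a, a) @ np.kron(b, b)            # <a|b>^2 of the desired outputs

print(before)   # 0.7071...
print(after)    # 0.5
# A unitary must preserve inner products, so no such cloner exists:
assert not np.isclose(before, after)
```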
A. The ambivalent quantum observer
Quantum mechanics is used by theorists in two different
ways. It is a tool for computing accurate relationships
between physical constants, such as energy levels,
cross sections, transition rates, etc. These calculations
are technically difficult, but they are not controversial.
In addition to this, quantum mechanics also provides
statistical predictions for results of measurements performed
on physical systems that have been prepared in a
specified way. 
[My comment #5: No mention of Yakir Aharonov's intermediate present "weak measurements"
with both history past pre-selection and destiny future post-selection constraints. The latter in
Wheeler delayed choice mode would force the inference of real back-from-the-future retrocausality.
This would still be consistent with Abner Shimony's "passion at a distance," i.e. "signal locality"
in that the observer at the present weak measurement would not know what the future constraint 
actually will be. In contrast, with signal non locality (Sarfatti  1976 MIT Tech Review (Martin Gardner) & 
Antony Valentini (2002)) such spooky precognition would be possible as in Russell Targ's reports on 
CIA funded RV experiments at SRI in the mid 70's and 80's. 
This is, on the face of it, a gross violation of orthodox
quantum theory as laid out here in the Peres review paper.]
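For concreteness, a weak value in Aharonov's sense, <A>_weak = <f|A|i>/<f|i> with pre-selected ("history") |i> and post-selected ("destiny") |f>, can be computed directly. The states and operator below are my arbitrary textbook-style choices, not from Peres; with nearly orthogonal pre/post-selection the weak value of σ_z strays far outside its eigenvalue range [-1, +1].

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = 0.7   # close to pi/4, so <f|i> is small (assumed value)
i = np.array([np.cos(theta), np.sin(theta)], dtype=complex)   # pre-selection
f = np.array([1, -1], dtype=complex) / np.sqrt(2)             # post-selection

# Weak value <f|sz|i>/<f|i> = (cos t + sin t)/(cos t - sin t)
weak = (f.conj() @ sz @ i) / (f.conj() @ i)
print(weak.real)   # ~11.7, far outside the eigenvalue range [-1, +1]
```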
The quantum measuring process is the interface
of classical and quantum phenomena. The preparation
and measurement are performed by macroscopic
devices, and these are described in classical terms. The
necessity of using a classical terminology was emphasized
by Niels Bohr (1927) from the very early days of
quantum mechanics. Bohr’s insistence on a classical description
was very strict. He wrote (1949)
‘‘ . . . by the word ‘experiment’ we refer to a situation
where we can tell others what we have done and what
we have learned and that, therefore, the account of the
experimental arrangement and of the results of the observations
must be expressed in unambiguous language,
with suitable application of the terminology of
classical physics.’’
Note the words ‘‘we can tell.’’ Bohr was concerned
with information, in the broadest sense of this term. He
never said that there were classical systems or quantum
systems. There were physical systems, for which it was
appropriate to use the classical language or the quantum
language. There is no guarantee that either language
gives a perfect description, but in a well-designed experiment
it should be at least a good approximation.
Bohr’s approach divides the physical world into ‘‘endosystems’’
(Finkelstein, 1988), which are described by
quantum dynamics, and ‘‘exosystems’’ (such as measuring
apparatuses), which are not described by the dynamical
formalism of the endosystem under consideration.
A physical system is called ‘‘open’’ when parts of
the universe are excluded from its description. In different
Lorentz frames used by observers in relative motion,
different parts of the universe may be excluded. The
systems considered by these observers are then essentially
different, and no Lorentz transformation exists
that can relate them (Peres and Terno, 2002).
It is noteworthy that Bohr never described the measuring
process as a dynamical interaction between an
exophysical apparatus and the system under observation.
He was, of course, fully aware that measuring apparatuses
are made of the same kind of matter as everything
else, and they obey the same physical laws. It is
therefore tempting to use quantum theory in order to
investigate their behavior during a measurement. However,
if this is done, the quantized apparatus loses its
status as a measuring instrument. It becomes a mere intermediate
system in the measuring process, and there
must still be a final instrument that has a purely classical
description (Bohr, 1939).
Measurement was understood by Bohr as a primitive
notion. He could thereby elude questions which caused
considerable controversy among other authors. A
quantum-dynamical description of the measuring process
was first attempted by John von Neumann in his
treatise on the mathematical foundations of quantum
theory (1932). In the last section of that book, as in an
afterthought, von Neumann represented the apparatus
by a single degree of freedom, whose value was correlated
with that of the dynamical variable being measured.
Such an apparatus is not, in general, left in a definite
pure state, and it does not admit a classical
description. Therefore von Neumann introduced a second
apparatus which observes the first one, and possibly
a third apparatus, and so on, until there is a final measurement,
which is not described by quantum dynamics
and has a definite result (for which quantum mechanics
can give only statistical predictions). The essential point
that was suggested, but not proved by von Neumann, is
that the introduction of this sequence of apparatuses is
irrelevant: the final result is the same, irrespective of the
location of the ‘‘cut’’ between the classical and quantum domains.
These different approaches of Bohr and von Neumann
were reconciled by Hay and Peres (1998), who
8At this point, von Neumann also speculated that the final
step involves the consciousness of the observer—a bizarre
statement in a mathematically rigorous monograph (von Neumann, 1932).
B. The measuring process
Dirac (1947) wrote that ‘‘a measurement always
causes the system to jump into an eigenstate of the dynamical
variable being measured.’’ Here, we must be
careful: a quantum jump (also called a collapse) is something
that happens in our description of the system, not
to the system itself. Likewise, the time dependence of
the wave function does not represent the evolution of a
physical system. It only gives the evolution of probabilities
for the outcomes of potential experiments on that
system (Fuchs and Peres, 2000).
Let us examine more closely the measuring process.
First, we must refine the notion of measurement and
extend it to a more general one: an intervention. An
intervention is described by a set of parameters which
include the location of the intervention in spacetime, referred
to an arbitrary coordinate system. We also have
to specify the speed and orientation of the apparatus in
the coordinate system that we are using as well as various
other input parameters that control the apparatus,
such as the strength of a magnetic field or that of a rf
pulse used in the experiment. The input parameters are
determined by classical information received from past
interventions, or they may be chosen arbitrarily by the
observer who prepares that intervention or by a local
random device acting in lieu of the observer.
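The classical data describing an intervention could be collected in a simple record structure. The following is a purely hypothetical bookkeeping sketch (all names are mine, not from the review), showing how a later intervention's input parameters may be conditioned on an earlier classical outcome.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Intervention:
    spacetime_location: tuple                      # (t, x, y, z), arbitrary coordinates
    apparatus_motion: tuple                        # speed/orientation of the device
    settings: dict = field(default_factory=dict)   # e.g. magnetic-field or rf-pulse strength
    outcome: Optional[int] = None                  # filled in once a record exists

# A later intervention's settings depend on the earlier classical output:
first = Intervention((0.0, 0, 0, 0), (0, 0, 0), {"B_field": 0.1})
first.outcome = 1
second = Intervention((1.0, 0, 0, 0), (0, 0, 0),
                      {"B_field": 0.2 if first.outcome == 1 else 0.1})
print(second.settings["B_field"])   # 0.2
```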
[My comment #6: Peres, in my opinion, makes another mistake.
Future interventions will affect past weak measurements.

Back From the Future

A series of quantum experiments shows that measurements performed in the future can influence the present. Does that mean the universe has a destiny—and the laws of physics pull us inexorably toward our prewritten fate?

By Zeeya Merali|Thursday, August 26, 2010
http://discovermagazine.com/2010/apr/01-back-from-the-future#.UieOnhac5Hw ]
An intervention has two consequences. One is the acquisition
of information by means of an apparatus that
produces a record. This is the ‘‘measurement.’’ Its outcome,
which is in general unpredictable, is the output of
the intervention. The other consequence is a change of
the environment in which the quantum system will
evolve after completion of the intervention. For example,
the intervening apparatus may generate a new
Hamiltonian that depends on the recorded result. In particular,
classical signals may be emitted for controlling
the execution of further interventions. These signals are,
of course, limited to the velocity of light.
The experimental protocols that we consider all start
in the same way, with the same initial state ... , and the
first intervention is the same. However, later stages of
the experiment may involve different types of interventions,
possibly with different spacetime locations, depending
on the outcomes of the preceding events. Yet,
assuming that each intervention has only a finite number
of outcomes, there is for the entire experiment only a
finite number of possible records. (Here, the word
record means the complete list of outcomes that occurred
during the experiment. We do not want to use the
word history, which has acquired a different meaning in
the writings of some quantum theorists.)
Each one of these records has a definite probability in
the statistical ensemble. In the laboratory, experimenters
can observe its relative frequency among all the records
that were obtained; when the number of records tends
to infinity, this relative frequency is expected to tend to
the true probability. The aim of theory is to predict the
probability of each record, given the inputs of the various
interventions (both the inputs that are actually controlled
by the local experimenter and those determined
by the outputs of earlier interventions). Each record is
objective: everyone agrees on what happened (e.g.,
which detectors clicked). Therefore, everyone agrees on
what the various relative frequencies are, and the theoretical
probabilities are also the same for everyone.
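The relative-frequency picture above is easy to simulate. A minimal sketch with an assumed three-outcome record model (the outcome names and probabilities are mine): relative frequencies over many runs approach the theoretical probabilities.

```python
import random
from collections import Counter

random.seed(0)
probs = {"click_A": 0.5, "click_B": 0.3, "no_click": 0.2}   # assumed model
outcomes, weights = zip(*probs.items())

# Each run produces one record; collect many runs
records = random.choices(outcomes, weights=weights, k=200_000)
counts = Counter(records)
freq = {o: counts[o] / len(records) for o in outcomes}

for o in outcomes:
    print(o, round(freq[o], 3), "theory:", probs[o])
    assert abs(freq[o] - probs[o]) < 0.01   # relative frequency -> probability
```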
Interventions are localized in spacetime, but quantum
systems are pervasive. In each experiment, irrespective
of its history, there is only one quantum system, which
may consist of several particles or other subsystems, created
or annihilated at the various interventions. Note
that all these properties still hold if the measurement
outcome is the absence of a detector click. It does not
matter whether this is due to an imperfection of the detector
or to a probability less than 1 that a perfect detector
would be excited. The state of the quantum system
does not remain unchanged. It has to change to
respect unitarity. The mere presence of a detector that
could have been excited implies that there has been an
interaction between that detector and the quantum system.
Even if the detector has a finite probability of remaining
in its initial state, the quantum system correlated
to the latter acquires a different state (Dicke,
1981). The absence of a click, when there could have
been one, is also an event.
The measuring process involves not only the physical
system under study and a measuring apparatus (which
together form the composite system C) but also their
environment, which includes unspecified degrees of freedom
of the apparatus and the rest of the world. These
unknown degrees of freedom interact with the relevant
ones, but they are not under the control of the experimenter
and cannot be explicitly described. Our partial
ignorance is not a sign of weakness. It is fundamental. If
everything were known, acquisition of information
would be a meaningless concept.
A complete description of C involves both macroscopic
and microscopic variables. The difference between
them is that the environment can be considered as
adequately isolated from the microscopic degrees of
freedom for the duration of the experiment and is not
influenced by them, while the environment is not isolated
from the macroscopic degrees of freedom. For example,
if there is a macroscopic pointer, air molecules bounce
from it in a way that depends on the position of that
pointer. Even if we can neglect the Brownian motion of
a massive pointer, its influence on the environment leads
to the phenomenon of decoherence, which is inherent to
the measuring process.
An essential property of the composite system C,
which is necessary to produce a meaningful measurement,
is that its states form a finite number of orthogonal
subspaces which are distinguishable by the observer.
[My comment #7: This is not the case for Aharonov's weak measurements where
<A>_weak = <history|A|destiny>/<history|destiny>
Nor is it true when Alice's orthogonal micro-states are entangled with Bob's far away distinguishably non-orthogonal macro-quantum Glauber coherent and possibly squeezed states.
  1. Coherent states - Wikipedia, the free encyclopedia
     "In physics, in quantum mechanics, a coherent state is the specific quantum state of the quantum harmonic oscillator whose dynamics most closely resembles the ..."
  2. Review of Entangled Coherent States - arXiv (quant-ph), by B. C. Sanders, Dec 8, 2011
     "Abstract: We review entangled coherent state research since its first implicit use in 1967"
|Alice,Bob> = (1/2)[|Alice +1>|Bob alpha> + |Alice -1>|Bob beta>]
<Alice+1|Alice -1> = 0
<Bob alpha|Bob beta> =/= 0  
e.g. partial trace over Bob's states: |<Alice +1|Alice,Bob>|^2 = (1/2)[1 + |<Bob alpha|Bob beta>|^2] > 1/2
This is formally like a weak measurement where the usual Born probability rule breaks down.
Complete isolation from environmental decoherence is assumed here.
This is a clear violation of "passion at a distance" no-entanglement-signaling arguments based on axioms that are, in my opinion, empirically false.
"The statistics of Bob’s result are not affected at all by what Alice may simultaneously do somewhere else." (Peres)
is false.
While a logically correct formal proof is desirable in physics, Nature has ways of leapfrogging over its premises.
One can have constrained pre and post-selected conditional probabilities that are greater than 1, negative and even complex numbers. 
All of which correspond to observable effects in the laboratory - see Aephraim Steinberg's experimental papers
University of Toronto.]
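For reference, the coherent-state overlap entering the expressions above can be evaluated in closed form, <alpha|beta> = exp(-|alpha|^2/2 - |beta|^2/2 + conj(alpha)·beta), so distinct coherent states are never exactly orthogonal. A short numeric check (parameter values are my choice):

```python
import numpy as np
from math import factorial

def overlap(alpha, beta):
    """Closed-form Glauber coherent-state overlap <alpha|beta>."""
    return np.exp(-abs(alpha)**2 / 2 - abs(beta)**2 / 2 + np.conj(alpha) * beta)

alpha, beta = 1.0, -1.0
ov = overlap(alpha, beta)
print(abs(ov)**2)   # e^-4 ~ 0.0183: small but nonzero

# Cross-check in a truncated Fock basis:
# |alpha> = e^{-|alpha|^2/2} sum_n alpha^n/sqrt(n!) |n>
n = np.arange(60)
denom = np.sqrt(np.array([factorial(k) for k in n], dtype=float))
fock = lambda a: np.exp(-abs(a)**2 / 2) * a**n / denom
print(abs(np.vdot(fock(alpha), fock(beta)))**2)   # ~0.0183 again

# The quantity (1/2)[1 + |<Bob alpha|Bob beta>|^2] appearing in the comment:
print(0.5 * (1 + abs(ov)**2))   # ~0.509
```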
Each macroscopically distinguishable subspace corresponds
to one of the outcomes of the intervention and
defines a POVM element E_m, given explicitly by Eq. (8)
below. …
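Since Eq. (8) is not reproduced here, a standard textbook example of a POVM may help fix ideas (my choice, not from the review): the three-outcome qubit "trine" POVM, E_m = (2/3)|psi_m><psi_m| with the |psi_m> spaced 120 degrees apart.

```python
import numpy as np

def trine_state(k):
    """Three real unit vectors 120 degrees apart."""
    t = 2 * np.pi * k / 3
    return np.array([np.cos(t), np.sin(t)])

# Three POVM elements E_m = (2/3)|psi_m><psi_m|
E = [(2 / 3) * np.outer(trine_state(k), trine_state(k)) for k in range(3)]

# POVM conditions: each E_m is positive semidefinite, and sum_m E_m = I
assert all(np.all(np.linalg.eigvalsh(Em) >= -1e-12) for Em in E)
assert np.allclose(sum(E), np.eye(2))

# Outcome probabilities p_m = <psi|E_m|psi> are nonnegative and sum to 1
psi = np.array([0.6, 0.8])   # arbitrary pure state
p = [psi @ Em @ psi for Em in E]
print([round(x, 4) for x in p])   # three probabilities summing to 1
```

Unlike a projective measurement, the number of POVM outcomes here exceeds the Hilbert-space dimension.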
C. Decoherence
Up to now, quantum evolution is well defined and it is
in principle reversible. It would remain so if the environment
could be perfectly isolated from the macroscopic
degrees of freedom of the apparatus. This demand is of
course self-contradictory, since we have to read the result
of the measurement if we wish to make any use of it.
A detailed analysis of the interaction with the environment,
together with plausible hypotheses (Peres, 2000a),
shows that states of the environment that are correlated
with subspaces of C with different labels m can be treated
as if they were orthogonal. This is an excellent approximation
(physics is not an exact science, it is a science of
approximations). The resulting theoretical predictions
will almost always be correct, and if any rare small deviation
from them is ever observed, it will be considered
as a statistical quirk or an experimental error.
The density matrix of the quantum system is thus effectively
block diagonal, and all our statistical predictions
are identical to those obtained for an ordinary mixture
of (unnormalized) pure states
This process is called decoherence. Each subspace
m is stable under decoherence—it is their relative
phase that decoheres. From this moment on, the macroscopic
degrees of freedom of C have entered into the
classical domain. We can safely observe them and ‘‘lay
on them our grubby hands’’ (Caves, 1982). In particular,
they can be used to trigger amplification mechanisms
(the so-called detector clicks) for the convenience of the
experimenter.
Some authors claim that decoherence may provide a
solution of the ‘‘measurement problem,’’ with the particular
meaning that they attribute to that problem
(Zurek, 1991). Others dispute this point of view in their
comments on the above article (Zurek, 1993). A reassessment
of this issue and many important technical details
were recently published by Zurek (2002, 2003). Yet
decoherence has an essential role, as explained above. It
is essential that we distinguish decoherence, which results
from the disturbance of the environment by the
apparatus (and is a quantum effect), from noise, which
would result from the disturbance of the system or the
apparatus by the environment and would cause errors.
Noise is a mundane classical phenomenon, which we ignore
in this review.
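The decoherence mechanism described in this section can be illustrated with a toy partial-trace computation (a sketch, assuming a single two-state environment): as the environment states correlated with different pointer labels become orthogonal, the off-diagonal coherences of the reduced density matrix vanish, leaving the effectively block-diagonal mixture discussed above.

```python
import numpy as np

def reduced_rho(eps):
    """System density matrix for (|0>|E0> + |1>|E1>)/sqrt(2), <E0|E1> = eps."""
    E0 = np.array([1.0, 0.0])
    E1 = np.array([eps, np.sqrt(1 - eps ** 2)])
    psi = (np.kron([1.0, 0.0], E0) + np.kron([0.0, 1.0], E1)) / np.sqrt(2)
    rho = np.outer(psi, psi)                                 # system (x) environment
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # partial trace over env

print(reduced_rho(1.0))   # [[0.5 0.5],[0.5 0.5]]: full coherence survives
print(reduced_rho(0.0))   # [[0.5 0. ],[0.  0.5]]: decohered, diagonal mixture
```

The off-diagonal element is eps/2, so "treating nearly orthogonal environment states as orthogonal" is exactly the approximation that makes the density matrix block diagonal.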
E. The no-communication theorem
We now derive a sufficient condition that no instantaneous
information transfer can result from a distant intervention.
We shall show that the condition is
[A_m, B_n] = 0
where A_m and B_n are Kraus matrices for the observation
of outcomes m by Alice and n by Bob.
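A minimal numeric sketch of this condition (my construction, for a singlet pair): Alice's Kraus matrices act as A_m ⊗ I and Bob's as I ⊗ B_n, so they commute, and Bob's reduced state is unchanged by whichever observable Alice chooses to measure.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Kraus matrices on different subsystems commute:
A0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))   # A_m (x) I, Alice side
B0 = np.kron(np.eye(2), np.diag([0.0, 1.0]))   # I (x) B_n, Bob side
assert np.allclose(A0 @ B0, B0 @ A0)           # [A_m, B_n] = 0

psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # singlet
rho = np.outer(psi, psi.conj())

def bob_after_alice(op):
    """Bob's reduced state after Alice measures `op` (outcome unread by Bob)."""
    _, vecs = np.linalg.eigh(op)
    out = np.zeros((4, 4), dtype=complex)
    for k in range(2):
        A = np.kron(np.outer(vecs[:, k], vecs[:, k].conj()), np.eye(2))
        out += A @ rho @ A.conj().T
    return out.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)   # trace out Alice

print(np.round(bob_after_alice(sz).real, 3))   # I/2
print(np.round(bob_after_alice(sx).real, 3))   # I/2 again: no signal reaches Bob
```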
[My comment #8: "The most beautiful theory is murdered by an ugly fact." - Feynman
e.g. Libet-Radin-Bierman presponse in living brain data
SRI CIA vetted reports of remote viewing by living brains.
  1. CIA-Initiated Remote Viewing At Stanford Research Institute
     "As if to add insult to injury, he then went on to 'remote view' the interior of the apparatus, .... Figure 6 - Left to right: Christopher Green, Pat Price, and Hal Puthoff."
  2. Harold E. Puthoff - Wikipedia, the free encyclopedia
     "Puthoff, Hal, Success Story, Scientology Advanced Org Los Angeles (AOLA) special... H. E. Puthoff, CIA-Initiated Remote Viewing At Stanford Research Institute, ..."
  3. Remote viewing - Wikipedia, the free encyclopedia
     "Among some of the ideas that Puthoff supported regarding remote viewing was the ... by Russell Targ and Hal Puthoff at Stanford Research Institute in the 1970s ..."
  4. Dr. Harold Puthoff on Remote Viewing - YouTube
     Apr 28, 2011 - Uploaded by corazondelsur. "Dr. Hal Puthoff is considered the father of the US government's Remote Viewing program, which reportedly ..."
  5. Remoteviewed.com - Hal Puthoff
     "Dr. Harold E. Puthoff is Director of the Institute for Advanced Studies at Austin. A theoretical and experimental physicist specializing in fundamental ..."]
On Sep 4, 2013, at 9:06 AM, JACK SARFATTI <adastra1@icloud.com> wrote:
Peres here is only talking about von Neumann's strong measurements, not
Aharonov's weak measurements.

Standard textbooks on quantum mechanics
tell you that observable quantities are represented by
Hermitian operators, that their possible values are the
eigenvalues of these operators, and that the probability
of detecting eigenvalue a, corresponding to eigenvector
|a>, is |<a|psi>|^2, where |psi> is the (pure) state of the
quantum system that is observed. With a bit more sophistication
to include mixed states, the probability can
be written in a general way as <a|rho|a> …
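The two forms of the Born rule just quoted agree for pure states; a quick numeric check (the state is my arbitrary choice):

```python
import numpy as np

psi = np.array([np.sqrt(0.2), np.sqrt(0.8)], dtype=complex)   # arbitrary pure state
basis = np.eye(2)                                             # eigenvectors |a>

# Pure-state form: |<a|psi>|^2
p_pure = [abs(basis[:, k].conj() @ psi) ** 2 for k in range(2)]

# Density-matrix form: <a|rho|a> with rho = |psi><psi|
rho = np.outer(psi, psi.conj())
p_rho = [np.real(basis[:, k].conj() @ rho @ basis[:, k]) for k in range(2)]

print(np.round(p_pure, 3))   # [0.2 0.8]
print(np.round(p_rho, 3))    # identical, as it must be for a pure state
```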
This is nice and neat, but it does not describe what
happens in real life. Quantum phenomena do not occur
in Hilbert space; they occur in a laboratory. If you visit a
real laboratory, you will never find Hermitian operators
there. All you can see are emitters (lasers, ion guns, synchrotrons,
and the like) and appropriate detectors. In
the latter, the time required for the irreversible act of
amplification (the formation of a microscopic bubble in
a bubble chamber, or the initial stage of an electric discharge)
is extremely brief, typically of the order of an
atomic radius divided by the velocity of light. Once irreversibility
has set in, the rest of the amplification process
is essentially classical. It is noteworthy that the time and
space needed for initiating the irreversible processes are
incomparably smaller than the macroscopic resolution
of the detecting equipment.
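The quoted timescale is simple order-of-magnitude arithmetic (round values assumed):

```python
# An atomic radius divided by the speed of light.
atomic_radius = 1e-10   # m, typical atomic size (assumed round number)
c = 3e8                 # m/s, speed of light
t = atomic_radius / c
print(t)                # ~3.3e-19 s, far below any macroscopic detector resolution
```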
The experimenter controls the emission process and
observes detection events. The theorist’s problem is to
predict the probability of response of this or that detector,
for a given emission procedure. It often happens
that the preparation is unknown to the experimenter,
and then the theory can be used for discriminating between
different preparation hypotheses, once the detection
outcomes are known.
Many physicists, perhaps a majority, have an intuitive,
realistic worldview and consider a quantum state as a
physical entity. Its value may not be known, but in principle
the quantum state of a physical system would be
well defined. However, there is no experimental evidence
whatsoever to support this naive belief. On the
contrary, if this view is taken seriously, it may lead to
bizarre consequences, called ‘‘quantum paradoxes.’’
These so-called paradoxes originate solely from an incorrect
interpretation of quantum theory, which is thoroughly
pragmatic and, when correctly used, never yields
two contradictory answers to a well-posed question. It is
only the misuse of quantum concepts, guided by a pseudorealistic
philosophy, that leads to paradoxical results.
[My comment #2: Here is the basic conflict between epistemological vs ontological views of quantum reality.]
In this review we shall adhere to the view that ρ is
only a mathematical expression which encodes information
about the potential results of our experimental interventions.
The latter are commonly called
‘‘measurements’’—an unfortunate terminology, which
gives the impression that there exists in the real world
some unknown property that we are measuring. Even
the very existence of particles depends on the context of
our experiments. In a classic article, Mott (1929) wrote
‘‘Until the final interpretation is made, no mention
should be made of the a ray being a particle at all.’’
Drell (1978a, 1978b) provocatively asked ‘‘When is a
particle?’’ In particular, observers whose world lines are
accelerated record different numbers of particles, as will
be explained in Sec. V.D (Unruh, 1976; Wald, 1994).
1The theory of relativity did not cause as much misunderstanding
and controversy as quantum theory, because people
were careful to avoid using the same nomenclature as in nonrelativistic
physics. For example, elementary textbooks on
relativity theory distinguish ‘‘rest mass’’ from ‘‘relativistic
mass’’ (hard-core relativists call them simply ‘‘mass’’ and ‘‘energy’’).
2The ‘‘irreversible act of amplification’’ is part of quantum
folklore, but it is not essential to physics. Amplification is
needed solely to facilitate the work of the experimenter.
3Positive operators A are those having the property that
<psi|A|psi> >= 0 for any state |psi>. These operators are always Hermitian.
A. Peres and D. R. Terno, Quantum information and relativity theory, Rev. Mod. Phys. 76, 93 (2004)
On Sep 4, 2013, at 8:48 AM, JACK SARFATTI <adastra1@icloud.com> wrote:

Begin forwarded message:

From: JACK SARFATTI <jacksarfatti@icloud.com>
Subject: Quantum information and relativity theory
Date: September 4, 2013 8:33:48 AM PDT
To: nick herbert <quanta@mail.cruzio.com>

The interpretation of the late Asher Peres (http://en.wikipedia.org/wiki/Asher_Peres) is the antithesis of the late David Bohm's ontological interpretation (http://en.wikipedia.org/wiki/David_Bohm): Peres holds to a purely subjective, epistemological Bohrian interpretation of the quantum BIT potential Q.
He claims that Antony Valentini's signal non locality beyond orthodox quantum theory would violate the Second Law of Thermodynamics.
Quantum information and relativity theory
Asher Peres
Department of Physics, Technion–Israel Institute of Technology, 32000 Haifa, Israel
Daniel R. Terno
Perimeter Institute for Theoretical Physics, Waterloo, Ontario, Canada N2J 2W9
(Published 6 January 2004)
This article discusses the intimate relationship between quantum mechanics, information theory, and
relativity theory. Taken together these are the foundations of present-day theoretical physics, and
their interrelationship is an essential part of the theory. The acquisition of information from a
quantum system by an observer occurs at the interface of classical and quantum physics. The authors
review the essential tools needed to describe this interface, i.e., Kraus matrices and
positive-operator-valued measures. They then discuss how special relativity imposes severe
restrictions on the transfer of information between distant systems and the implications of the fact that
quantum entropy is not a Lorentz-covariant concept. This leads to a discussion of how it comes about
that Lorentz transformations of reduced density matrices for entangled systems may not be
completely positive maps. Quantum field theory is, of course, necessary for a consistent description of
interactions. Its structure implies a fundamental tradeoff between detector reliability and
localizability. Moreover, general relativity produces new and counterintuitive effects, particularly
when black holes (or, more generally, event horizons) are involved. In this more general context the
authors discuss how most of the current concepts in quantum information theory may require a reassessment.
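The abstract's "reduced density matrices for entangled systems" can be illustrated with a minimal sketch: form the reduced density matrix of one qubit of a Bell pair by partial trace and compute its von Neumann entropy (the helper names and the einsum index convention are mine, not the paper's):

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())  # 4x4 pure-state density matrix

# Partial trace over the second qubit: rho4[a, b, a', b'] with i = 2a + b
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('abcb->ac', rho4)  # reduced density matrix of qubit A

def von_neumann_entropy(r: np.ndarray) -> float:
    """S(rho) = -Tr(rho log2 rho), over nonzero eigenvalues."""
    ev = np.linalg.eigvalsh(r)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)) + 0.0)  # +0.0 normalizes -0.0

print(von_neumann_entropy(rho))    # 0.0: the global state is pure
print(von_neumann_entropy(rho_A))  # 1.0: the reduced state is maximally mixed
```

The nonzero entropy of the reduced state, despite a pure global state, is the quantity whose Lorentz-frame dependence the review goes on to discuss.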
I. Three Inseparable Theories
A. Relativity and information
B. Quantum mechanics and information
C. Relativity and quantum theory
D. The meaning of probability
E. The role of topology
F. The essence of quantum information
II. The Acquisition of Information
A. The ambivalent quantum observer
B. The measuring process
C. Decoherence
D. Kraus matrices and positive-operator-valued measures (POVM's)
E. The no-communication theorem
III. The Relativistic Measuring Process
A. General properties
B. The role of relativity
C. Quantum nonlocality?
D. Classical analogies
IV. Quantum Entropy and Special Relativity
A. Reduced density matrices
B. Massive particles
C. Photons
D. Entanglement
E. Communication channels
V. The Role of Quantum Field Theory
A. General theorems
B. Particles and localization
C. Entanglement in quantum field theory
D. Accelerated detectors
VI. Beyond Special Relativity
A. Entanglement revisited
B. The thermodynamics of black holes
C. Open problems
Acknowledgments and Apologies
Appendix A: Relativistic State Transformations
Appendix B: Black-Hole Radiation
References
Quantum theory and relativity theory emerged at the
beginning of the twentieth century to give answers to
unexplained issues in physics: the blackbody spectrum,
the structure of atoms and nuclei, the electrodynamics of
moving bodies. Many years later, information theory
was developed by Claude Shannon (1948) for analyzing
the efficiency of communication methods. How do these
seemingly disparate disciplines relate to each other? In
this review, we shall show that they are inseparably related.
A. Relativity and information
Common presentations of relativity theory employ
fictitious observers who send and receive signals. These
‘‘observers’’ should not be thought of as human beings,
but rather as ordinary physical emitters and detectors.
Their role is to label and locate events in spacetime. The
speed of transmission of these signals is bounded by
c—the velocity of light—because information needs a
material carrier, and the latter must obey the laws of
physics. Information is physical (Landauer, 1991).
[My comment #1: Indeed, information is physical. Contrary to Peres, in Bohm's theory Q is also physical but not material (it is a "beable"); consequently one can have entanglement negentropy transfer without material propagation of a classical signal. I think Peres makes a fundamental error here.]
However, the mere existence of an upper bound on
the speed of propagation of physical effects does not do
justice to the fundamentally new concepts that were introduced
by Albert Einstein (one could as well imagine
communications limited by the speed of sound, or that
of the postal service). Einstein showed that simultaneity
had no absolute meaning, and that distant events might
have different time orderings when referred to observers
in relative motion. Relativistic kinematics is all about
information transfer between observers in relative motion.
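Einstein's point that distant events may have different time orderings for observers in relative motion can be sketched with a single Lorentz boost (units with c = 1; the events and the helper `boost_t` are my own illustrative choices):

```python
import math

def boost_t(t: float, x: float, beta: float) -> float:
    """Time coordinate after a boost with velocity beta = v/c (units c = 1)."""
    g = 1.0 / math.sqrt(1.0 - beta * beta)
    return g * (t - beta * x)

# Two spacelike-separated events: B occurs later than A in this frame,
# but too far away for a light signal to connect them (dx = 3 > dt = 1).
tA, xA = 0.0, 0.0
tB, xB = 1.0, 3.0

for beta in (0.0, 0.5):
    order = "A before B" if boost_t(tA, xA, beta) < boost_t(tB, xB, beta) else "B before A"
    print(f"beta = {beta}: {order}")
```

Because the separation is spacelike, the order flip is harmless: no signal, and hence no information, can pass between the two events.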
Classical information theory involves concepts such as
the rates of emission and detection of signals, and the
noise power spectrum. These variables have well defined
relativistic transformation properties, independent
of the actual physical implementation of the communication system.
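As one example of such a well-defined transformation property, the detected rate of a periodic signal from a receding emitter scales by the longitudinal relativistic Doppler factor (a minimal sketch under that standard assumption; the function name and numbers are mine):

```python
import math

def doppler_factor(beta: float) -> float:
    """Longitudinal relativistic Doppler factor for a source receding at beta = v/c."""
    return math.sqrt((1.0 - beta) / (1.0 + beta))

# An emitter sends 1000 pulses per second in its own rest frame.
rate_emitted = 1000.0
for beta in (0.0, 0.6):
    rate_received = doppler_factor(beta) * rate_emitted
    print(f"beta = {beta}: received rate = {rate_received:.1f} Hz")
```

At beta = 0.6 the received rate is exactly half the emitted rate, independent of how the pulses are physically realized.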