
Notes on my Star Gate book
  • Jack Sarfatti The equation in question is (in index-free short hand)

    DU/ds = dU/ds + {LC}UU

    This is a tensor equation, i.e. a geometric object; the choice of local coordinate patches in differential geometry is irrelevant.

    In this book we are interested in the heuristic physical meaning of the equations, not in the excess baggage of formalism that only obscures the essential physics and leads many physicists astray into purely mathematical dead ends, perhaps important to pure mathematics but not to physics. The Cornell philosophy of Hans Bethe, Ed Salpeter, Phil Morrison, Tommy Gold and Richard Feynman himself was to get the most physics with the least possible mathematics. This accords with Einstein's remark that any intelligent fool can make the subject more complicated than it needs to be. Indeed, that is the trend we see in theoretical physics today.

    DU/ds is what accelerometers measure locally on the test particle being observed.

    {LC}UU is what accelerometers measure locally on the detector observer of the test particle.

    dU/ds is the apparent or kinematical 4-acceleration of the test particle relative to the detector.

    The test particle and the detector are nearly coincident, i.e. their actual space-time separation must be small compared to the local radii of curvature of 4D spacetime for the equation not to break down. For example, at the Earth's surface that curvature radius is about 10^13 cm, so this is not a problem for local experiments.
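As a quick back-of-the-envelope check of that ~10^13 cm figure (a sketch; the tidal-radius estimate r_curv ~ sqrt(c^2 r^3 / GM) and the rounded constants are my assumptions):

```python
import math

G = 6.674e-8        # cm^3 g^-1 s^-2 (CGS)
c = 2.998e10        # cm/s
M_earth = 5.972e27  # g
R_earth = 6.371e8   # cm

# Curvature (tidal) radius near a mass: the Riemann curvature is of order
# GM/(c^2 r^3) ~ 1/r_curv^2, so r_curv ~ sqrt(c^2 r^3 / (G M)).
r_curv = math.sqrt(c**2 * R_earth**3 / (G * M_earth))
print(f"curvature radius at Earth's surface ~ {r_curv:.1e} cm")
```

which reproduces the quoted order of magnitude of 10^13 cm.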

    The Levi-Civita connection {LC} in Einstein's GR physically describes the fictitious inertial pseudo forces that appear to act on test particles. These inertial forces are caused by real forces acting on the local non-inertial frames (LNIFs) measuring the motion of the test particle.

    Newton's 2nd Law, for constant rest mass m:

    F = mDV/ds = real 4-force

    V = 4-velocity of test particle

    ds = proper time differential

    DV/ds = dV/ds + {LC}VV = proper 4-acceleration. It is a GCT group tensor.

    In a local inertial frame (LIF), {LC} = 0

    This is Einstein's equivalence principle

    In a local non-inertial frame (LNIF)

    {LC} =/= 0

    - m{LC}VV = all the inertial fictitious pseudo forces that seem to act on the test particle from the POV of the properly accelerating LNIF detector observer, but don't.

    Note the g00 in the denominator of the connection components, which goes to zero at horizons (so the pseudo-force terms diverge there).
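As a concrete hedged illustration, take a static (hovering LNIF) observer in the Schwarzschild metric, where -g00 = 1 - rs/r: the accelerometer reading carries a 1/sqrt(-g00) factor that blows up at the horizon. (A sketch using an illustrative solar-mass example, not a statement about any particular source.)

```python
import math

G, c = 6.674e-11, 2.998e8   # SI units
M = 1.989e30                # kg; one solar mass, purely illustrative
rs = 2 * G * M / c**2       # Schwarzschild radius

def static_proper_acceleration(r):
    """Accelerometer reading of a hovering (LNIF) observer at radius r.
    The sqrt(-g00) = sqrt(1 - rs/r) denominator vanishes at the horizon,
    so the reading diverges there."""
    g00_mag = 1.0 - rs / r
    return (G * M / r**2) / math.sqrt(g00_mag)

for r in (10 * rs, 2 * rs, 1.001 * rs):
    print(f"r = {r/rs:6.3f} rs  ->  a = {static_proper_acceleration(r):.3e} m/s^2")
```

The reading grows without bound as r approaches rs, which is the g00-in-the-denominator point above.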

    Note also that the quantum Unruh effect, in which vacuum zero-point virtual photons are transformed into real black body radiation photons, is proportional to the local tensor proper acceleration DU/ds of the detector accelerometer.
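A hedged numeric sketch of that proportionality, using the standard Unruh formula T = hbar a / (2 pi c kB); the 1 g input is just an illustrative value:

```python
import math

hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
kB = 1.381e-23     # J/K

def unruh_temperature(a):
    """Unruh temperature seen by a detector whose accelerometer reads
    proper acceleration a: T = hbar * a / (2 * pi * c * kB)."""
    return hbar * a / (2 * math.pi * c * kB)

# Even at 1 g the effect is tiny: ~4e-20 K.
print(unruh_temperature(9.81))
```

This linearity in a is why the effect tracks the locally measured proper acceleration rather than any coordinate acceleration.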

Advanced Intelligence Agency
(a private sector contractor)

Memorandum for the Record

Subject: Technological Surprise: Quantum Cryptography et al. - Threat Assessment Analysis

To: The Usual Suspects ;-)

From: A "Hippy" who Saved Physics ;-)

Cut to the chase, bottom line for intelligence analysts - excerpts

In regards to the computational power of computers with access to time machines, it
is straightforward to see that for any computation in which an efficient method for checking
the solution exists, a source of random bits sent through a checking algorithm, which then
acts as a c-not on a time machine qubit, can arrive at the correct solution immediately
if the informational content of the solution is sufficiently less than the work function of
the time machine. Since time machine bits may also act as perfectly random sources,
the information may seem to be created from nothing, but one may also think of such
`calculations' as becoming an extremely lucky guesser, due to the re-weighting of histories
by the time machine. ... Conventional cryptography would pose little obstacle to such a
computer. ... public key certification by computer would be almost useless.

Hawking famously has cited the lack of future tourists as good evidence against time machines.

UFO investigators dispute this, regarding Hawking's statement as part of a disinformation coverup, while those same investigators are debunked as kooks, cranks, crackpots, and paranoid conspiracy theorists by the opposition, sometimes called "MAJIC," whose front org is the Committee to Investigate Claims of the Paranormal. Rising above this din of factional wars among rivals inside the intelligence agencies of the major powers, and their agents of influence, cut-outs, and useful and useless idiots ;-), it is prudent to assume that whoever is really behind "UFOs" has such literally advanced super-technology at their disposal.

Thermodynamics of Time Machines
Michael Devin
In this note, a brief review of the consistent state approach to systems containing closed timelike
curves[CTCs] or similar devices is given, and applied to the well known thermodynamic problem of
Maxwell's demon. The 'third party paradox' for acausal systems is defined and applied to closed
timelike curve censorship and black hole evaporation. Some traditional arguments for chronology
protection are re-examined ...

Since the original version of this paper in 2001[1], there has been a renewed interest in
time machine calculations, springing from a duality between acausal systems constrained
by a quantum analog of the Novikov[2] consistency principle, and the formalism of post-selected
ensembles developed by Aharonov and many others[3-5]. Interest has also grown in
the applications of such systems to computation theory, following the footsteps of Deutsch[6],
who employed a different superselection criterion leading to different physics. ...

In the past ten years the body of work of post-selected ensembles has grown to become the
standard concerning time machines[3],....  Some material has been added to reflect the more recent developments in black hole physics and post-selected systems.

Suppose we take the time looping bit and entangle it with the position of a particle in
a box. The box is divided by a partition into two unequal sections. In the case of a classic
Szilard engine we measure which side of the box the particle is on and then adiabatically
expand the section with the particle to the size of the original box, performing work. By
Landauer's principle we generate one bit of entropy resetting the measurement apparatus,
which is exactly equal to the maximum work we can extract from the engine by placing the
partition in the center. When the particle is entangled with the time machine qubit, the
probability distribution is no longer uniform and net work can be extracted. ...
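The accounting can be sketched like this (a toy model using the standard Szilard/Landauer bookkeeping, where the maximum net work for a "which side" bit of bias p is kB*T*ln2*(1 - H(p)); this is my illustration of the argument, not Devin's exact formula):

```python
import math

kB, T = 1.381e-23, 300.0   # J/K, and an assumed room temperature

def shannon_bits(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def net_work_per_cycle(p):
    """Extractable work kB*T*ln2 minus the Landauer reset cost
    kB*T*ln2*H(p) for a Szilard engine whose measurement bit has bias p."""
    return kB * T * math.log(2) * (1.0 - shannon_bits(p))

print(net_work_per_cycle(0.5))   # unbiased bit: reset cost cancels the work exactly
print(net_work_per_cycle(0.99))  # skewed (entangled-with-time-machine) bit: net work > 0
```

For the uniform distribution the cycle breaks even, exactly as the text says; any skewing of the probabilities leaves net extractable work.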

As the noise drops to zero for a time machine, the work extractable per bit diverges. A perfectly reliable time machine can therefore violate the second law of thermodynamics by an arbitrarily large amount, but a noisy one has an effective limit. ...

In some of the cases considered in general relativity, with back reactions ignored, we find that CTCs and other time machines act like systems that do not need to borrow from anywhere to have energy. The number of accessible states grows exponentially with energy, and with all microstates equally probable, we naturally arrive at a negative temperature. A similar argument may be used for particle number to give negative chemical potential to the occupation numbers of each field mode. If the number of particles
or energy is not somehow bounded then a divergence can result. This is especially the case when we have the internal entropy naturally maximized by eliminating the interaction of the time machine with its environment due to ignoring back reaction.

The appearance of these divergences is often cited as support for Hawking's chronology protection conjecture[7, 8]. It is assumed that the  fluctuations must destroy the time machine before anything improper can occur. However, if this is the case, then it provides the very mechanism for making time machines well behaved entities with positive temperature. The higher the energy or occupation number of a particular field mode in a time machine, the more it is suppressed by the re-weighting of histories by the amplitude for such a high energy state to scatter onto the same state. In post-selected language the sample of high energy states acceptable to post selection is small because high energy modes tend to decay, and high particle number states tend to dissipate, with exponential probability. ...

The system is capable of extracting arbitrarily large amounts of work from an entangled system. In general we can imagine that systems with very large values of time machine negentropy will behave quite strangely, as the probability of exotic events could be exponentially amplified. ...

In regards to the computational power of computers with access to time machines, it
is straightforward to see that for any computation in which an efficient method for checking
the solution exists, a source of random bits sent through a checking algorithm, which then
acts as a c-not on a time machine qubit, can arrive at the correct solution immediately
if the informational content of the solution is sufficiently less than the work function of
the time machine. Since time machine bits may also act as perfectly random sources,
the information may seem to be created from nothing, but one may also think of such
`calculations' as becoming an extremely lucky guesser, due to the re-weighting of histories
by the time machine.

Essentially time machines are entropy pumps, similar to a classical heat engine. Instead of
transporting heat, they transport entropy, pushing a system of particles or a random message
in message space into a lower entropy state, but increasing the entropy of the environment
in some way not yet understood. The computations, like those of a true quantum computer,
are essentially analog computations, in this case effectively physically reversing classical
hard problems in time. Conventional cryptography would pose little obstacle to such a
computer. Instead one would have to devise ambiguous codes, which could be decoded into
a number of intelligible but incorrect messages, leaving the computer with a problem of
interpreting which was significant, a task made hard for entirely different reasons. A `brute
force' entropy attack assisted by a time machine would then more likely generate one of
the red-herring messages. Other unusual protocols might be used to increase security, but
public key certification by computer would be almost useless. ...

Hawking famously has cited the lack of future tourists as good evidence against time
machines. Although no one disputes this, it is an interesting case to consider for the would be
time theorist. One possible explanation for the lack of such `tourist' fields on the microscopic
scale could be something like the quantum zeno effect. The atom is locked in its state and
the cat never dies because we generally have good records of whether or not time travelers
have appeared. For such a traveler, our present would be his past, and such records in that
future of a lack of visitors from the future may act as a similar lock on using tunneling or
entanglement type phenomena as time machines for that purpose. Different possible tourists
may destructively interfere with each other, just as highly non-classical paths for macroscopic
systems do in path integral theory. Consider that the weight of a particular time tourist
scenario is determined by the amplitude for the tourist's state to scatter onto itself at his
later departure. For any large number of bits of the tourist, as those bits decohere with the
environment that weight should decrease exponentially.

A physical example of how one might look for such `tourists' could be realized by exploring
the third party paradox where the receiving channel is measured well before the time machine
exists. The spin measurements of that channel should be random, but if tourism is allowed,
then they may contain a message. If we consider ensembles that may or may not contain a
time machine, it is helpful to note that the weight factor for a particular history is an inner
product of two unit vectors, as well as a noise coefficient. Both of these factors are less than
one, and a sampling from ensembles where the existence of a later time machine depends
on the reception of a message that enables its construction will actually be suppressed
relative to other random possible messages. A statistical `weak censorship' counteracts the
spontaneous emergence of time machines, without absolutely forbidding them. It might
make for an interesting experiment to construct a post-selection equivalent of the tourist
problem, in which selection criteria followed more complex protocols.

In order for tourists to be sufficiently rare, the chronology protection mechanism need
not be absolute. Instead it need only be exponentially difficult for tourists to visit some
location in order for the expectation value of tourists to be finite, and thus hopefully small.

In conclusion, time machines, if they exist at all, must possess fundamental limits on their
error rate and waste heat, irrespective of the exact method of construction. These limits can
be thought of as analogous to the old Carnot efficiency of classical heat engines independent
of the specific construction of the engine. Most of the standard paradoxes associated with
time travel are mitigated by considering systems operating within these limits. The study
of acausal models still has much room for development. In the case of renormalization,
badly behaved bare models may form condensates, shifting the vacuum and creating a more
well behaved dressed model. Similarly, acausal bare models may lead to better behaved
approximately causal models when various corrections are accounted for. In cosmology and
landscape theory, some physicists have sought a model for the emergence of the Lorentzian
signature of the metric, a spontaneous symmetry breaking that creates time itself. If such
ambitions are ever to succeed they surely have to entertain causality as potentially only
approximate in the resulting cosmos.

Technical Appendix for Physicists (may be skipped by non-physicists)

To new students of quantum mechanics, the Bell inequalities, delayed choice, and quantum eraser experiments have seemed to almost violate causality. The fact that they cannot is a crucial consequence of the unitary nature of quantum mechanics. One of the most troubling aspects of the information loss paradox is the apparent loss of unitarity. Not all non-unitary maps are created equal, and trace-over models of lossy processes do generally preserve causality. Such models seemed adequate until Hawking radiation came along. The eventual disintegration of the hole broke the analogy with environmental decoherence, opening up the possibility of `bad' nonunitary processes in some imagined acausal lossy theory of quantum gravity. The aim of the remaining sections is to explore implications of this possibility.

A quantum eraser is a system that exhibits the extreme nature of the delayed choice experiment by measuring and then coherently erasing information about two different possible paths for a system. By the no-copy theorem, a qubit that is created by measuring another qubit can only be coherently erased by combining it with the original again. Coherent erasure makes the erased bit `unrecoverable in principle' and thus restores interference effects relating to any superposition of the original bit before the creation of the measurement bit. Two concerns in the information paradox were, first, that an evaporated black hole might constitute an `in principle unrecoverable' process, and second, that proposed complementarity scenarios would violate the no-copy theorem, providing another way to erase measurements.

Both cases lead to breakdown of unitarity and subsequently causality. Complementarity has
to ensure the second scenario of a bit meeting its extra twin can not occur. This appears to
be the primary motivation for the recent 'firewall' models of black hole evaporation.
The inherent non-unitarity of time machines can easily be seen by observing the effect
that this probability skewing has on entangled particle pairs. Consider instead of a particle
in a box, the classic spin entangled pairs of particles. If we should choose one of the entangled particles to be sent an arbitrary distance away, then use the other as a control bit in our
time machine circuit, then the state of the pair becomes in general a mixed state. If we
designate a second communication channel to act as the control of another c-not gate on
the time machine bit, then we may measure a real correlation between that channel and the
spin measurements of the distant spin partner. A single time machine as a third party in
the mutual future of two observers can apparently effect nonlocal communication between
them. Thus the non-unitary effects of a time machine may be felt arbitrarily far away, even
in the past light cone of the arrival of |out>.

Consider the equivalent case for a post-selected system where a single bit is selected
to be in a state |0> at two different times. In between these two it is acted on by two
controlled not operations, one of an EPR pair, and a second being the desired input bit. The
post-selected statistics of the distant EPR partner will now reflect those of the chosen input
bit. Any time a superselection operator acts on an entangled system of particles to enforce
a particular final state on part of the system, the potential for acausal communications
between two third parties also appears. This `third party paradox' is an important element
in understanding the interaction between time machines and nonunitary dynamics.
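The post-selected circuit just described can be sketched numerically (a hedged toy simulation with plain state vectors; the qubit ordering and gate sequence are my reading of the text, not the paper's own code):

```python
import numpy as np

# Qubit order (E1, E2, T); amplitude index = 4*e1 + 2*e2 + t.
# E1, E2 form the EPR pair; T is the bit post-selected in |0> at both times.
def third_party_channel(message_bit):
    """Sketch of the third-party paradox channel: a CNOT controlled by E1,
    then a CNOT controlled by the sender's classical message bit, act on T;
    T is then post-selected back to |0>. The distant EPR partner E2's
    post-selected statistics end up carrying the message."""
    psi = np.zeros(8)
    psi[0b000] = psi[0b110] = 1 / np.sqrt(2)      # (|00> + |11>)_{E1 E2} |0>_T

    # CNOT: E1 controls T (flip t in the e1 = 1 branch).
    for e2 in (0, 1):
        psi[[4 + 2 * e2, 4 + 2 * e2 + 1]] = psi[[4 + 2 * e2 + 1, 4 + 2 * e2]]

    # CNOT: classical message bit controls T (flip t everywhere if m = 1).
    if message_bit:
        psi = psi.reshape(4, 2)[:, ::-1].reshape(8)

    # Post-select T = |0>: the self-consistent time-loop condition.
    psi = psi.copy()
    psi[1::2] = 0.0
    psi /= np.linalg.norm(psi)

    # Probability that the distant partner E2 reads 1.
    return float(sum(psi[4 * e1 + 2 + t] ** 2 for e1 in (0, 1) for t in (0, 1)))

print(third_party_channel(0), third_party_channel(1))   # 0.0 1.0
```

Post-selection forces E1 to agree with the message bit, so E2, arbitrarily far away, reads the message with certainty: the acausal third-party signaling the text describes.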
So far it seems that time machines skew the statistics of ensembles to create effective
nonlinear dynamics. In turn most nonlinear quantum mechanics appears to be exploitable
to create time machines. Explicitly, one time machine can be used to create another, or any
number of others, through the third party paradox. A useful exercise here is to consider
the work done by these `child' machines and how it compares to the work extractable by
the parent alone. Each child `inherits' the noise of its parent, and shares to some degree
the back reaction of its siblings. If the spawning process introduces no additional noise,
then we can shift the arrival time of |out> to an earlier time and find an equivalent system
containing only the parent time loop. This is possible since the duration of the loop is not a
factor in the work function. The maximum work performed by the entire ensemble, minus
any entropy cost for erasing extra measurement bits, should still be less than or equal to
the original work function. ...

Early in the `black hole wars' Hawking tentatively proposed that a theory of density matrices might be considered as a generalization of quantum mechanics capable of handling the apparent lack of unitary evolution in gravitational collapse[9]. This approach was heavily criticized for possible violations of locality or energy conservation[10]. Energy conservation can be maintained, but the trade-off between causality and non-unitarity remains. Any system that can act on a qubit to map orthogonal to non-orthogonal states can be added
to a quantum eraser double interferometer to break the net balance between opposing interference patterns that locally cancel for distant entangled states. It would seem though that if such transitions were possible, then vacuum fluctuations would cause them to occur to any given bit eventually, and thus nonlocal interactions would be everywhere. ...

Hawking and others have contended that all systems containing time machines should
possess entropy in accord with the number of internal states `sent' to the past[11] ...
This scenario is trivially modeled in a post-selection experiment as simply
three measurements of a random bit, in which the first and last measurements are the same
result. ...
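That trivial model is easy to simulate (my sketch; the 10,000-run sample size is arbitrary):

```python
import random

random.seed(0)

def postselected_triple():
    """Three measurements of a random bit, keeping (post-selecting) only runs
    in which the first and last measurements agree -- the trivial model of a
    bit 'sent' around a time loop."""
    while True:
        run = [random.randint(0, 1) for _ in range(3)]
        if run[0] == run[2]:
            return run

runs = [postselected_triple() for _ in range(10000)]
# The looping bit is still a fair coin: ~one bit of entropy per loop,
# in accord with the entropy-counting argument quoted above.
p1 = sum(r[0] for r in runs) / len(runs)
print(p1)   # ~0.5
```

The consistency condition constrains the endpoints to agree without biasing the bit's value, so the loop carries one full bit of entropy.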

... the importance of the relative phase information of out states that is crucial to preventing entangled particle pairs from allowing non-local communication. The classic double interferometer fails to detect any local interference effects when observing only one of the photons. The other photon may be in either of two states, and that bit contains either the path information of its cousin, eliminating the interference, or the two outcomes
contribute to two separate interference patterns exactly out of phase, such that the trace over those gives no local interference.

(not necessarily using Glauber coherent non-orthogonal entangled states - JS)

Some black holes are thought to contain ready-made time machines in the form of closed
timelike curves. The troubling behavior of the Kerr metric near the singularity was assumed
to be made safe by being behind the horizon, an early and important result supporting
the cosmic censorship hypothesis. However, due to the third party effect, it would appear
that not only does the horizon fail to prevent information from leaving the CTC region,
it leads to non-local communication between points far from the hole. These secondary time
machines can then effectively 'mine' the black hole for negentropy. Some fraction of the
entropy associated with the irreducible mass of the hole should then provide a bound on
this entropy, and therefore some constraints on k, for the CTC region. For the purposes of
chronology protection, horizons alone are ineffective 'transparent censors'. ...

One proposal for resolving the black hole information paradox is to add a boundary
condition at the singularity[12]. Some critics argue this violates causality[13]. The argument
against it can be illustrated with the following paradox. Under normal circumstances,
information, such as a volume of Shakespeare, falls into a black hole, which then evaporates
via Hawking radiation. If a boundary condition at the singularity is prescribed, then these
fields must be canceled by other contributions as they approach the singularity. These other
contributions are the in-falling components of the pairs of particles appearing from the
vacuum, the outgoing of which constitute the Hawking radiation. Since each pair is strongly
entangled, and the in-falling radiation is forced to match up with the in-falling Shakespeare
volume via the superselection of global field configurations to fit the boundary condition,
then the outgoing radiation must be simply the scrambled volume of Shakespeare. Another
way of considering it is to imagine the field modes reflect off of the singularity, becoming
negative energy time reversed modes. They then travel out of the hole and reflect off the
potential outside the main black hole, becoming again positive energy, forward modes.
The boundary condition acts as a selector of global field configurations, much like the
post-selection operator used to model acausal ensembles. The proposed mechanism `similar
to state teleportation' is in fact the third party paradox communication channel arising
in both time machine and post selected systems. We may employ the same methods of
superselection to generate a time machine via the third party problem. The picture is
complicated slightly though by the presence of the incoming part of the Hawking pairs.
This incoming part may serve as the required noise that bounds the total work extractable
by all third party time machines. If no time machines are spawned this way, the work is
expended adjusting the outgoing radiation into the form of Shakespeare. One flaw in this method of teleportation is also that there is nothing to require that the teleported states leave the black hole before the original states enter it. ...

For discussion
"The researchers conducted a mirror experiment to show that by changing the position of the mirror in a vacuum, virtual particles can be transformed into real photons that can be experimentally observed. In a vacuum, there is energy and noise, the existence of which follows the uncertainty principle in quantum mechanics."


I use the inverse of the above argument in my argument that the dark energy accelerating the universe is cosmologically redshifted advanced Wheeler-Feynman real-photon thermal Hawking-Unruh radiation back from our future cosmic event horizon (Lp thick) of energy density hc/Lp^4, which appears as virtual photons with an energy density hc/(Lp^2 A), roughly 10^-122 times smaller, in our detectors of Type 1a supernovae. Here A is the area-entropy of our future light cone's intersection with our observer-dependent de Sitter future horizon (this also applies to Type 1a supernovae in the past light cones of our telescopes).
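As a hedged numeric check of the claimed ~10^-122 suppression factor (taking A ~ R_H^2 up to factors of order 4*pi, with an assumed de Sitter future-horizon radius of ~1.6 x 10^26 m; all inputs are rough illustrative values):

```python
Lp = 1.616e-35    # Planck length, m
R_H = 1.6e26      # assumed de Sitter future-horizon radius, m (~16 Gly)

# hc/(Lp^2 A) divided by hc/Lp^4 is Lp^2/A; with A ~ R_H^2 this is (Lp/R_H)^2.
ratio = (Lp / R_H) ** 2
print(f"suppression factor ~ {ratio:.1e}")
```

which lands on the order of 10^-122, matching the figure in the text to within the stated geometric factors.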


On CCC-predicted concentric low-variance circles in the CMB sky
V. G. Gurzadyan1 and R. Penrose2
1 Alikhanian National Laboratory and Yerevan State University, Yerevan, Armenia
2 Mathematical Institute, 24-29 St Giles, Oxford OX1 3LB, U.K
Abstract. A new analysis of the CMB, using WMAP data, supports earlier indications of non-Gaussian features of concentric circles of low temperature variance. Conformal cyclic cosmology (CCC) predicts such features from supermassive black-hole encounters in an aeon preceding our Big Bang. The significance of individual low-variance circles in the true data has been disputed; yet a recent independent analysis has confirmed CCC’s expectation that CMB circles have a non-Gaussian temperature distribution. Here we
examine concentric sets of low-variance circular rings in the WMAP data, finding a highly non-isotropic distribution. A new “sky-twist” procedure, directly analysing WMAP data, without appeal to simulations, shows that the prevalence of these concentric sets depends on the rings being circular, rather than even slightly elliptical, numbers dropping off dramatically with increasing ellipticity. This is consistent with CCC’s expectations; so also is the crucial fact that whereas some of the rings’ radii are found to reach around
15◦, none exceed 20◦. The non-isotropic distribution of the concentric sets may be linked to previously known anomalous and non-Gaussian CMB features.


MIT DARK MATTER DISCOVERY? http://t.co/0Cuupxo5pr via @regvulture If real WIMPs exist, then I am wrong that Dark Matter is a virtual fermion-antifermion pair effect inside the quantum vacuum. They generate attractive gravity. Virtual bosons generate repulsive anti-gravity. That this is so comes from the equivalence principle of Einstein and the spin-statistics connection of quantum field theory.
MIT boffin teases space-station probe's DARK MATTER DISCOVERY • The Register
MIT scientist and Nobel Laureate in Physics Samuel Ting told reporters at the American Association for the Advancement of Science (AAAS) that the first results from the costly Alpha Magnetic Spectrometer (AMS) are ready.


Comment on “Trouble with the Lorentz Law of Force: Incompatibility with Special Relativity and Momentum Conservation”
Daniel A. T. Vanzella
Published 20 February 2013 (2 pages)

Comment on “Trouble with the Lorentz Law of Force: Incompatibility with Special Relativity and Momentum Conservation”
Stephen M. Barnett
Published 20 February 2013 (1 page)

Comment on “Trouble with the Lorentz Law of Force: Incompatibility with Special Relativity and Momentum Conservation”
Pablo L. Saldanha
Published 20 February 2013 (2 pages)

Comment on “Trouble with the Lorentz Law of Force: Incompatibility with Special Relativity and Momentum Conservation”
Mohammad Khorrami
Published 20 February 2013 (1 page)

Mansuripur Replies:
Masud Mansuripur
Published 20 February 2013 (1 page)
 Phys. Rev. Lett. 110, 080503 (2013) [5 pages]

Entanglement and Particle Identity: A Unifying Approach


A. P. Balachandran1,2,*, T. R. Govindarajan1,3,†, Amilcar R. de Queiroz4,‡, and A. F. Reyes-Lega5,§ 1Institute of Mathematical Sciences, CIT Campus, Taramani, Chennai 600113, India
2Physics Department, Syracuse University, Syracuse, New York 13244-1130, USA
3Chennai Mathematical Institute, H1, SIPCOT IT Park, Kelambakkam, Siruseri 603103, India
4Instituto de Fisica, Universidade de Brasilia, Caixa Postal 04455, 70919-970 Brasilia, Distrito Federal, Brazil
5Departamento de Física, Universidad de los Andes, Apartado Aéreo 4976 Bogotá, Distrito Capital, Colombia

Received 22 June 2012; revised 8 November 2012; published 22 February 2013

It has been known for some years that entanglement entropy obtained from partial trace does not provide the correct entanglement measure when applied to systems of identical particles. Several criteria have been proposed that have the drawback of being different according to whether one is dealing with fermions, bosons, or distinguishable particles. In this Letter, we give a precise and mathematically natural answer to this problem. Our approach is based on the use of the more general idea of the restriction of states to subalgebras. It leads to a novel approach to entanglement, which is suitable to be used in general quantum systems and especially in systems of identical particles. This settles some recent controversy regarding entanglement for identical particles. The prospects for applications of our criteria are wide ranging, from spin chains in condensed matter to entropy of black holes.

© 2013 American Physical Society

03.67.Mn, 02.30.Tb, 03.65.Ud, 89.70.Cf




 Systems of identical particles.—In the case of identical
particles, the Hilbert space of the system is no longer of the
tensor product form. Therefore, the treatment of subsystems
using partial trace becomes problematic. In contrast,
in our approach, all that is needed to describe a subsystem
is the specification of a subalgebra that corresponds to the
subsystem. Then, the restriction of the original state to the
subalgebra provides a physically motivated generalization
of the concept of partial trace, the latter not being sensible
anymore. Applying the GNS construction to the restricted
state, we can study the entropy emerging from the restriction
and use it as a generalized measure of entanglement.

                      Phys. Rev. Lett. 110, 080501 (2013) [4 pages]

Fundamental Bound on the Reliability of Quantum Information Transmission


Naresh Sharma* and Naqueeb Ahmad Warsi† Tata Institute of Fundamental Research (TIFR), Mumbai 400005, India

Received 17 August 2012; published 20 February 2013

Information theory tells us that if the rate of sending information across a noisy channel were above the capacity of that channel, then the transmission would necessarily be unreliable. For classical information sent over classical or quantum channels, one could, under certain conditions, make a stronger statement that the reliability of the transmission shall decay exponentially to zero with the number of channel uses, and the proof of this statement typically relies on a certain fundamental bound on the reliability of the transmission. Such a statement or the bound has never been given for sending quantum information. We give this bound and then use it to give the first example where the reliability of sending quantum information at rates above the capacity decays exponentially to zero. We also show that our framework can be used for proving generalized bounds on the reliability.

© 2013 American Physical Society



On Feb 22, 2013, at 10:39 AM, JACK SARFATTI <adastra1@me.com> wrote:

O Brave New World ;-)
We argue that generic nonrelativistic quantum field theories with a holographic description are dual to Hořava gravity. We construct explicit examples of this duality embedded in string theory by starting with relativistic dual pairs and taking a nonrelativistic scaling limit.



BBC News - Alpha Magnetic Spectrometer to release first results http://t.co/9BrasObB
'Space LHC' to release first results
The scientist leading the Alpha Magnetic Spectrometer, one of the most expensive experiments ever put into space, says the project is ready to come forward with its first results.
Jack Sarfatti "The scientist leading one of the most expensive experiments ever put into space says the project is ready to come forward with its first results.

Nobel Laureate Sam Ting said the scholarly paper to be published in a few weeks would concern dark matter.

This is the unseen material whose gravity holds galaxies together.

Researchers do not know what form this mysterious cosmic component takes, but one theory points to it being some very weakly interacting massive particle (or Wimp for short).

Although telescopes cannot detect the Wimp, there are high hopes that AMS can confirm its existence and describe some of its properties from indirect measures."


If my idea is correct, there will be no evidence for real dark matter particles whizzing through space from this device.

Dark matter is a virtual particle effect inside the quantum vacuum. It is the phase where the density of virtual fermion-antifermion pairs outweighs the density of virtual bosons. Dark energy is the opposite.

w = -1 for both in 3D + 1.

The physics is elementary:

quantum statistics (permutation symmetry) in 3D

the equivalence principle

local Lorentz invariance
Apple's killer app for the iPhone? Photographing and videoing the future? P. K. Dick rises from his grave. ;-)
  • Jack Sarfatti Begin forwarded message:

    From: JACK SARFATTI <sarfatti@pacbell.net>
    Subject: [ExoticPhysics] Spacelike (FTL) Entanglement Signals with Trapped Ions? "magic wand" Apple's killer app. ; -)
    Date: February 11, 2013 8:22:02 PM PST
    To: Exotic Physics <exoticphysics@mail.softcafe.net>
    Reply-To: Jack Sarfatti's Workshop in Advanced Physics <exoticphysics@mail.softcafe.net>

    Indeed, unless I am mistaken, this ultimately solid-state system can be packaged into an Apple iPhone to take photographs of the future seen in the past.

    "6.5. Quantum Teleportation Mechanical States
    Analogous to continuous-variable teleportation of optical states [220], one can teleport the quantum state of one mechanical oscillator to the other, if two entangled squeezed
    beams are used to drive them, each of their positions is measured, and the results fed back to the other one (as shown in Fig. 18).


    Imagine we can do Fig 18 with phonons rather than photons in some long crystal rod.

    Each mechanical oscillator A & B is a trapped ion with internal qubits 1,0 eigenvalues at the two ends of the crystal rod ("magick wand" ;-))

    The coherent phonon Glauber states are z & z' for the center of mass motions of the ions.

    The initial state is

    |A,B>i = (1/2)^1/2[|1>A|z>A + |0>A|z'>A + |1>B|z>B + |0>B|z'>B]

    after the entanglement swapping via teleportation of the Glauber coherent phonon states the prepared final state is

    |A,B>f = (1/2)^1/2[|1>A|z>B + |0>A|z'>B + |1>B|z>A + |0>B|z'>A]

    Use the Born rule in density matrix trace formalism to get e.g.

    P(1)A = (1/2)(1 + | B<z|z'>B |^2)

    This violates the parameter independence no-signal arguments of orthodox quantum theory because the Glauber coherent states are macroscopically distinguishable and non-orthogonal.

    The Born probability rule breaks down in Antony Valentini's sense for Glauber states when they are entangled with other states.

    P(1)A + P(0)A = 1 + | B<z|z'>B |^2
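The arithmetic above can be checked numerically. A minimal sketch, assuming the standard Glauber coherent-state overlap |<z|z'>|^2 = exp(-|z - z'|^2); the sample amplitudes z, z' are arbitrary illustrations, and the probability formulas are taken as written in the text:

```python
import numpy as np

# Standard Glauber coherent-state overlap: |<z|z'>|^2 = exp(-|z - z'|^2)
def overlap_sq(z, zp):
    return np.exp(-abs(z - zp) ** 2)

# Arbitrary sample amplitudes (hypothetical, for illustration only)
z, zp = 1.0 + 0.5j, 0.8 + 0.4j
o2 = overlap_sq(z, zp)

# Probabilities as written above for the swapped state |A,B>f
P1_A = 0.5 * (1 + o2)
P0_A = 0.5 * (1 + o2)

# Their sum is 1 + |<z|z'>|^2 > 1 whenever the states overlap,
# which is exactly the claimed breakdown of the Born rule
print(P1_A + P0_A)
```

For distinguishable-but-overlapping z, z' the sum exceeds 1 by exactly the overlap term, which is the whole point at issue.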

    Not only can A & B be spacelike separated, but we can operate the "magick wand" in Wheeler delayed choice mode in which a tiny video camera is at B which transmits images and audio from A's future back to A in the past.

    Indeed, this solid-state system can be packaged into an Apple iPhone to take photographs of the future.

    No doubt those ET's in their magnificent flying disks have such crystals?


On Feb 10, 2013, at 9:58 AM, Alexander Poltorak <apoltorak@generalpatent.com> wrote:

I am not denying that there is a tensor part in an LC connection – I use it in my papers – the only thing I am saying is that, to extract it, you need the second connection.

Right, no problem there mathematically. Physically it means adding new tensor fields like torsion & non-metricity.
Your previous assertion that there is a unique decomposition of LC into tensor and non-tensor parts is incorrect. Every time you subtract another connection (affine or LC) from your first LC connection, you get a tensor of affine deformation. Since you can define an infinite number of different connections on the manifold, there is an infinite number of ways to decompose your LC connection into a tensor and a non-tensor, as the affine deformation tensor will be different depending on the second connection.
However, what I think you are trying to say is that there is one way to extract a tensor out of the LC connection which contains all information about the geometry but is a true tensor. If this is what you are saying, that is certainly correct. There is a very simple way to do it: just subtract from your first LC connection another affine connection with zero curvature and torsion. What you will get is a tensor of affine deformation that contains all information about the geometry defined by your original LC connection. Essentially, what you are doing through this procedure is stripping the information about the coordinate system from your LC connection and leaving only the information about the geometry embedded in the tensor of affine deformation (which is also the tensor of nonmetricity for the affine connection with respect to the metric associated with your LC connection). This gives you the unique tensor part of the LC connection that you are seeking. But why reinvent the wheel and call it a "tensor of metricity" when everyone in the world calls it the tensor of nonmetricity or the tensor of affine deformation? You will only confuse people by inventing new terminology for well-known objects. So far, it's all pretty obvious.

OK, but then the question is: what is the explicit structure, the formula for, this allegedly unique affine connection A with zero curvature and zero torsion? Also, zero curvature and torsion with respect to what: A itself, or the original LC connection? It seems it must be the latter. The simplest connection with zero curvature and zero torsion relative to the LC connection is A = 0, but that is obviously not a good choice. Also, connections describe frames of reference as well as parallel transport in the appropriate fiber space of the physically relevant fiber bundle.
I think where we run into a philosophical argument is where you propose to discard the second connection. I understand that, mathematically speaking, you can do it. But what is the physical meaning of this? What is the meaning of your flat connection that you need to subtract from the LC connection to extract the tensor of affine deformation? You can take two approaches. First, a la Rosen, you can say that without a gravitational field the spacetime ought to be flat and gravity curves it -- hence we start with the flat connection (or Rosen's flat metric -- either way, it's bimetrism, because an LC connection always has a proper metric associated with it) and then introduce the second LC connection (and the second metric) describing the geometry changed by the presence of the gravitational field; the difference between the two will be your affine deformation tensor that describes the strength of the gravitational field. Or you can follow my approach, where I propose that the first affine connection describes the choice of the frame of reference (in an IFR the connection has no curvature or torsion, but in a NIFR the connection has curvature and, possibly, torsion). But this is a question of interpretation. The result will be the same: the use of the tensor of affine deformation or the tensor of nonmetricity (if there is at least one metric) as the strength of the gravitational field, as I've done in my papers. ... But I don't see what you are adding to what I described more than 30 years ago.

OK, but here there is a conceptual philosophical problem. Frames of reference are only descriptions of frame-invariant geometric objects. Curvature and torsion are frame-invariant geometric objects. So this appears to be a contradiction, since in your idea of "frame" the geometric objects are no longer frame invariant. In terms of Plato's Allegory of the Cave, what is real are the objects; what is frame dependent are the projected shadows of the objects. The shadows are the subjective frame-dependent representations of the real objects.
Best regards,
From: Paul Zielinski [mailto:iksnileiz@gmail.com] Sent: Sunday, February 10, 2013 1:04 AM
To: Alexander Poltorak
Cc: JACK SARFATTI; d14947 Gladstone; Waldyr A. Rodrigues Jr.; james Woodward; Gerry Pelligrini; Saul-Paul Sirag
Subject: Re: KISS OFF! ;-)

Thanks for your response. Comments below.

On 2/9/2013 8:43 PM, Alexander Poltorak wrote:
Paul: see my comments below:
From: Paul Zielinski [mailto:iksnileiz@gmail.com] Sent: Saturday, February 09, 2013 3:01 PM
To: Alexander Poltorak
Cc: JACK SARFATTI; d14947 Gladstone; Waldyr A. Rodrigues Jr.; james Woodward; Gerry Pelligrini; Saul-Paul Sirag
Subject: Re: KISS OFF! ;-)

I'm sorry but I have to say that what you wrote below is simply erroneous.

If I understand your position correctly, you are saying that it is only possible to extract a non-zero tensor from the LC connection as the
non-metricity of a second "Affine" connection, and that since no such connection is available in orthodox GR (which I think everyone
agrees with), the LC connection *has no tensor part in that theory*. In other words, the second Affine connection being unavailable, the LC connection is "irreducibly non-tensorial" in that context.
[AP] That is correct, that’s my assertion.

But this is clearly false, since all that is required here is that it be shown that there is a quantity contained in the LC connection whose components A^u_vw transform according to tensor rules.
[AP] Be my guest, try to prove it. I don’t think you will succeed.

OK then suppose we have the second connection, and use it your way to identify a class of (1, 2) tensors A contained in the LC connection. How can removing the second connection from the formalism change the coordinate transformation properties of the quantity A^u_vw once it has been identified? How can that be possible?

It's one thing to say that it is not "explicitly" present, as you did, but it's quite another to say that it's not present at all.

My position here is that once it is identified (or "extracted" in your terminology) and its transformation properties are established, the removal of the second
connection that is used to identify it only prevents us from calling it the "non-metricity" of the missing connection. It doesn't prevent us from classifying it as
a tensor.

Or is it your position that this quantity goes to zero when the second connection is removed from the theory?

Once this is established, there is no need for a second connection since then the existence of such a quantity depends only on this independent condition being satisfied. So you can use a second connection as a "construction for the sake of proof" in order to isolate the tensor part of the LC connection, and then discard it once the existence of the tensor quantity is established.
[AP] Yes, if you can show that there is a quantity contained in the LC connection whose components A^u_vw transform according to tensor rules, the second connection would not be required. However, you have not shown it and, I am afraid, you will not be able to show it.

Then what is your -Q^u_vw? This is the negative of the non-metricity of the second "Affine" connection, right? You seem to be saying that the components -Q^u_vw no longer transform according to the (1, 2) tensor rules when the second connection is excluded from the theory. Or else that they all go to zero.

How exactly does that work, in your view?

Here is a simple illustration of the fallacy of this proposition. If you choose normal (aka Riemannian) coordinates in the vicinity of a point p, the Christoffel symbols of your LC connection vanish at p, which could not happen if the LC connection contained a tensor component.
Ah OK I see.

But in my theory of the LC connection, the Riemann coordinates make a non-tensor contribution to the LC connection that cancels the tensor geometric contribution, i.e., the matrix representations of the coordinate and geometric contributions to the LC connection sum in the Riemann CS to give a *zero matrix*.

According to my understanding the coordinate contributions to the LC connection depend only on the non-linear character of the diffeomorphic transformations on the coordinate space R^4, and do not at all depend on the intrinsic geometry of the object manifold.
So this is a fundamental difference in our respective understandings of the nature of the LC connection and its relationship to the coordinates and the coordinate space R^n.

From my perspective your argument is circular, since according to my understanding you still get a vanishing LC connection around any given point p in a Riemann CS
even with a non-zero tensor geometric part.

Here is an even simpler proof: take flat Minkowski space. In curvilinear coordinates the Christoffel symbols are non-zero, but in Minkowski coordinates they all vanish globally. How would that be possible if there were a tensor component there?

Easy. The tensor geometric part of the LC connection is zero everywhere on a Minkowski manifold.

Which means that on a flat manifold in R^n-curvilinear coordinates you have a non-zero pure affine connection with a *zero* geometric contribution (i.e., a zero tensor).
Referring to the theory of parallel transport, this is because on a flat manifold the inner product defined by the Minkowski metric
is invariant under transport of vectors along the manifold, and thus there is nothing to correct for in the partial derivatives of tensors,
except for curved-coordinate artifacts. So all that applies in this case, and all that is present, is the coordinate part of the LC connection, which enables the LC covariant derivative to correct the coordinate artifacts. The zero geometric part does nothing.

In other words, in the *unique* decomposition

Γ^u_vw = G^u_vw + X^u_vw

on a globally flat Minkowski manifold, all G^u_vw = 0, some X^u_vw =/= 0.
This makes perfect sense to me, since here we are interested in the covariant first order *geometric* variation of the inner product under infinitesimal displacement of vectors/tensors along the manifold, which clearly vanishes for a Minkowski manifold.
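The flat-space side of this decomposition is easy to check symbolically. A minimal sympy sketch, using the 2D Euclidean plane as a stand-in for the Minkowski case (the `christoffel` helper below is my own construction, not from any of the papers under discussion): the LC connection is non-zero in polar coordinates, purely from the curvilinear coordinates, and vanishes identically in Cartesian ones.

```python
import sympy as sp

# Flat 2D Euclidean plane: ds^2 = dr^2 + r^2 dtheta^2 in polar coordinates
r, th = sp.symbols('r theta', positive=True)

def christoffel(g, x):
    """Levi-Civita coefficients Gamma^a_bc of metric matrix g in coordinates x."""
    n, ginv = len(x), g.inv()
    return [[[sp.simplify(sp.Rational(1, 2) * sum(
                ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                              - sp.diff(g[b, c], x[d]))
                for d in range(n)))
              for c in range(n)] for b in range(n)] for a in range(n)]

# Polar coordinates: non-zero "coordinate part" of the LC connection
Gamma = christoffel(sp.Matrix([[1, 0], [0, r**2]]), [r, th])
print(Gamma[0][1][1], Gamma[1][0][1])   # Gamma^r_thth = -r, Gamma^th_rth = 1/r

# Cartesian coordinates on the same flat plane: every symbol vanishes identically
X, Y = sp.symbols('X Y')
Gc = christoffel(sp.eye(2), [X, Y])
print(all(Gc[a][b][c] == 0 for a in range(2) for b in range(2) for c in range(2)))
```

So on the flat manifold the entire non-zero connection is coordinate artifact, consistent with the claim that the geometric part G^u_vw is zero there.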

Also it is not true that defining a second connection is the *only* way to extract such a tensor, since for example, as I've mentioned, you can take the difference between two LC connections (compatible with different metrics) to get a similar result, and then discard the second metric as a "construction for the sake of proof" afterwards.
[AP] I don't understand your argument here. By introducing the second LC connection associated with another metric you have introduced a second connection, haven't you? Of course, the difference between two LC connections will always be a tensor.
Yes exactly.

The difference between any two Affine connections is always a tensor, called tensor of affine deformation.

But you need the second connection, metric (i.e., LC) or not (i.e. Affine)!

Yes but because the geometric contribution G^u_vw to the LC connection is zero for the flat manifold, it does nothing,
and we can simply remove it from the definition of the resulting tensor quantity without disturbing anything. Then we have a
standalone tensor G^u_vw that only refers by definition to *one* metric. That's the trick.

That's what I meant by "kicking down the ladder behind you". It's just a mathematical construction for the sake of proof.

It's very important to understand that the resulting tensor is a standalone quantity whose transformation properties are not
affected in the slightest by removal of the zero flat space contribution.
I can show that the resulting tensor is the negative of the non-metricity of what I'm calling the "pure affine connection",
which has no geometric part and is thus irreducibly tensorial.
So all roads lead to Rome.

Thus the correct statement here would be that while the tensor that is exposed by the application of the covariant derivative associated with your second "Affine" connection is not the *non-metricity* tensor of the Affine connection if the second Affine connection is not defined, it is still a *tensor* quantity transforming according to tensor rules that is mathematically present in the LC connection, regardless.
[AP] You lost me here again. A tensor obtained by replacing partial derivatives of the metric tensor in Christoffel symbols by covariant derivatives with respect to another connection is by definition the tensor of nonmetricity for that second connection.

Yes exactly. But if the second connection is removed from the theory, there can be no "non-metricity" tensor of *that* connection in the theory.
So in that case you can no longer *call* the tensor quantity extracted by that method a "non-metricity" tensor. But it still has the same tensor transformation properties, and is therefore still a tensor, and is still present in the LC connection, regardless.

That is shown clearly by the Levi-Civita dual metric construction, which exposes the same family of tensors without reference to a second connection; and according to the argument above, when you take one of the metrics to be flat, you can also remove all reference to the flat metric once you have identified a *unique* 3rd rank geometric tensor inside the LC connection, without disturbing the value or the transformation properties of the resulting quantity.

You can do the same thing in your model where you have two metrics. When you construct two LC connections based on each respective metric and then take the covariant derivative of the first metric with respect to the LC connection associated with the second metric, you will get the tensor of nonmetricity of that second connection with respect to the first metric. Or vice versa. In this scenario, albeit you start with two metrics, you still have two connections.

Yes but see above. The kicking-down-the-ladder trick of removing the always-zero geometric part of the flat LC connection from the *definition* of the
resulting tensor yields a standalone quantity whose definition refers only to a *single* metric. Because removing a quantity that is *always zero* from
the definition numerically leaves the same tensor in place.

So we are talking about two things here: (1) the method used to isolate the tensor part of the LC connection; and (2) the tensor properties of the quantity so isolated.

[AP] This is a tautology. Once you have isolated "the tensor part of the LC connection" in (1), obviously "the tensor properties of the quantity so isolated," which is a tensor by your own definition, are guaranteed.

Only (1) depends on the existence of a second connection in your theory, while (2) stands quite independently of (1) in your theory, since it depends only on the transformation properties of the components A^u_vw under coordinate transformations, which are not at all dependent on the existence of the second connection.

So I stand by what I said: your argument in favor of what you understood to be Jack's position on this question is not logically consistent
with your position on the extraction of a non-zero tensor from the LC connection using your second Affine connection.

[AP] I respectfully disagree.

Are you now willing to acknowledge the error? [AP] I'd be glad to acknowledge it, if I knew where the error was.

See above.

There is clearly a fundamental difference in our respective understandings of the LC connection. I am saying that Riemann coordinates
are R^n-curvilinear (in the coordinate space R^n) and therefore make a non-zero contribution to the LC connection, and that this cancels
the geometric part around any point in such a CS. In other words, the respective matrix representations of the two linearly independent
contributions mutually cancel in such a CS.

This is a basic point that I think will have to be resolved before we can go any further with this.



On 2/7/2013 1:34 PM, Alexander Poltorak wrote:
There is no logical contradiction. To get a tensor, you must introduce the second connection. It is not present in the standard formulation of GR – hence Jack correctly states that the LC connection is irreducibly nontensorial. We can get to a tensor, but for that we need the second connection (not the second metric, as you suggest, but the affine connection), which does not exist explicitly in Einstein's GR.

From: Paul Zielinski [mailto:iksnileiz@gmail.com] Sent: Thursday, February 07, 2013 3:59 PM
To: Alexander Poltorak
Cc: JACK SARFATTI; d14947 Gladstone; Waldyr A. Rodrigues Jr.; james Woodward; Gerry Pelligrini; Saul-Paul Sirag
Subject: Re: KISS OFF! ;-)

How can you say that the LC connection decomposes into a tensor and a non-tensor, and at the same time argue that Jack is
right when he says that the LC connection has no tensor part? This seems like a logical contradiction to me.

Of course the LC connection as a whole is a non-tensor, and of course the non-metricity Q^u_vw of the metric compatible LC connection is zero *by definition*. However, it doesn't follow that there is no tensor part in the "LC connection of GR". The LC connection of GR is the LC connection of Riemannian geometry, and the LC connection of Riemannian geometry contains an infinite class of (generally) non-zero tensors, as you yourself have argued.

It seems to me that the correct statement here is that the LC connection of GR does contain this tensor part, but this quantity
has not previously been physically interpreted in *orthodox* GR.

On 2/7/2013 12:38 PM, Alexander Poltorak wrote:
What Jack is talking about by saying there is no tensor component in 1916 GR's LC connection is as follows: a general affine connection, as is well known, is a sum of a metric connection (aka LC connection), which is a non-tensorial quantity, plus nonmetricity and torsion, which are both tensors. The only thing Jack is saying is that in standard 1916 GR both nonmetricity and torsion are zero, and therefore the affine connection is equal to an LC connection, which is a non-tensor -- hence, Jack says, there is no tensorial part in GR's connection, and he is right of course.

Feb 07
  • Jack Sarfatti On Feb 6, 2013, at 3:49 PM, nick herbert <quanta@cruzio.com> wrote:

    Again a very persuasive argument.

    You are correct that the |0>|1> term is small.

    But it is multiplied by a different |0>|1> term (to form the product state |0>|1>|0>|1>.
    The coefficients of this different |0>|1> term are surprisingly large.

    JS: Ah so, Holmes.

    NH: As to your ability to make alphaxr as large as you please. Do you think you can do this
    and 1) preserve normalization of the input coherent state? 2) preserve the truncation condition?

    JS: This issue of the normalization of the input coherent state is non-trivial. In the literature, the authors on entangled coherent Glauber states put in what looks like an observer-dependent normalization forcing the Born probability rule to be obeyed. This can always be done ad hoc, but it is not part of the rules of orthodox quantum theory, where unitary time evolution guarantees invariance of the initial normalization choice, which should not depend on what future choice is made by the measuring apparatus (for strong von Neumann projections).

    For example, consider a trapped ion internal qubit +,- entangled with coherent phonon states z, z' of its center of mass motion. Take the unitary invariant choice

    | > = (1/2)^1/2[|z>|+> + |z'>|->]

    The Born rule trace over the non-orthogonal Glauber states gives the seemingly inconsistent result

    P(+) = P(-) = (1/2)[1 + |<z|z'>|^2]

    P(+) + P(-) > 1

    which I say is a breakdown of the Born probability rule in the sense of Antony Valentini's papers.

    The dynamics of Glauber state ground state Higgs-Goldstone-Anderson condensates with ODLRO (Penrose-Onsager) is inherently nonlinear and non-unitary governed by Landau-Ginzburg c-number equations coupled to q-number random noise. The bare part of the noise dynamics sans coupling to the condensate is of course orthodox quantum mechanical.

    Now what the published paper's authors do is to use the ad hoc renormalized state

    | >' = (1/(2[1 + |<z|z'>|^2]))^1/2 [|z>|+> + |z'>|->]

    giving the usual no-signaling

    P(+) = P(-) = 1/2
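The observer-dependence objection can be made concrete: the ad hoc normalization constant N = (2[1 + |<z|z'>|^2])^(-1/2) depends on the overlap of whichever pair z, z' is prepared. A small sketch, assuming the standard Glauber overlap formula; the sample amplitudes are arbitrary:

```python
import numpy as np

# Ad hoc normalization constant N = (2[1 + |<z|z'>|^2])^(-1/2) from the text,
# using the standard Glauber overlap |<z|z'>|^2 = exp(-|z - z'|^2)
def N(z, zp):
    o2 = np.exp(-abs(z - zp) ** 2)
    return 1.0 / np.sqrt(2 * (1 + o2))

# N varies with which pair z, z' is prepared, i.e. with the downstream
# configuration -- the "observer-dependent normalization" objected to above
print(N(1.0, 1.0))    # identical states: N = 1/2
print(N(1.0, -1.0))   # nearly orthogonal states: N close to 1/sqrt(2)
```

The constant interpolates between 1/2 (identical states) and 1/sqrt(2) (orthogonal states), so it cannot be fixed once and for all independently of the later choice of z, z'.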

    NH: And by the way, just what is the wavefunction for the input coherent state before the beam splitter?
    You are never specific about what has to go into the beamsplitter to achieve the performance you describe.
  • Jack Sarfatti On Feb 6, 2013, at 1:49 PM, Demetrios Kalamidas wrote:

    Hi to all,

    Concerning my scheme, as it appears in the paper, let's do a certain type of logical analysis of the purported result:

    Let's say that the source S has produced 1000 pairs of entangled photons in some unit time interval. This means that we have 1000 left-going photons (in either a1 or b1) AND 1000 right-going photons (in either a2 or b2).

    Let's say we have chosen 'r' to be so small that only 1 out of every 1000 right-going photons is actually reflected into modes a3' and b3'. So, 999 right-going photons have been transmitted into modes a2' and b2'.

    In my eq.6, we observe that the 'quantum erasure' part is proportional to 'ra'. Let's say we choose 'ra' such that '|ra|squared', which gives the probability of this outcome, is 10 percent.

    This means that roughly 100 right-going photons have caused 'quantum erasure', for their 100 left-going partners, by mixing with the coherent states in a2' and b2'.

    Thus, "fringes" on the left will be formed that show a variation of up to 100 photons, as phase 'phi' is varied, between the two outputs of beam splitter BS0.

    Now, for this total batch of 1000 right-going photons, ONLY ONE PHOTON, roughly, has made it into a3' or b3' and mixed with the coherent states over there.

    So, even if that ONE PHOTON contributes to "anti-fringes" on the left, it could only produce a variation of, roughly, up to 1 photon, as 'phi' is varied, between the two outputs of BS0....and that is nowhere near canceling the "fringe" effect, but can, at most, cause a minute reduction in the "fringe" visibility.
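Kalamidas's count can be written out directly; the numbers below are the ones used in the text, and this is just the bookkeeping, not a resolution of the disagreement with Nick's amplitude analysis:

```python
# Back-of-envelope count for the argument above (numbers from the text)
n_pairs = 1000          # entangled pairs produced per unit time interval
p_reflect = 1 / 1000    # |r|^2: chance a right-going photon is reflected into a3'/b3'
p_erasure = 0.10        # |ra|^2: probability of the quantum-erasure outcome

n_into_a3b3 = n_pairs * p_reflect   # ~1 photon mixes with the coherent states at a3'/b3'
n_erased = n_pairs * p_erasure      # ~100 left-going partners contribute to fringes

# The point at issue: a ~100-photon fringe variation vs at most ~1 photon of
# anti-fringe variation, so on these counts the anti-fringes cannot cancel the fringes
print(n_into_a3b3, n_erased)
```

Since |r|^2 can be made as small as desired while |ra|^2 is held at 10 percent by increasing the coherent amplitude, the ratio of the two counts can be pushed arbitrarily high in this accounting.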

    JS: This seems to be a plausible, rational, intuitively understandable informal argument. Very nice. However, words alone without the math can be deceiving.

    DK: Please note that we can choose 'r' to be as small as we desire, i.e. we can arrange so that one out of every billion right-going photons can be reflected into a3' and b3' WHILE STILL MAINTAINING the '|ra|squared'=10percent value (by just cranking up the initial coherent state amplitude accordingly).

    I wrote this logical interpretation of my proposal in order to show that Nick's analysis goes wrong somewhere in predicting equal amplitudes for the "fringe" and "anti-fringe" terms.

    JS: I do hope Demetrios will prove correct of course. Even Nick Herbert desires that. Is young Demetrios the new Arthur? Has he pulled the Sword from The Stone?