"Let us illustrate the problem of signalling with the assistance of the ubiquitous experimenters Alice and Bob. We will place Alice and Bob at some distance apart, and between them there will be a source emitting pairs of entangled particles. To avoid relativistic complications we will assume that Alice, Bob, their detectors, and the particle source are all mutually at rest in an inertial frame (the ‘lab’ frame). Pair after pair of particles are emitted by the source and detected by Alice and Bob's apparatuses, who record their results. Alice and Bob are free to alter the angle of their detectors with each run of the apparatus.

What each experimenter will record is an apparently random sequence of ups and downs, like the results of an honest coin repeatedly tossed; and yet, when they compare results afterward, they will note that certain correlations, generally sinusoidal in form, stand between their results. For example, if the particles are spin-1/2 fermions, and if Alice and Bob are measuring spin in a particular direction, then the correlation between their results will be -cosθ, where θ is the angle between Alice and Bob's detectors. Sinusoidal correlations like these readily violate mathematical inequalities such as those defined by Bell (1964). Itamar Pitowsky (1994) showed that the Bell Inequalities are examples of "conditions of possible experience" first written down by George Boole; these are consistency conditions between measurement results on the assumption that the results of one measurement and the way it is carried out do not influence the measurement of the other particle at the time of measurement. This means that the particular sequence of results that Alice and Bob get at their respective detectors could not have been encoded in the particles at the source; for some relative angles their results are too well correlated or anti-correlated for them to be due to local causes built into both particles when they were emitted." - Kent Peacock, "The No-Signalling Theorems: A Nitpicking Distinction"
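The quoted claim is easy to check numerically. Here is a minimal sketch, assuming the singlet correlation E(a, b) = -cos(a - b) stated above and the standard CHSH combination; the angle choices are the textbook ones that maximize the quantum violation, not anything from the quoted paper:

```python
import math

def E(a, b):
    """Quantum singlet correlation for spin measurements along angles a and b."""
    return -math.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the quantum violation.
a1, a2 = 0.0, math.pi / 2               # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

# CHSH combination; any local hidden-variable model obeys |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))   # 2*sqrt(2) ~ 2.828, exceeding the local-causes bound of 2
```

The sinusoidal correlation reaches |S| = 2√2, beyond Boole's "conditions of possible experience," which is the precise sense in which the results cannot be pre-encoded at the source.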

Here is the setup

Bob is closer to the pair source S than Alice.

B — S—————A

Bob does not change his settings.

Alice at the last moment changes her settings in delayed-choice fashion AFTER Bob's particles in the entangled pairs have already been detected.

This is done in pulse fashion so that there is a good statistical sample of particles in each pulse.

Each setting (ai,b) b-fixed has random outputs 1,0 for each individual detection.

Using the statistical rules of orthodox quantum theory, Alice and Bob compare their raw data after the experiment is over; from the fraction of coincidences in each pulse, Bob can infer the sequence of settings a1, a2, …, aN for the N pulses, which is the encoded message.
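A toy Monte Carlo of this hindsight protocol can be sketched as follows. The alphabet of settings, the number of pulses, and the pairs-per-pulse are hypothetical choices of mine; the physics input is only the orthodox singlet rule that the equal-outcome fraction per pulse is (1 - cos(a - b))/2. Note that, exactly as the text says, the decoding step runs only after both raw data streams are brought together:

```python
import math
import random

random.seed(42)

B_SETTING = 0.0                                   # Bob never changes his setting
ALPHABET = [math.pi / 6, math.pi / 2, 5 * math.pi / 6]  # Alice's possible settings (hypothetical)
PAIRS_PER_PULSE = 20000                           # large statistical sample per pulse

def run_pulse(a):
    """One pulse: count pairs where Alice's and Bob's binary outcomes agree.
    Orthodox QM for singlet pairs: P(agree) = (1 - cos(a - b)) / 2."""
    p_agree = (1 - math.cos(a - B_SETTING)) / 2
    return sum(random.random() < p_agree for _ in range(PAIRS_PER_PULSE))

def decode(agree_count):
    """Invert the measured coincidence fraction to the nearest alphabet setting."""
    frac = agree_count / PAIRS_PER_PULSE
    est = math.acos(1 - 2 * frac) + B_SETTING     # estimate of a - b, in [0, pi]
    return min(ALPHABET, key=lambda a: abs(a - est))

message = [random.choice(ALPHABET) for _ in range(10)]   # Alice's free choices, one per pulse
received = [decode(run_pulse(a)) for a in message]
print(received == message)   # decoding succeeds, given both raw data sets
```

With 20,000 pairs per pulse the statistical error in the recovered angle is tiny compared with the alphabet spacing, so the inferred settings match Alice's choices.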

It is obvious, since Bob did nothing at all, that Alice's freely willed choices of settings a1, a2, …, aN for the N pulses (which are the message) are the active future cause of the back-from-the-future coincidences, unless you want a paranoid conspiracy theory.

Now of course this is not Valentini's "signal nonlocality"; that is a larger theory violating orthodox quantum theory in the way general relativity violates special relativity globally though not locally. With Valentini's PQM extension of QM, Bob can know in advance what Alice will choose, even before she chooses it, without doing the hindsight correlation analysis. However, any attempt by Bob to cause a paradox will fail, either for reasons given by Thorne and Novikov or by David Deutsch.

More on the physical meaning of gauge transformations in both gravity and the electromagnetic-weak-strong interactions 12-18-13 The subject of gauge transformations is almost always presented in an obscure way as a purely formal mathematical exercise without direct physical meaning. This is all clas...

The issue before me is how to address them properly in my Stargate book and in my reviews of his book. I will take several weeks pondering this. I will not make Jim's theory a central part of my book as I have plenty of original material myself.

On Oct 20, 2013, at 12:20 AM, "jfwoodward@juno.com" <jfwoodward@juno.com> wrote:

Gentlefolk,

The continuation of last night's comments. Jack and Paul, by the way, have repaired to a shorter list to continue their mathematical discussions. As far as I am concerned, this process has been like tapping a kaleidoscope. I've known about Einstein's predilection for Mach's ideas since reading John David North's history of cosmology back in the '60s.

The '60s - paleolithic times in cosmology and in general relativity. See Feynman's letter to his wife from the Warsaw GR meeting - it's online.

And with every pass, I learn a bit more -- though a bit less with each pass, at least recently.

As I said yesterday, much of the confusion [leaving aside the silliness about "fictitious" forces] in this business seems to be an outgrowth of the now allegedly mainstream view that gravity is only present when non-vanishing spacetime curvature is present -- a view that seems to have its origins in a neo-Newtonian view that large constant potentials can be gauged away as irrelevant. This comports with the widespread view that, the Aharonov-Bohm experiment notwithstanding, potentials in classical situations are not real. Only the fields derived from them are.

Jim's writing about fictitious forces in his book is hardly intelligible to me.

Jim also seems to be confused about "potentials"

There are superficial formal analogies between Einstein's geometrodynamics and Maxwell's electrodynamics, but one must be very careful in applying them.

Jim cites Aharonov-Bohm. OK, first look at Maxwell's electrodynamics. I use Cartan forms.

We have a potential 1-form A that is a connection for parallel transport of objects in the U1 circle fiber space.

The gauge transformations are

A --> A' = A + df

f = 0-form scalar

d^2 = 0

It's line integrals of A around closed loops that give the observable quantum phase shifts in the Aharonov-Bohm effect via Stokes' theorem etc.

The EM curvature is the 2-form

F = dA which is gauge invariant

F' = dA' = dA + d^2f = dA = F

Maxwell's field equations concern the 3-forms

dF = d^2A = 0 these are two of Maxwell's equations - no magnetic monopoles and Faraday's EMF law (motors, generators ....)

d*F = *J these are the last two - Ampere's law with displacement current and Gauss's law

* = Hodge dual

Finally

d*J = d^2*F = 0

is local conservation of electric current densities

this is a 4-form in 4D spacetime dual to a 0-form.
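The two key facts above - gauge invariance of F and the identity dF = d^2A = 0 - can be verified symbolically. A sketch using SymPy in component language (F_mu_nu = d_mu A_nu - d_nu A_mu), with arbitrary potential functions standing in for A and the gauge 0-form f:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)

# Arbitrary potential 1-form components A_mu and an arbitrary gauge 0-form f
A = [sp.Function(f'A{m}')(*X) for m in range(4)]
f = sp.Function('f')(*X)

def field_strength(A):
    """F_{mu nu} = (dA)_{mu nu} = partial_mu A_nu - partial_nu A_mu."""
    return [[sp.diff(A[n], X[m]) - sp.diff(A[m], X[n]) for n in range(4)]
            for m in range(4)]

F  = field_strength(A)
Ap = [A[m] + sp.diff(f, X[m]) for m in range(4)]   # gauge transform A' = A + df
Fp = field_strength(Ap)

# Gauge invariance F' = F, because d^2 f = 0 (mixed partials commute)
assert all(sp.simplify(Fp[m][n] - F[m][n]) == 0
           for m in range(4) for n in range(4))

# dF = d^2 A = 0: the cyclic combination of derivatives vanishes identically
assert all(sp.simplify(sp.diff(F[m][n], X[l]) + sp.diff(F[n][l], X[m])
                       + sp.diff(F[l][m], X[n])) == 0
           for l in range(4) for m in range(4) for n in range(4))
print("F' = F and dF = 0 verified")
```

Both assertions reduce to the commuting of mixed partial derivatives, which is exactly the content of d^2 = 0.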

This gauge theory extends to the non-Abelian unitary groups SU2 and SU3 that describe the weak and strong forces (Yang-Mills).

Jim's vector theory if done correctly has

g00 = 1 + 2phi/c^2

g0i = Ai

However, the analogy to EM as a gauge theory breaks down completely, because what plays the role of F relative to Jim's A is the Levi-Civita connection - a connection, not a gauge-invariant curvature.

In fact the proper analogy is that the Levi-Civita connection is the analog to the EM A and the curvature tensor is the analog to EM's F.

Conservation of currents is the Bianchi identity in GR.

However, to make the analogy more transparent: general relativity as a local gauge theory is a non-Abelian Yang-Mills theory based on the Poincare symmetry group of Einstein's special relativity.

Einstein's 1905 Special Relativity mathematically is the representation theory of the global 10-parameter Poincare group.

General relativity is properly named because it is a limiting case (zero torsion) of the local gauge theory of the Poincare group, with the real gravity field as the curvature 2-form built from the connection 1-form, just as in Maxwell's electrodynamics.

However, the connection 1-form corresponding to Maxwell's A is not the Levi-Civita connection of the usual 1916 tensor formulation of GR. Rather, it consists of the six spin-connection 1-forms SIJ = -SJI, whose two LIF indices I, J are analogous to the internal indices a of the Yang-Mills potentials Aa for the internal groups SU2 and SU3, together with the four tetrad 1-forms eI.

There are therefore 10 connection 1-forms, one for each "charge" generator of the Poincare group (4 translations, i.e. linear momentum-energy; 3 rotations; 3 Lorentz boosts).

The Levi-Civita connection is derivable from the spin connections and the tetrads.

The four eI are the base 1-forms for a geodesic LIF dual to the tangent vector fiber space basis.

The spin connection allows coupling of gravity to spinors, the Levi-Civita connection does not.

Therefore, Einstein's 1916 geometrodynamics reformulated in modern Cartan-forms has the local gauge structure

D = d + SIJ/ is the Cartan exterior covariant derivative (here / denotes the exterior wedge product)

summation convention over repeated indices - I am too lazy to put in the ^ for upper indices.

TI = DeI = deI + SIJ/eJ = dislocation defect torsion field 2-form

Einstein's 1916 theory requires the ad-hoc constraint

TI = 0

In that limit:

Einstein-Hilbert action density is the 0-form scalar without cosmological constant for simplicity

*(eI/eJ/RKL)

with the Euler-Lagrange equation for the vacuum being the 1-form equation

*(eI/RJK) = 0

in ordinary tensor language this is

Ruv - (1/2)Rguv = 0

Including the matter-field sources gives

*(eI/RJK) = (8piG/c^4)*(TIJK)
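For readability, the structure just sketched can be written in standard textbook notation (∧ for the product written "/" above, ω for the spin connection written S above; sign and index conventions vary between authors, so treat this as a sketch):

```latex
\begin{align}
T^I &= De^I = de^I + \omega^I{}_J \wedge e^J
    && \text{torsion 2-form (set to 0 in 1916 GR)} \\
R^{IJ} &= d\omega^{IJ} + \omega^I{}_K \wedge \omega^{KJ}
    && \text{curvature 2-form} \\
S_{EH} &= \frac{c^3}{16\pi G}\int \epsilon_{IJKL}\, e^I \wedge e^J \wedge R^{KL}
    && \text{Einstein--Hilbert action, zero cosmological constant} \\
0 &= \epsilon_{IJKL}\, e^J \wedge R^{KL}
    && \text{vacuum field equation} \\
R_{\mu\nu} - \tfrac{1}{2}R\, g_{\mu\nu} &= 0
    && \text{equivalent tensor form}
\end{align}
```

Varying the action with respect to the tetrad gives the vacuum equation; varying with respect to the spin connection, with matter spinors absent, gives the zero-torsion constraint rather than requiring it ad hoc.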

More details are in Rovelli's lectures http://www.cpt.univ-mrs.fr/~rovelli/book.pdf

This may be true for all other physical fields. But it is not true for gravity. The vector part of the gravitational potential very definitely does depend on the particular value of the scalar potential calculated. There are some formal technical details that complicate this a bit. But the idea that you can ignore cosmic scale matter currents when computing local gravitational effects is still just wrong.

I find above comment by Jim unintelligible - at least at the present time.

Tonight, what I want to do, however, is talk a bit about a couple of other matters. The first is the "origin" of inertia. You may recall that Jack gave a long list of mechanisms -- the Higgs process, QCD calculations, and suchlike -- that allegedly are the origin of mass, and thus inertia. The fact of the matter is that none of these processes (valid in and of themselves) account for the origin of mass and inertia. Frank Wilczek, after telling you about these processes in his book The Lightness of Being, allows as much (on pages 200 through 202).

I am staring at those pages and I see nothing in Wilczek's text that justifies Jim's extraordinary claim above. Certainly nothing that needs Mach's principle, which simply replaces one mystery with another. Again Jim is confounding two different meanings of "inertia", just as he and others confound two different meanings of "gravity field".

Mach's Principle only is concerned with how matter affects disclination geodesic deviation (aka curvature). The real gravity field of Einstein's geometrodynamics is the field of "geodesic deviation" corresponding to inhomogeneities in Newton's "gravity field", which is a fictitious force field.

http://en.wikipedia.org/wiki/Geodesic_deviation

http://en.wikipedia.org/wiki/Fictitious_force

Mach's principle is not concerned with the origin of the rest masses of elementary particles. Einstein briefly confounded the two, but it led nowhere. Wilczek is concerned in those pages 200-202 with the cosmic landscape/Anthropic principle issue: why these particular numbers and not others? http://www.fourmilab.ch/documents/reading_list/indices/book_487.html

A word on history. What Einstein may or may not have said in 1907 in his informal language as he groped toward GR is completely irrelevant to the modern understanding of general relativity. This is a normal evolution of all good physical theories. I have no patience with cranks that try to make a big deal over that. Such discussions are a waste of time for me.

Inertia is a universal property of stuff. And the only universal interaction that couples stuff is gravity. It is thus obvious that if gravity produces inertial forces (that is, the relativity of inertia obtains), that gravity should have a lot to do with the origin of inertia. (The origin of inertia was the title of Sciama's first paper on this I note. So I'm not making this up.)

Jim's remark above is unintelligible to me. This is what I mean by "inertial force."

inertial force

An apparent force that appears to affect bodies within a non-inertial frame, but is absent from the point of view of an inertial frame. Centrifugal forces and Coriolis forces, both observed in rotating systems, are inertial forces. Inertial forces are proportional to the body's mass. See also General Relativity.

Newton's gravity force per unit test mass -GMr/r^3 is an inertial force in exactly the same way as centrifugal and Coriolis forces are.

They are all part of the Levi-Civita connection which vanishes at the origin of a Local Inertial Frame (LIF).

The "force of gravity" you feel as weight on Earth is the unbalanced electrical force pushing you off a timelike geodesic of the local curvature real gravity field mostly due to the mass of Earth. You need that unbalanced force on you to keep you still (with respect to Earth) in the curved spacetime we live in. Earth pushes up on you and you push down on Earth etc. - action-reaction Newton's 3rd law.
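The "Earth pushes up on you" statement can be made quantitative. As an illustration (the standard Schwarzschild result, not in the original text): the proper acceleration an accelerometer reads for an observer held static at Schwarzschild coordinate r is

```latex
a \;=\; \frac{GM}{r^{2}}\left(1 - \frac{2GM}{rc^{2}}\right)^{-1/2}
```

which reduces to Newton's g = GM/r^2 in the weak-field limit. This is the outward push the ground must supply to keep you off the local timelike geodesic; a freely falling accelerometer reads zero.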

Therefore, I find Jim's discussion of inertial forces here and in his book unintelligible and not mainstream.

This is more obvious still when you discover that phi = c^2 is the condition that must be satisfied for inertial forces to be due to gravity. You don't even have to fudge with dimensions to get this to work.

I also find "phi = c^2" unintelligible and not mainstream physics.

The dimension of phi is velocity squared. You may not like this result. Jack it seems doesn't. But it is a simple consequence of GRT. You might think that this means that should the rest of the matter in the universe be made to disappear (or should you screen an object from the gravity of all that matter) the mass of an object would go to zero -- as is assumed in a number of discussions of Mach's principle and the origin of inertia. But that's not what happens. Read chapters 7 and 8.

Unintelligible to me still as of this date.

The last thing I want to comment on is, how the devil did all this get so bollixed up? Recent kaleidoscope tapping suggests that there were two crucial mistakes that are largely responsible for all the confusion. The first mistake was made by Einstein in 1921. By that time, he had been worked over by Willem de Sitter and disabused of his naive Machianism (which is why he started talking about spacetime as an "ether" about this time). So the claims he put into his Princeton lectures on Mach's principle were more tentative than they had been previously. One of the things he calculated that he took to be in accord with Mach's ideas was the effect of "spectator" matter (that is, nearby stuff) on the mass of an object. He claimed that piling up spectator matter would cause the mass of the object in question to increase (because of its changed gravitational potential energy). A very small amount. But if the origin of mass is the gravitational influence of cosmic matter, this is just the sort of effect you might expect to see.

OK

It turns out that Einstein was wrong about this. That's what Carl Brans showed in 1962 (as part of his doctoral work at Princeton with Bob Dicke). The EP simply forbids the localization of gravitational potential energy. So, the inference that GRT is explicitly non-Machian regarding inertia and its origin is perfectly reasonable. It's the inference that Brans and Dicke -- and everyone else for that matter -- took away. Brans and Dicke, to remedy this presumed defect of GRT, resuscitated Pascual Jordan's scalar-tensor version of gravity, hoping the scalar field part could bring in Machian ideas.

OK

The second crucial mistake is the inference everyone made that Brans' EP argument meant that Mach's principle isn't contained in GRT. Indeed, exactly the opposite is the case. Brans' conclusion from the EP is absolutely necessary for Mach's principle to be contained in GRT. It is the conclusion that must be true if inertial reaction forces are always to satisfy Newton's third law, for it guarantees that phi = c^2 ALWAYS when measured locally. But everyone had adopted the false inference that GRT is non-Machian. It's no wonder that issues of Mach's principle in GRT have been so confused. It's no wonder that C+W (really Wheeler I'd guess, for he witnessed the Mach wars of the '50s and '60s) tried to use Lynden-Bell's initial data and constraint equations approach to implement Einstein's parting shot at Mach's principle in the '20s. The origin of inertia is just too important to let go with the sort of "explanations" now floating around.

Unintelligible. EEP follows trivially once one understands that Newton's gravity force is simply the fictitious force on the weightless geodesic test particle as seen visually in a static LNIF from real electrical forces pushing the static LNIF off a local timelike geodesic.

On a personal note, I've known that phi = c^2 (locally) is the condition to get all of the Mach stuff to work since around 1992. But I was focused on inertial forces and how they might be transiently manipulated. And doing experiments. I won't tell you how long it took for the other aspect of the origin of inertia to sink in -- even though it was staring me in the face. . . .

Unintelligible.

Keep the faith,

Sorry Jim but the faith required here is not scientific in my opinion.

Jim ____________________________________________________________

Jack Sarfatti: Einstein wrote in ~1907: "The breakthrough came suddenly one day. I was sitting on a chair in my patent office in Bern. Suddenly the thought struck me: If a man falls freely, he would not feel his own weight. I was taken aback. This simple thought experiment made a deep impression on me. This led me to the theory of gravity. I continued my thought: A falling man is accelerated. Then what he feels and judges is happening in the accelerated frame of reference. I decided to extend the theory of relativity to the reference frame with acceleration. I felt that in doing so I could solve the problem of gravity at the same time. A falling man does not feel his weight because in his reference frame there is a new gravitational field, which cancels the gravitational field due to the Earth. In the accelerated frame of reference, we need a new gravitational field."

Jack Sarfatti: Those quotes are from early Einstein, around 1907, and Jim Woodward repeats what I have said repeatedly: Einstein himself was still unclear in his own mind back then about how to use words like "accelerated frame". He was in the middle of breaking away from Newton's grip on how to think about gravity.

DV/ds is measured directly locally by an accelerometer clamped to the test particle - real measurement 1

dV/ds = d^2X/ds^2 is measured indirectly by the Doppler radar clamped to the local frame detector - real measurement 2

M^-1 DP(frame)/ds is measured directly by a second accelerometer clamped to the frame Doppler radar - measurement 3

The BASIC LAW is

measurement 1 = measurement 2 - measurement 3

provided that the test particle and the frame Doppler radar are close to each other compared to the smallest local radius of curvature.

The geodesic equation is simply Newton's first law when

measurement 1 = 0

Newton's second law is simply when

measurement 1 =/= 0

there is never any cancellation of real forces on any one object in this context

the LNIF ---> LIF in measurement 3 simply means removing a real unbalanced force on the frame detector according to Newton's 1st law.
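In tensor notation, the basic law relating the three measurements above can be sketched as (standard notation, my gloss):

```latex
\underbrace{\frac{DV^{\mu}}{ds}}_{\text{measurement 1: accelerometer}}
\;=\;
\underbrace{\frac{d^{2}X^{\mu}}{ds^{2}}}_{\text{measurement 2: Doppler radar}}
\;+\;
\Gamma^{\mu}{}_{\alpha\beta}\,\frac{dX^{\alpha}}{ds}\frac{dX^{\beta}}{ds}
```

where the Levi-Civita term carries measurement 3 (the frame's own acceleration) with the opposite sign. Geodesic motion is DV^μ/ds = 0, which is Newton's first law; DV^μ/ds ≠ 0 is Newton's second law with a real force on the right-hand side.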

Jack Sarfatti: Einstein's use of "accelerated" here is in Newton's sense - the rest frame of the freely falling man is kinematically accelerated relative to the Earth

i.e. d^2X/ds^2

The freely falling man's local frame is a LIF - though Einstein had not yet discovered that in 1907, and his informal language is still Newtonian because the modern GR informal language of 1916 and after had not yet emerged.

Jack Sarfatti: "there is a new gravitational field, which cancels the gravitational field due to the Earth."

This is the Einstein remark that Z and other muddled philosophers and Laputan Scholastics pull out of proper context. Yes, Einstein wrote it back around 1907, before he understood the problem the way he eventually would in 1916.

In fact there is only one gravity field not two.

The point is that there was never a real gravity force field on the test particle to begin with.

Therefore, you don't need a second gravity force field to cancel what was never there!

Indeed, there is no way to measure either of these alleged two real gravity forces to begin with. You can never separate them. Accelerometers on test particles always show zero.

Therefore, like the mechanical aether these two forces are not independently measurable - they are errors of thinking - excess metaphysical informal language baggage.

I adopt as a working hypothesis that the flying saucers are real and that they get here through stargates that are shortcut tunnels in Einstein's spacetime continuum. The task is then to see what modern physics has to say about such a scenario even if it's not true. Whether or not it's true is beside the point, and I will not discuss the actual UFO evidence, good, bad and bogus, in this book. I will also write about quantum theory and its relation to computing, consciousness, cosmology, the hologram universe, ending in a scenario for Stephen Hawking's "Mind of God." That Hawking thinks God is not necessary is again beside the point. A good layman's background reference here is Enrico Rodrigo's "The Physics of Stargates: Parallel Universes, Time Travel and the Enigma of Wormhole Physics." If you have the patience, Leonard Susskind's Stanford University lectures in physics online videos are also worth the effort for the serious student.

Jack Sarfatti: Chapter 1 Einstein's Theory of Relativity in a Nutshell

Here I follow “Gravitation and Inertia” by Ignazio Ciufolini and John Archibald Wheeler, which is a more up to date sequel to the Misner, Thorne, Wheeler classic book “Gravitation.”

“Gravity is not a foreign and physical force transmitted through space and time. It is a manifestation of the curvature of spacetime.” Albert Einstein

“First, there was the idea of Riemann that space, telling mass how to move, must itself – by the principle of action and reaction – be affected by mass. It cannot be an ideal Euclidean perfection, standing in high mightiness above the battles of matter and energy. Space geometry must be a participant in the world of physics.” John Archibald Wheeler (aka JAW)

“Second, there was the contention of Ernst Mach that the ‘acceleration relative to absolute space’ of Newton is only properly understood when it is viewed as acceleration relative to the sole significant mass there really is.” JAW

The above statement is now obsolete since ordinary matter in the form of baryons, electrons, photons etc. is now known to be not more than approximately 5% of all the gravitating stuff that we can see in the past light cones of our telescopes. About 70% is large-scale anti-gravitating dark energy accelerating the expansion speed of 3D space. Random quantum vacuum zero-point virtual photons and other spin 1 and spin 2 quanta in quantum field theory have negative pressure three times greater than their positive energy density and may be dark energy. The remaining approximately 25% is clumped shorter-scale gravitating dark matter that holds galaxies together. Random quantum vacuum zero-point virtual electron-positron and other spin ½ quanta have positive pressure three times greater than their negative energy density, causing attractive gravity like dark matter. If dark matter is this quantum vacuum effect dictated by local Lorentz covariance and Einstein's Equivalence Principle (aka EEP), then none of the attempts to detect real on-mass-shell particles whizzing through space to explain dark matter will succeed. There are, however, "f(R)" MOND variations of Einstein's general relativity that attempt to explain both dark matter and dark energy.

Jack Sarfatti: "According to this 'Mach Principle,' inertia here arises from mass there." JAW

This is summarized in Einstein's 1915 local tensor field equation relating the source stress-energy current densities of matter fields to the curvature of spacetime locally coincident with the matter currents. However, when we solve those local field equations we have to impose global boundary/initial conditions and use the method of Green's function propagators to see how matter currents here change spacetime curvature there. The "inertia" in Wheeler's statement above refers to the pattern of force-free timelike geodesic paths of test particles whose mass is small enough to neglect their distortion of the local curvature gravity field. The word "inertia" in the context of Mach's principle above does not refer at all to the actual rest masses of the test particles. Indeed, the test particle rest masses cancel out of the timelike geodesic equations of motion that correspond to Newton's first law of motion. Galileo first understood this, though he did not have the modern mathematical concepts I am using here.

“Third was that great insight of Einstein that … ‘free fall is free float’: the equivalence principle, one of the best tested principles of physics, from the inclined tables of Galilei and the pendulum experiments of Galilei, Huygens, and Newton to the highly accurate torsion balance measurements of the twentieth century, and the Lunar Laser Ranging experiment … With these three clues vibrating in his head, the magic of mind opened to Einstein what remains one of mankind’s most precious insights: gravity is manifestation of spacetime curvature.”

What should we mean by the word “inertia” and what is its relation to gravity? Wheeler means: “The local equivalence of ‘gravitation’ and ‘inertia,’ or the local cancellation of the gravitational field by local inertial frames … A gravitational field is affected by mass-energy distributions and currents, as are the local inertial frames. Gravitational field and local inertial frames are both characterized by the spacetime metric, which is determined by the mass-energy distributions and currents.”

Jack Sarfatti: The same term "gravitational field" is used with several different meanings depending on context. When Wheeler talks about the "cancellation of the gravitational field by local inertial frames" he means Newton's universally attracting radial 1/r^2 field from a spherically symmetric source mass. In the tensor calculus language of Einstein's 1916 general theory of relativity of gravitation, Newton's gravity field is a piece of the Levi-Civita connection terms in the directional covariant derivative of the linear four-momentum of a test particle with respect to the proper clock time along its path or world line in four-dimensional spacetime. The second meaning of "gravitational field" is the tensor curvature, which is the rotational covariant partial derivative "curl" of the Levi-Civita connection with respect to itself. Einstein's theory is a local classical field theory whose measurable properties or "observables" must be tensors and spinors.
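The statement that Newton's gravity field is "a piece of the Levi-Civita connection" can be made concrete in the standard weak-field, slow-motion limit (a sketch; sign and signature conventions vary):

```latex
g_{00} \approx 1 + \frac{2\phi}{c^{2}}, \qquad
\Gamma^{i}{}_{00} \approx \frac{1}{c^{2}}\,\partial_{i}\phi, \qquad
\frac{d^{2}x^{i}}{dt^{2}} \approx -\,c^{2}\,\Gamma^{i}{}_{00} = -\,\partial_{i}\phi
```

With φ = -GM/r this reproduces Newton's inverse-square acceleration. The Newtonian "field" lives in the connection, which can be made to vanish at a point by choosing a local inertial frame, while the tensor gravity field is the curvature built from the connection, which cannot.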

The local geometrodynamic field moves massive test particles in force-free inertial motion on timelike geodesics, but they do not back-react on the geometrodynamic field. We distinguish test particles from source masses, which generate the geometrodynamic field in a similar way to how electric charges generate the electromagnetic field.

Jack Sarfatti: Contrary to popular misconceptions, although the local laws of classical physics have the same "tensor" and/or "spinor" form for all motions of the detectors measuring all the observables possessed by the "test particles", there are privileged dynamical motions of the test particles in Einstein's two theories of relativity, special (1905) and general (1916). This was, in Einstein's words, his "happiest thought." These privileged motions are called "geodesic" motions or "world lines." Test particles are distinguished from "source particles." It is an approximation that test particles do not significantly modify the fields acting on them. They are, strictly speaking, a useful contradiction of the metaphysical principle of no action of Alice on Bob without a direct "back-reaction" of Bob on Alice. Massless point test particles in what physicists call the "classical limit" move on "null" or "lightlike" geodesics. Test particles with mass m move on timelike geodesics that are inside the "light cone" formed by all the light rays that might be emitted from that test particle if it were electrically charged and if it were really accelerating. The latter is a "counter-factual" statement. Look that up on Google. The key point is that Alice is weightless when traveling on a timelike geodesic inside her two local light cones, past and future. There are no real forces F acting on Alice. On the contrary, Bob, who is measuring Alice with a detector (aka "measuring apparatus"), need not be on another timelike geodesic. He can be off-geodesic because real forces can be acting on him, causing him to feel weight. The real forces acting on Bob appear as "fictitious" "inertial pseudo-forces" acting on Alice in Bob's frame of reference. The only real forces in nature that we know about in 2013 are the electromagnetic, the weak and the strong. Gravity is not a real force in Einstein's theory. Gravity is one of the fictitious forces described above.
Real forces on test particles, unlike fictitious forces on them, are not universal. Fictitious inertial pseudo-forces, which appear to act on the observed test particles but are not really acting on them, all depend on the mass m of the test particle.

Jack Sarfatti: The operational litmus test to distinguish a real force from a fictitious inertial pseudo-force is what an accelerometer rigidly clamped to the observed test particle measures. I repeat, because many engineers and even some physicists get muddled on what should be an elementary physics idea: Einstein's "happiest thought", which led to his general theory of relativity in the first place, was his epiphany that an accelerometer clamped to a freely falling object on a timelike geodesic path (i.e., world line) would not register any g-force (i.e., any weight). The apparent kinematical acceleration of a freely falling test particle seen in the gravitational field at the surface of Earth arises because the surface of the rigid Earth at every point has radially outward proper tensor acceleration, whilst the test particle itself has zero proper tensor acceleration. The accelerometer on the test particle registers zero. The accelerometer at a point on the surface of Earth registers the "weight" of an object of rest mass m clamped to it. That every point on a rigid sphere is accelerating radially outward is hard for common-sense engineers and laymen to comprehend. It seems crazy to common sense, but that is precisely the counter-intuitive Alice in Wonderland reality of Einstein's curved spacetime that is battle-tested by very accurate experiments. Consequently, if Alice and Eve are each on separate timelike geodesics very close to each other, and if Bob is not on a timelike geodesic of his own due to real forces acting on him, then Alice and Eve will have the same kinematical acceleration relative to Bob, and they will both feel weightless though Bob feels weight - also called "g-force." This causes a lot of confusion, especially to aerospace missile engineers and high-energy particle physicists, because Newton did consider gravity to be a real force, but Einstein did not. Gravity is not a force. Gravity is the curvature tensor of four-dimensional space-time.
What Newton thought of as a real gravity force, is demoted to a fictitious inertial pseudo-force in Einstein’s theory. In the language of the late John Archibald Wheeler, gravity is a “force without Force”. The best local frame invariant way to think about gravity in an objective local frame-independent way is the pattern of both light like and timelike geodesics whose source is the “stress-energy density tensor field” Tuv of matter. By matter we mean spin 1/2 leptons, quarks, and the spin 1 electromagnetic-weak-strong gauge bosons as well as the spin 0 Higgs vacuum superconductor field that formed only when our observable piece of the multiverse called the “causal diamond” popped out of the false vacuum about 13.7 billion years ago.

Jack Sarfatti: http://en.wikipedia.org/wiki/Flying_saucer “For years it was thought that the Schwarzschild spacetime did in fact exhibit some sort of radial singularity at r = 2GM/c^2. Eventually physicists came to realize that it was not Schwarzschild spacetime th…”


Jack Sarfatti: A firewall is a hypothetical phenomenon where an observer that falls into an old black hole encounters high-energy quanta at (or near) the event horizon. The "firewall" phenomenon was proposed in 2012 by Almheiri, Marolf, Polchinski, and Sully [1] as a possible solution to an apparent inconsistency in black hole complementarity. The proposal is often referred to as the "AMPS" firewall, an acronym for the names of the authors of the 2012 paper. However, the occurrence of this phenomenon was proposed eleven years earlier by Friedwardt Winterberg,[2] and is very different from Hawking radiation. The firewall hypothesis, like black hole complementarity, is quantum gravitational. It arises (in part) from the conjecture that once an old black hole has emitted a sufficiently large amount of Hawking radiation, the mixed quantum state of the black hole is highly entangled with the state of the Hawking radiation thus far emitted. Firewalls are a dramatic change from the usual assumption that quantum gravity is unimportant except in regions of spacetime where the radius of spacetime curvature is on the order of the Planck length; large black holes have low curvature near the event horizon. However, according to Winterberg,[2] a correct theory of quantum gravity cannot ignore the zero-point vacuum energy. Because it must be cut off at the Planck energy, Lorentz invariance is violated at high energies, creating a preferred reference system in which the zero-point energy is at rest. In approaching and crossing the event horizon at the velocity of light in the preferred reference system, an elliptic differential equation holding matter in a stable equilibrium goes over into a hyperbolic differential equation where there is no such equilibrium, with all matter disintegrating into gamma rays without loss of information or violation of unitarity, as has been observed in cosmic gamma ray bursters.
The firewall idea seems to be related to the "energetic curtain" around a black hole, proposed by Braunstein,[3] but it depends on the unproven conjecture that black hole entropy is entirely entropy of entanglement. http://en.wikipedia.org/wiki/Firewall_(physics)


Jack Sarfatti: “What is it that breathes fire into the equations and makes a universe for them to describe? … However, if we discover a complete theory, it should in time be understandable by everyone, not just by a few scientists. Then we shall all, philosophers, scientists and just ordinary people, be able to take part in the discussion of the question of why it is that we and the universe exist. If we find the answer to that, it would be the ultimate triumph of human reason -- for then we should know the mind of God.” (p. 193, A Brief History of Time)

Rodrigo shows that the classical energy conditions and chronology-protection arguments against time travel to the past, as well as the quantum-inequality restrictions on negative energy balanced by positive energy, are not likely to be fatal barriers against stargate technology. Wikipedia has now become quite reliable for physics/math articles after a rocky start of several years, especially on biographies of living movers and shakers. Rather than repeat standard content on technical jargon that is prerequisite to understanding this book, I give URLs to Wikipedia and, at times, other explanations.

The same idea appears in quantum theory in David Bohm’s interpretation. Orthodox quantum theory violates Wheeler’s philosophical principle of action and reaction: the quantum information field Q acts on the classical particles and fields without any direct reaction of the latter on the former. Then, and only then, is it impossible to use entanglement as a stand-alone communication channel not requiring a classical signal key to decrypt the message at only one end of the entangled whole. In other words, “background independence” in Einstein’s 1916 general relativity is equivalent to entanglement signal nonlocality violating orthodox quantum theory.
The non-dynamical spacetime background of Einstein’s 1905 special relativity is equivalent to the “no signaling” circular arguments of Abner Shimony’s “passion at a distance.”
http://www.skyandtelescope.com/news/84347742.html
http://arxiv.org/pdf/1302.6165v1.pdf
http://en.wikipedia.org/wiki/Vacuum_state
http://en.wikipedia.org/wiki/Virtual_particle
http://en.wikipedia.org/wiki/Lorentz_covariance
http://en.wikipedia.org/wiki/Green's_function
http://en.wikipedia.org/wiki/Geodesic
http://en.wikipedia.org/wiki/Levi-Civita_connection
http://en.wikipedia.org/wiki/Covariant_derivative

With seven years of data, the WMAP cosmology satellite has refined the age of the universe and other key cosmic parameters. The results strengthen the "standard model" of inflationary cosmology.


It's clear that DK's scheme won't work - nor will any scheme that is based on unitary linear orthodox quantum theory using orthogonal base states.
However, concerning Valentini's, Josephson's, Weinberg's, Stapp's, and my approaches, which are different from and independent of DK's: while the trace operation used to get expectation values of observables on quantum density matrices is invariant under unitary transformations of the base states, which preserve orthogonality, that is not true for the transformation from an orthogonal Fock basis to the non-orthogonal Glauber coherent-state basis, which is clearly a non-unitary transformation OUTSIDE the domain of validity of orthodox quantum theory. I think many pundits have missed this point.
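Both halves of this claim can be checked numerically. As an editorial sketch (not part of the post), the code below verifies that trace expectation values are invariant under a unitary change of orthonormal basis, and that Glauber coherent states are not orthogonal, so the Fock-to-coherent-state map is not a unitary basis change:

```python
import numpy as np
from math import factorial, sqrt, exp

# Editorial sketch: (1) Tr[rho O] is invariant under a unitary basis change;
# (2) coherent states overlap, |<a|b>|^2 = exp(-|a-b|^2) != 0, so they do
# not form an orthogonal basis. Truncated Fock space, dimension N.

N = 30  # Fock-space truncation

def coherent(alpha, n_max=N):
    """Coherent state |alpha> in a truncated Fock basis."""
    c = np.array([alpha**n / sqrt(factorial(n)) for n in range(n_max)],
                 dtype=complex)
    return np.exp(-abs(alpha)**2 / 2) * c

rng = np.random.default_rng(0)

# Random density matrix rho and Hermitian observable O
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)
O = rng.normal(size=(4, 4))
O = O + O.T

# Random unitary from a QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

before = np.trace(rho @ O)
after = np.trace((Q @ rho @ Q.conj().T) @ (Q @ O @ Q.conj().T))
assert np.isclose(before, after)   # unitary invariance of the trace

# Coherent-state overlap: nonzero for distinct alpha, beta
overlap = abs(np.vdot(coherent(1.0), coherent(2.0)))**2
print(overlap, exp(-abs(1.0 - 2.0)**2))   # both ~0.3679
```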

Hawking's former assistant Bernard Carr spells this out clearly in "Can Psychical Research Bridge the Gulf Between Matter and Mind?", Proceedings of the Society for Psychical Research, Vol. 59, Part 221, June 2008.

Begin forwarded message:

From: nick herbert <quanta@cruzio.com>
Subject: Re: AW: AW: More on the |0>|0> term
Date: June 14, 2013 11:14:57 AM PDT
To: Suda Martin <Martin.Suda.fl@ait.ac.at>

Thank you, Martin. I finally get it. My confusion lay in the attribution of the short calculation below. I thought this calculation (which leads to rA) was due to Gerry.

Instead it is a calculation done by Gerry but attributed to DK. It was not a calculation that DK ever carried out but arose from Gerry taking Gerry's FULL CALCULATION, applying the Kalamidas approximation and getting an incorrect result.

The correct result is Zero on which you and Gerry agree.

So if Kalamidas had carried out the calculation this way, he would have gotten an incorrect answer.

I hope I have now understood the situation correctly.

But Kalamidas did not carry out the calculation that Gerry displays. DK did not start out with the FULL CALCULATION and then approximate.

DK starts with an approximation and then calculates.

DK starts with an approximation and carries out a series of steps that all seem to be valid but whose conclusion is preposterous. Furthermore, the approximation (weak coherent states) is used in dozens of laboratories by serious quantum opticians without, as far as I am aware, leading to preposterous or impossible conclusions.

Therefore it seems to me that the calculation below is another nail in the Kalamidas coffin, BUT THE BEAST IS STILL ALIVE.

1. No one yet has started with Kalamidas's (approximate) assumptions, and discovered a mistake in his chain of logic.

2. No one yet has started with Kalamidas's (approximate) assumptions, followed a correct chain of logic and shown that FTL signaling does not happen.

Martin Suda came the closest to carrying out problem #2. He started with the Kalamidas (approximation) assumptions and decisively proved that all FTL terms are zero. But Martin's proof contains an unphysical |0>|0> term that mars his triumph.

I am certain that the Kalamidas claim is wrong. The FULL CALCULATION refutations of Ghirardi, Howell and Gerry are pretty substantial coffin nails. But unless I am blind there seems still something missing from a clean and definitive refutation of the Kalamidas claim. See problems #1 and #2 above.

I do not think that Nick is being stubborn or petty in continuing to bring these problems to your attentions. I should think it would be a matter of professional pride to be able to bring this matter to a clean and unambiguous conclusion by refuting Kalamidas on his own terms.

Thank you all for participating in this adventure whatever your opinions.

Nick Herbert

On Jun 14, 2013, at 3:29 AM, Suda Martin wrote:

Nick,

Thank you for comments!

I would still like to explain my short considerations below a bit more precisely, anyway. I feel there was perhaps something unclear as regards my email (12th June), because you wrote "you were confused".

I only considered the following:

DK disclosed a calculation (see attachment) which is completely wrong because he took a mathematical limit (see the first line, where he omitted the term ra^{+}_{a3}) that is absolutely not justifiable here (just as CG mentioned, see below), because both parts are equally important if you evaluate the expectation value properly. If you take both parts you get exactly zero: alpha^{*}(tr^{*}+rt^{*})=0. So one does not obtain a quantity like (r alpha)^{*}.
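As an editorial sketch (not part of the email), the identity can be checked numerically with the common lossless beam-splitter convention t = cos θ, r = i sin θ, which is an assumption here; any unitary beam-splitter matrix obeys the same Stokes reciprocity relation:

```python
import math

# Editorial sketch (assumed convention t = cos(theta), r = i*sin(theta)
# for a lossless beam splitter): the Stokes/reciprocity relations force
# t*conj(r) + r*conj(t) = 0, which is why the expectation value
# alpha^{*}(t r^{*} + r t^{*}) vanishes identically.

theta = 0.3
t = math.cos(theta)
r = 1j * math.sin(theta)

assert abs(abs(t)**2 + abs(r)**2 - 1) < 1e-12   # unitarity: |t|^2 + |r|^2 = 1

combo = t * r.conjugate() + r * t.conjugate()
print(abs(combo))   # 0.0 for any theta
```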

That’s all. There is absolutely no discrepancy between me and CG.

Nice regards, Martin

-----Original Message----- From: nick herbert [mailto:quanta@cruzio.com] Sent: Wednesday, June 12, 2013, 23:33

Subject: Re: AW: More on the |0>|0> term

"And again, the notion that an alleged approximate calculation (I say "alleged" because as with everything else there are correct and incorrect approximate calculations) based on a weak signal coherent state somehow trumps an exact computation valid for any value of the coherent state parameter, is, well, just insane. If you want to see where things go wrong just take more terms in the series expansions. Add up enough terms and, viola, no effect! One can't get much more specific than that." --Christopher Gerry

Actually, Chris, one can get much more specific than that by explicitly displaying the Correct Approximation Scheme (CAS) and showing term by term that Alice's interference vanishes (to the proper order of approximation).

Absent a correct CAS and its refutation these general claims are little more than handwaving.

Produce a CAS. Refute it.

Is anyone up to this new Kalamidas challenge? Or does everyone on this list except me consider deriving a CAS a waste of time?

Nick Herbert

On Jun 12, 2013, at 2:03 PM, CHRISTOPHER GERRY wrote:

We are both right: the two terms cancel each other out! That the whole expectation value is zero is actually exactly what's in our paper's Eq. 9. This happens because the reciprocity relations must hold. That Kalamidas thought (or maybe even still thinks) his calculation is correct is at the heart of the matter; that is, he is either unable to do the calculations or he can do them but chooses not to because they don't get him where he wants to go.

The Kalamidas scheme will not work, on the basis of general principles, as we showed in the first part of our paper (see also Ghirardi's paper).

And again, the notion that an alleged approximate calculation (I say "alleged" because as with everything else there are correct and incorrect approximate calculations) based on a weak signal coherent state somehow trumps an exact computation valid for any value of the coherent state parameter is, well, just insane. If you want to see where things go wrong just take more terms in the series expansions. Add up enough terms and, voilà, no effect! One can't get much more specific than that.

Christopher C. Gerry
Professor of Physics, Lehman College, The City University of New York
718-960-8444
christopher.gerry@lehman.cuny.edu

---- Original message ----
Date: Wed, 12 Jun 2013 12:28:16 -0700
From: nick herbert <quanta@cruzio.com>
Subject: Re: AW: More on the |0>|0> term
To: Suda Martin
All--

Excuse me for being confused. Gerry refutes Kalamidas by showing that an omitted term is large. Suda refutes Kalamidas by showing that the same term is identically zero. What am I missing here?

I wish to say that I accept the general proofs. Kalamidas's scheme will not work as claimed. That is the bottom line. So if the general proofs say FTL will fail for full calculation, then it will certainly fail for approximations.

The "weak coherent state" is a common approximation made in quantum optics. And dozens of experiments have been correctly described using this approximation. So it should be a simple matter to show if one uses Kalamidas's approximation, that FTL terms vanish to the appropriate level of approximation. If this did not happen we would not be able to trust the results of approximation schemes not involving FTL claims.

Gerry's criticism is that Kalamidas's scheme is simply WRONG--that he has thrown away terms DK regards as small. But in fact they are large. Therefore the scheme is flawed from the outset.

If Gerry is correct, then it seems appropriate to ask: Is there a CORRECT WAY of formulating the Kalamidas scheme using the "weak coherent state" approximation, where it can be explicitly shown that this correct scheme utterly fails?

It seems to me that there are still some loose ends in this Kalamidas affair, if not a thorn in the side, at least an unscratched itch.

It seems to me that closure might be obtained, and the Kalamidas affair properly put to rest, if everyone can agree that: 1. DK has improperly treated his approximations; 2. using the CORRECT APPROXIMATION SCHEME, the scheme abjectly fails just as the exact calculation says it must.

Why should it be so difficult to construct a correct description of the Kalamidas proposal, with CORRECT APPROXIMATIONS, and show that it fails to work as claimed?

As seen from the Ghirardi review, there are really not that many serious FTL proposals in existence. And each one teaches us something--mostly about some simple mistakes one should not make when thinking about quantum systems. Since these proposals are so few, it is really not a waste of time to consider them in great detail, so we can learn to avoid the mistakes that sloppy thinking about QM brings about.

When Ghirardi considers the Kalamidas scheme in his review, I would consider it less than adequate if he did not include the following information:

1. Kalamidas's scheme is WRONG because he treats approximations incorrectly. 2. When we treat the approximations correctly, the scheme fails, just as the general proofs say it must.

Gerry has provided the first part of this information. What is seriously lacking here is some smart person providing the second part.

Nick Herbert

On Jun 12, 2013, at 8:50 AM, Suda Martin wrote:

Dear all,

Yes, if one calculates precisely the Kalamidas expression given in the attachment of CG's email, one obtains exactly

alpha^{*}(tr^{*}+rt^{*})=0

due to the Stokes relation for beam splitters. No approximations are necessary. So I am astonished at the sloppy calculations of Demetrios.

Cheers, Martin

________________________________________ From: CHRISTOPHER GERRY [CHRISTOPHER.GERRY@lehman.cuny.edu]

Subject: Re: More on the |0>|0> term

I probably shouldn't jump in on this again, but...

I can assure you that there's no thorn in the side of the quantum optics community concerning the scheme of Kalamidas. There are only people doing bad calculations. Despite claims to the contrary, our paper, as with Ghirardi's, does specifically deal with the Kalamidas proposal. It is quite clearly the case that EXACT calculations in the Kalamidas proposal show that the claimed effect disappears. To suggest that it's there in the approximate result obtained by series expansion, and therefore must be a real effect, is simply preposterous. All it means is that the approximation is wrong, in this case due to the dropping of important terms.

The whole business about the |00> term and whatever else (the beam splitter transformations and all that) is not the issue. I'm astonished at how the debate on this continues. The real problem, and I cannot emphasize it enough, is this: Kalamidas cannot do quantum optical calculations, even simple ones, and therefore nothing he does should be taken seriously. As I've said before, his calculation of our Eq. (9), which I have attached here, is embarrassingly wrong. It's obvious from the expression of the expectation value in the upper left that there have to be two terms in the result, both containing the product of r and t. But Kalamidas throws away one of the terms, which is of the same order of magnitude as the one he retains. Or maybe he thinks that term is zero via the quantum mechanical calculation of its expectation value, which it most certainly is not. His limits have been taken inconsistently. So he not only does not know how to do the quantum mechanical calculations, he doesn't even know how or when the limits should be taken. There's absolutely no point in debating the meaning of the results of incorrect calculations.

Of course, by incorrectly doing these things he gets the result he wants, and then thinks it's the duty of those of us who can do these calculations to spend time showing him why his calculations are wrong, which he then dismisses anyway. My point in again bringing up this specific calculation of his is not to say anything about his proposal per se, but to demonstrate the abject incompetence of Kalamidas in trying to do even the most elementary calculations. And if anyone still wonders why I'm angry about the whole affair: well, what should I feel when some guy unable to do simple calculations tries to tell established quantum optics researchers, like me and Mark Hillery, where we're wrong, and dismisses our paper as "irrelevant"? He doesn't even seem to know that what he said was an insult.

And finally, the continued claim that the specific proposal of Kalamidas has not been addressed must simply stop. It has been, repeatedly. I suspect this claim is being made because people don't like the results of the correct calculations. That's not the problem of those of us who can carry through quantum optical calculations.

CG

Christopher C. Gerry
Professor of Physics, Lehman College, The City University of New York
718-960-8444
christopher.gerry@lehman.cuny.edu

---- Original message ----
Date: Tue, 11 Jun 2013 14:12:19 -0700
From: nick herbert <quanta@cruzio.com>
Subject: Re: More on the |0>|0> term
To: "Demetrios Kalamidas" <dakalamidas@sci.ccny.cuny.edu>

yer right, demetrios-- the |00> term on the right is always accompanied in Suda's calculation by a real photon on the left.

But this is entirely non-physical. No real or virtual quantum event corresponds to this term.

Especially with the high amplitude required for Suda-interference-destruction.

So your specific approximate FTL scheme, despite many general refutations, still remains a puzzlement.

A thorn in the side of the quantum optics community.

if any think otherwise, let them put on the table one unambiguous refutation OF YOUR SPECIFIC PROPOSAL--not of their own, nor of somebody else's totally different FTL signaling scheme.

Nick

On Jun 11, 2013, at 1:27 PM, Demetrios Kalamidas wrote:

Nick,

The EP and CSs do derive from the same laser pulse: part of the pulse pumps the nonlinear crystal and the other part is split off accordingly to create the CSs. However, you are still missing the point: if no EP pair is created, then you will certainly get '00' on the right sometimes... BUT there will be no left photon in existence. The problem with the Suda term is that when it appears, it appears only accompanied by a left photon in a superposition state; i.e., it always appears as (10+e01)(00+11).

Think of it this way: suppose you just have an EP source that creates pairs, with one photon going left and the other right. Imagine that on the right there is a highly transparent BS with, say, |r|^2=0.001. That means that only one out of every thousand right photons from the EP is reflected, and 999 are transmitted. So, for every 1000 counts ON THE LEFT, there will be 999 counts transmitted on the right. Now introduce, at the other input of that same BS, a CS, so that it has a tiny reflected portion of amplitude |ralpha>. Allegedly there will then arise cases where no photon is found in the transmitted channel, with probability equal to |ralpha|^2. Since alpha is arbitrary, we can choose |ralpha|=0.1. This means that the probability of getting no photon in the transmitted channel will be |ralpha|^2=0.01... which now means that, for every 1000 EP pairs created, we will get 1000 counts on the left, but only 990 counts in the transmitted channel on the right! Whereas, without the CS in the other channel, there would be 999 counts on the right for that same 1000 counts on the left.
Demetrios
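As an editorial sketch (not part of the email), the counting argument works out as follows, with the arithmetic carried through from the stated numbers:

```python
# Editorial sketch (not part of the email): the counting argument with the
# numbers as stated in the email, arithmetic carried through.

n_pairs = 1000        # EP pairs created -> 1000 counts on the left
R = 0.001             # beam-splitter reflectivity |r|^2
p_vacuum = 0.1**2     # |r*alpha|^2, with |r*alpha| = 0.1 chosen arbitrarily

right_without_cs = n_pairs * (1 - R)         # ~999 transmitted counts
right_with_cs = n_pairs * (1 - p_vacuum)     # alleged ~990 transmitted counts

print(right_without_cs, right_with_cs)
```

The alleged deficit of ~9 counts per 1000 pairs, controllable from the other wing, is exactly what makes the claim an FTL-signaling claim.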

On Tue, 11 Jun 2013 09:44:42 -0700 nick herbert <quanta@cruzio.com> wrote:

Demetrios--

I don't know how the entangled pair (EP) and CSs are generated. I supposed all three are created with a single PULSE in a non-linear crystal. Now one can imagine that this pulse fails to create an EP but does create a CS. Then some of Bob's detectors will fire but no EP is formed. So this kind of process could lead to lots of |0>|0> terms. However, what we need are not "lots of |0>|0> terms" but a precise amplitude (rA) of the |0>|0> term. Given our freedom (in the thought-experiment world) to arbitrarily select the efficiency of the non-linear crystal, it is hard to see why the elusive |0>|0> term would have exactly the right magnitude and phase to cancel out the interference. Your original FTL scheme still continues to puzzle me.

Nick

On Jun 11, 2013, at 6:54 AM, Demetrios Kalamidas wrote:

Nick,

The 'entire experimental arrangement' is indeed where the problem (mystery) arises: When both CSs are generated it is easy to understand that '00' will arise, simply because each CS has a non-zero vacuum term. However, the entire arrangement means inclusion of the entangled photon pair: Any time that pair is generated, you are guaranteed to get a photon on the right, regardless of whether the CSs are there. So, when entangled pair and CSs are present, there must be at least one photon at the right. In fact, when only one photon emerges at the right WE KNOW both CSs were empty.

On Mon, 10 Jun 2013 10:34:30 -0700 nick herbert <quanta@cruzio.com> wrote:

Demetrios--

Sarfatti sent around a nice review of quantum optics by Ulf Leonhardt that discusses the structure of path-uncertain photons. Here is an excerpt:

"The interference experiments with single photons mentioned in Sec. 4.3 have been performed with photon pairs generated in spontaneous parametric downconversion [127]. Here the quantum state (6.28) of light is essentially |0_1>|0_2> + ζ|1_1>|1_2>. (6.29) In such experiments only those experimental runs count where photons are counted; the time when the detectors are not firing is ignored, which reduces the quantum state to the photon pair |1_1>|1_2>. Postselection disentangles the two-mode squeezed vacuum. We argued in Sec. 4.3 that the interference of the photon pair |1_1>|1_2> at a 50:50 beam splitter generates the entangled state (4.24). Without postselection, however, this state is the disentangled product of two single-mode squeezed vacua, as we see from the factorization (6.6) of the S matrix. The notion of entanglement is to some extent relative."

This excerpt suggests a possible origin for Suda's |0>|0> term. In the above process, it's just the inefficiency of the downconverter that generates a |0>|0> term. That won't do the trick. But in your more complicated situation--containing two properly timed coherent states--when Bohr's "entire experimental arrangement" is considered, the |0>|0> term may arise naturally with the proper amplitude and phase. It would correspond to events when the coherent states were successfully generated but there were no events in either upper or lower path. If this conjecture can be shown to hold true, then the original Kalamidas proposal would be refuted by Suda's calculation. The trick would be to examine--in a thought-experiment way--exactly how those two |A> beams are created, looking for entanglement with |0>|0> states in the part of the experiment considered in your proposal.
Nick

ref: Ulf Leonhardt's wonderful review of quantum optics, starting with reflections from a window pane and concluding with Hawking radiation.
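As an editorial sketch of the Leonhardt point (assumptions: each mode truncated to a qubit, ζ taken real), the state |0,0> + ζ|1,1> is entangled while the postselected pair |1,1> is a product state; the entanglement entropy of the reduced state makes the difference explicit:

```python
import numpy as np

# Editorial sketch: the (unnormalized) downconversion state
# |0,0> + zeta|1,1> is entangled, while the postselected pair |1,1> is a
# product state. We check via the entropy of the Schmidt spectrum.
# Assumptions: qubit truncation per mode, real zeta.

zeta = 0.3
psi = np.zeros((2, 2), dtype=complex)   # psi[n1, n2] = amplitude of |n1,n2>
psi[0, 0] = 1.0
psi[1, 1] = zeta
psi /= np.linalg.norm(psi)

def entanglement_entropy(state):
    """Entropy of the reduced state, via the Schmidt (singular) values."""
    s = np.linalg.svd(state, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

post = np.zeros((2, 2), dtype=complex)
post[1, 1] = 1.0   # postselected photon pair |1,1>

print(entanglement_entropy(psi))    # > 0: entangled before postselection
print(entanglement_entropy(post))   # 0: product state after postselection
```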

Jack Sarfatti: This is hot. If the effect works, it's the basis for a new Intel, Microsoft & Apple combined, for those smart venture capitalists, physicists & engineers who get into it. This is as close as we have ever come since I started the ball rolling at Brandeis in 1960-61 and then in the mid-70s (see MIT Physics Professor David Kaiser's "How the Hippies Saved Physics"). I first saw this as a dim possibility in 1960 at Brandeis grad school and got into an intellectual fight about it with Sylvan Schweber and Stanley Deser. Then the flawed thought experiment published in the early editions of Gary Zukav's Dancing Wu Li Masters in 1979 - pictured in the Hippies book - tried to do what DK may now have actually done: that is, control the fringe visibility at one end of an entangled system from the other end without the need of a coincidence-counter correlator after the fact. Of course, like Nick Herbert's FLASH at the same time in the late 70s, it was too naive to work, and the nonlinear optics technology was not yet developed enough. We were far ahead of the curve as to the conceptual possibility of nonlocal retrocausal entanglement signaling, starting 53 years ago at Brandeis when I was a National Defense Fellow Title IV graduate student.

Jack Sarfatti

On Feb 5, 2013, at 12:28 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

Thanks Nick. Keep up the good work. I hope to catch up with you on this soon. This may be a historic event of the first magnitude if the Fat Lady really sings this time and shatters the crystal goblet. On the Dark Side this may open Pandora's Box into a P.K. Dick Robert Anton Wilson reality with controllable delayed choice precognition technology. ;-)

On Feb 5, 2013, at 10:38 AM, nick herbert <quanta@cruzio.com> wrote:

Demetrios--

Looking over your wonderful paper I have detected one inconsistency but it is not fatal to your argument.

On page 3 you drop two r terms because "alpha", the complex amplitude of the coherent state can be arbitrarily large in magnitude.

But on page 4 you reduce the magnitude of "alpha" so that at most one photon is reflected. So now alpha cannot be arbitrarily large in magnitude.

But this is just minor quibble in an otherwise superb argument.

This move does not affect your conclusion--which seems to directly follow from application of the Feynman Rule: For distinguishable outcomes, add probabilities; for indistinguishable outcomes, add amplitudes.
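As an editorial sketch (not from the email), the Feynman rule can be stated numerically: indistinguishable alternatives interfere because amplitudes add, while distinguishable alternatives do not:

```python
import cmath

# Editorial sketch (not from the email) of the Feynman rule quoted above:
# indistinguishable outcomes -> add amplitudes;
# distinguishable outcomes  -> add probabilities.
# The two path amplitudes below are illustrative values.

a1 = cmath.rect(0.5, 0.0)        # path-1 amplitude
a2 = cmath.rect(0.5, cmath.pi)   # path-2 amplitude, opposite phase

p_indist = abs(a1 + a2) ** 2             # amplitudes add: destructive, ~0
p_dist = abs(a1) ** 2 + abs(a2) ** 2     # probabilities add: ~0.5

print(p_indist)
print(p_dist)
```

Whether Bob's two |1>|1> outcomes are added the first way or the second way is precisely what the presence or absence of which-way information decides.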

To help my own understanding of how your scheme works, I have simplified your KISS proposal by replacing your coherent states with the much simpler state |U> = x|0> + y|1>. I call this variation of your proposal KISS(U).

When this state |U> is mixed with the entangled states at the beamsplitters, the same conclusion ensues: there are two |1>|1> results on Bob's side of the source that cannot be distinguished -- and hence must be amplitude added.

The state |U> would be more difficult to prepare in the lab than a weak coherent state, but anything goes in a thought experiment. The main advantage of using state |U> instead of coherent states is that the argument is simplified to its essence and needs no approximations. Also, the KISS(U) version shows that your argument is independent of special properties possessed by coherent states, such as overcompleteness and non-orthogonality. The state |U> is both complete and orthogonal -- and works just as well to prove your preposterous conclusion: that there is at least one way of making photon measurements that violates the No-Signaling Theorem.

Thanks for injecting some fresh excitement into the FTL signaling conversation.

warm regards Nick Herbert

Jack Sarfatti On Feb 5, 2013, at 1:15 PM, Demetrios Kalamidas <dakalamidas@sci.ccny.cuny.edu> wrote:

Nope, no refutation I can think of so far....and I've tried hard. Demetrios

Joe Ganser: Jack do you know a lot of people at CUNY? I take Ph.D. classes there.

Joe Ganser: I'm interested in who may do these sorts of topics in NYC.

Jack Sarfatti: Daniel Greenberger!


On Feb 5, 2013, at 1:15 PM, Demetrios Kalamidas <dakalamidas@sci.ccny.cuny.edu> wrote:

Nope, no refutation I can think of so far....and I've tried hard. Demetrios

On Tue, 5 Feb 2013 13:09:28 -0800 nick herbert <quanta@cruzio.com> wrote:

Thanks, Demetrios. I understand now that alpha can be large while alpha x r is made small.

Also I notice that your FTL signaling scheme seems to work both ways. In your illustration the photons on the left side (Alice) are combined at a 50/50 beam splitter so they cannot be used for which-way information. However if the 50/50 beamsplitter is removed, which-way info is present and the two versions of |1>|1> on the right-hand side (Bob) are now distinguishable and must be added incoherently, which presumably will give a different answer and observably different behavior by Bob's right-side detectors. So your scheme seems consistent -- FTL signals can be sent in either direction.

This is looking pretty scary. Do you happen to have a refutation up your sleeve or are you just as baffled by this as the rest of us?

Nick

Therefore, Nick, it is premature for you to claim that the full machinery of the Glauber coherent states, i.e. distinguishable overcomplete non-orthogonality, is not necessary for KISS to work. Let's not rush to judgment and proceed with caution. This technology, if it were to work, would be as momentous as the discovery of fire, the wheel, movable type, calculus, the steam engine, electricity, relativity, nuclear fission & fusion, the Turing machine & von Neumann's programmable computer concept, DNA, the transistor, the internet ...

On Feb 5, 2013, at 12:18 PM, Demetrios Kalamidas <dakalamidas@sci.ccny.cuny.edu> wrote:

Hi Nick,

And thanks much for your careful examination of my scheme....however, there appears to be a misunderstanding. Let me explain:

"On page 3 you drop two r terms because "alpha", the complex amplitude of the coherent state can be arbitrarily large in magnitude."

I drop the two terms in eq. 5b because they are proportional to 'r'... and 'r' approaches zero. However, the INITIAL INPUT amplitude 'alpha' of each coherent state can be as large as we desire, in order to get whatever SMALL BUT NONVANISHING AND SIGNIFICANT product 'r*alpha' we want, which is related to the terms I retain.

In other words, for whatever 'r*alpha' we want, let's say 'r*alpha'=0.2, 'r' can be as close to zero as we want, since we can always input a coherent state with a large enough initial 'alpha' to give us the 0.2 amplitude that we want.
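Numerically, the limit Demetrios describes looks like this (a sketch with illustrative numbers, not from the paper; 'dropped' and 'retained' just track the orders of the two kinds of terms):

```python
# Hold the retained product r*alpha fixed at 0.2 while the
# beamsplitter reflectivity r goes to zero.
target = 0.2
rows = []
for r in [1e-1, 1e-3, 1e-6, 1e-9]:
    alpha = target / r     # required initial coherent-state amplitude
    dropped = r            # order of the terms dropped in eq. 5b
    retained = r * alpha   # order of the terms that are kept
    rows.append((r, alpha, dropped, retained))
    print(f"r={r:.0e}  alpha={alpha:.1e}  dropped~{dropped:.0e}  retained={retained:.2f}")
```

The dropped terms shrink without bound while the retained product stays pinned at 0.2.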

So, terms proportional to 'r' are vanishing, while terms proportional to 'r*alpha' are small but significant and observable. You state:

"But on page 4 you reduce the magnitude of "alpha" so that at most one photon is reflected. So now alpha cannot be arbitrarily large in magnitude."

The magnitude of 'alpha' is for the INITIAL coherent states coming from a3 and b3, BEFORE they are split at BSa and BSb. It is this 'alpha' that is pre-adjusted, according to how small 'r' is, to give us an appropriately small reflected magnitude, i.e. 'r*alpha'=0.2, so that the "....weak coherent state containing at most one photon...." condition is reasonably valid.

Demetrios

On Feb 5, 2013, at 12:28 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

Thanks Nick. Keep up the good work. I hope to catch up with you on this soon. This may be a historic event of the first magnitude if the Fat Lady really sings this time and shatters the crystal goblet. On the Dark Side this may open Pandora's Box into a P.K. Dick Robert Anton Wilson reality with controllable delayed choice precognition technology. ;-)

On Feb 5, 2013, at 10:38 AM, nick herbert <quanta@cruzio.com> wrote:

Demetrios--

Looking over your wonderful paper I have detected one inconsistency but it is not fatal to your argument.

On page 3 you drop two r terms because "alpha", the complex amplitude of the coherent state can be arbitrarily large in magnitude.

But on page 4 you reduce the magnitude of "alpha" so that at most one photon is reflected. So now alpha cannot be arbitrarily large in magnitude.

But this is just a minor quibble in an otherwise superb argument.

This move does not affect your conclusion--which seems to directly follow from application of the Feynman Rule: For distinguishable outcomes, add probabilities; for indistinguishable outcomes, add amplitudes.
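A toy numerical rendering of the Feynman Rule (amplitudes and relative phase chosen arbitrarily for illustration):

```python
import cmath

# Two paths leading to the same |1>|1> outcome, with arbitrary
# illustrative amplitudes and a pi relative phase.
amp1 = 0.2 / cmath.sqrt(2)
amp2 = cmath.exp(1j * cmath.pi) * 0.2 / cmath.sqrt(2)

# Distinguishable outcomes: add probabilities.
p_dist = abs(amp1)**2 + abs(amp2)**2

# Indistinguishable outcomes: add amplitudes, then square.
p_indist = abs(amp1 + amp2)**2

print(p_dist)    # ~0.04: no interference
print(p_indist)  # ~0: complete destructive interference
```

Whether Bob's two |1>|1> outcomes fall in the first case or the second is exactly what the which-way ambiguity decides.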

To help my own understanding of how your scheme works, I have simplified your KISS proposal by replacing your coherent states with the much simpler state |U> = x|0> + y|1>. I call this variation of your proposal KISS(U).

When this state |U> is mixed with the entangled states at the beamsplitters, the same conclusion ensues: there are two |1>|1> results on Bob's side of the source that cannot be distinguished -- and hence must be added as amplitudes.

The state |U> would be more difficult to prepare in the lab than a weak coherent state, but anything goes in a thought experiment. The main advantage of using state |U> instead of coherent states is that the argument is simplified to its essence and needs no approximations. Also, the KISS(U) version shows that your argument is independent of special properties possessed by coherent states, such as overcompleteness and non-orthogonality. The state |U> is both complete and orthogonal -- and works just as well to prove your preposterous conclusion -- that there is at least one way of making photon measurements that violates the No-Signaling Theorem.
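A sketch of the kind of calculation KISS(U) involves -- my own code, using one common beamsplitter convention (factor of i on reflection) and arbitrary x, y, none of which come from Nick's actual calculation:

```python
import math
from collections import defaultdict

def beamsplitter(state, t):
    """Apply a lossless beamsplitter (transmission t, factor i*r on
    reflection) to a two-mode state given as {(n1, n2): amplitude}."""
    r = math.sqrt(1.0 - t * t)
    out = defaultdict(complex)
    for (n1, n2), amp in state.items():
        for k in range(n1 + 1):        # mode-1 photons that transmit
            for m in range(n2 + 1):    # mode-2 photons that reflect
                p, q = k + m, (n1 - k) + (n2 - m)
                coeff = (math.comb(n1, k) * t**k * (1j * r)**(n1 - k)
                         * math.comb(n2, m) * (1j * r)**m * t**(n2 - m))
                # (a+)^p (a+)^q |0,0> = sqrt(p! q!) |p,q>, input normalized
                coeff *= math.sqrt(math.factorial(p) * math.factorial(q)
                                   / (math.factorial(n1) * math.factorial(n2)))
                out[(p, q)] += amp * coeff
    return dict(out)

# One photon in mode 1 meets |U> = x|0> + y|1> in mode 2 at a 50/50
# beamsplitter; x and y are arbitrary illustrative values.
x, y = math.sqrt(0.8), math.sqrt(0.2)
out = beamsplitter({(1, 0): x, (1, 1): y}, t=math.sqrt(0.5))
for fock in sorted(out):
    print(fock, round(abs(out[fock])**2, 4))
```

At a 50/50 splitter the |1,1> output amplitude is proportional to t^2 - r^2 and cancels, a Hong-Ou-Mandel-style effect showing why the amplitude-vs-probability bookkeeping matters.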

Thanks for injecting some fresh excitement into the FTL signaling conversation.

On Feb 3, 2013, at 12:42 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

Fred, I think you are making an error here. The vacuum |0> is as good a state as |1> in Fock space for a given mode-radiation oscillator. DK's eq. 1 is a FOUR PHOTON state -- two REAL PHOTONS & TWO VIRTUAL PHOTONS.

Note also that Glauber coherent states use |0> in a fundamental way.

Quantum optics interferometer experiments use the |0> states, e.g. papers by Carlton Caves:

http://info.phys.unm.edu/~caves/

http://info.phys.unm.edu/~caves/research.html

http://info.phys.unm.edu/~caves/talks/talks.html

Carlton M. Caves, "Quantum-limited measurements: One physicist's crooked path from quantum optics to quantum information" (PDF), www.phys.virginia.edu/Announcements/Seminars/.../S1466.pd...
Carlton M. Caves, "Quantum metrology" (lecture slides, University of New Mexico), info.phys.unm.edu/~caves/talks/qmetrologylectures.pdf

On Feb 3, 2013, at 12:26 PM, fred alan wolf <fawolf@ix.netcom.com> wrote:

Nick and Demetrios, basic quantum physics tells me that eq. 1 of KISS is a 4-photon state. That is my point. Let the Hamiltonian go. Ergo, the claim that it is a 2-photon state cannot be correct. Eq. 1 says something about phases as well. If I write a quantum wave function as a sum over i of |ai>|bi>|ci>|di>, then there must be 4 objects, not two, regardless of how large i is. Even if |ai> is a sum of possibilities such as (|A1>+|A2>), and similarly for the bi, ci and di states, I still can't get this to reduce to a sum over two-particle states. Nicht wahr?
So I am confused how you both seem to see this as OK as far as quantum physics is concerned.

Jack, do you or do you not see my point?
Best Wishes,

I'll quickly respond to Fred's question. The state in eq.1 is perfectly legitimate and has been experimentally realized already. In this scheme it is tacitly assumed that the source S is a down-conversion source, since this is by far the main way in which entangled photon pairs are created. These sources need a pump to stimulate the nonlinear medium (i.e. the down-conversion crystal). Usually about one in every million pump photons is split into an entangled pair, each photon of which comes out at a specific angle and energy.

The way to create two photons in modes a1a2 is to have the pump come from the bottom and pass upward; the way to create two photons in modes b1b2 is to BACK-REFLECT the same pump downward through the crystal again. So, each run of the experiment is ONE DOUBLE-PASS of the pump through the crystal....most of the time you get nothing and, to a very good approximation, the rest of the time you get one pair created (either in a1a2 or b1b2). Of course there is also the far smaller amplitude of creating two pairs (one in a1a2 and one in b1b2, or two in a1a2, or two in b1b2), according to the expansion of the Hamiltonian....but these are negligible terms and do not affect the outcomes in all these entanglement experiments.

Demetrios
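The bookkeeping Demetrios describes can be sketched with hypothetical numbers; epsilon and the combinatorial factors below are my own illustrative assumptions, not values from the paper:

```python
# epsilon: hypothetical amplitude for creating one pair on a single
# pump pass through the crystal.
eps = 1e-3
p_one_pair  = 2 * eps**2   # one pair, in a1a2 OR b1b2, per double pass
p_two_pairs = 3 * eps**4   # a1a2+b1b2, or two in a1a2, or two in b1b2
ratio = p_two_pairs / p_one_pair
print(ratio)  # ~1e-6: double-pair events negligible, as stated
```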

On Feb 3, 2013, at 11:48 AM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

I agree with Nick.

On Feb 3, 2013, at 11:25 AM, nick herbert <quanta@cruzio.com> wrote:

No need for Hamiltonians, Fred. The KISS proposal is as simple as LEGOs. Every part of it is something THAT HAS ALREADY BEEN DEMONSTRATED IN A LAB.

Kalamidas has put these existing Legos together in an imaginative way that seems to permit superluminal signaling.

But probably does not.

If you, Fred, are waiting for a Hamiltonian formulation of this experiment you will be waiting for a long time and will have essentially disconnected yourself from the KISS adventure.

Nick Herbert

KISS = Kalamidas's Instant Signaling Scheme

---- end of Nick's message; I wrote:

OK, there are two separate issues here.

Question 1: Fred, if DK's wave function could be made, then do you agree with DK's logic for the rest of the paper?

I think the above wave function is perfectly legitimate in principle although whether one can make it in the lab is another question.

(1) is perfectly sensible in quantum field theory in Fock space.

There are four radiation oscillators with two real photons and two zero point photons distributed among them. The vacuum states |0> are legitimate states.

Question 2. Accepting (1) is DK's logic etc. correct? I think Nick Herbert is working on that question.

I personally am still thinking about the whole thing looking at Mandel as well and trying to understand the whole thing better.

My previous work on the Glauber state distinguishable non-orthogonality loophole in the no-signaling belief is generally compatible with the spirit of what DK is proposing. I mean

On Feb 3, 2013, at 9:53 AM, fred alan wolf wrote:

Guys and girls,

I don't believe this will work, simply because to my knowledge there is no foundation based on quantum physics which supports this initial supposedly 2-particle quantum wave function. What Hamiltonian does it solve? You can always invent quantum wave functions (which are not connected to reality), but to claim this one (which apparently uses 4 photons, not 2) has solved the FTL problem is simply bad physics as I see it. If I am wrong here, will somebody explain how this quantum wave function is a two-body quantum wave function? Can you show me the Hamiltonian it is a solution for?

Thanks Nick. What would Santa do without you in his workshop? ;-) Looks good.
Remember I have been stressing the relevance of Glauber coherent states. They are obviously distinguishably non-orthogonal & over-complete.
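The non-orthogonality Jack invokes has a standard closed form, |&lt;alpha|beta&gt;|^2 = exp(-|alpha - beta|^2); a quick numerical check (my own sketch, with arbitrary example amplitudes):

```python
import math

def overlap_sq(alpha, beta):
    """|<alpha|beta>|^2 = exp(-|alpha - beta|^2) for Glauber coherent states."""
    return math.exp(-abs(alpha - beta) ** 2)

# No two coherent states are exactly orthogonal:
weak   = overlap_sq(0.2, -0.2)   # weak states: large overlap, ~0.85
strong = overlap_sq(5.0, -5.0)   # macroscopically distinct: ~e^-100
print(weak, strong)
```

For weak amplitudes like those in KISS, the overlap is far from zero, which is why distinguishability arguments there need care.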

On Feb 2, 2013, at 1:48 PM, nick herbert <quanta@cruzio.com> wrote:

Demetrios--

Congratulations again on your clever FTL-signaling scheme.

I am busy constructing (on my white board) your thought experiment using my own notation.

First: I hope you do not mind the acronym I have chosen for this project = KISS

KISS = Kalamidas's Instant Signaling Scheme.

Second: It has become conventional to imagine these signals sent between Alice and Bob. So everything on the left side should be labeled "A" and everything on the right side "B".

Since A and B photons are delivered into two (entangled) modes, I have chosen to label these modes U and D (for Up and Down). In this labeling convention the basic entangled state vector |ES> becomes

Also it is conventional for beam-splitter modes to be labeled 1, 2, 3, 4 where 1 and 2 are inputs and 3 and 4 are outputs.

So for my thought experiment I will label the 4 modes of Bob's two beam splitters U and D as |U1>, |U2>, |U3>, |U4> and |D1>, |D2>, |D3>, and |D4>, with a similar convention for the 50/50 beamsplitter encountered by Alice's photons.

I like your clever use of coherent states to muddle the which-way question. But instead of inputting coherent states at Bob's beamsplitters U and D, I will be inputting the coherent XYZ states |BU> and |BD>,

where |BU> = x|0> + y|1> + z|2>

and |BD> has a similar definition.

These are truncated coherent states, sufficient to produce the ambiguities you claim will lead to coincidence-less, Bob-controllable interference in Alice's 50/50 beamsplitter, and are easier to calculate than the infinite sums of real coherent states.

Thanks for the opportunity to return to the algebra of few photons on an asymmetric beam splitter. And for the chance to reformulate your clever KISS experiment in terms that make sense to me.

I am always looking for (high quality) work to do.

And your KISS proposal is both of high quality and within my modest abilities for calculating quantum outcomes.

warm regards Nick Herbert http://quantumtantra.blogspot.com

"Numerous experiments to date, mainly in the quantum-optical domain, seem to strongly support the notion of an inherent nonlocality pertaining to certain multiparticle quantum mechanical processes. However, with apparently equal support, this time from a theoretical perspective, it is held that these nonlocal “influences” cannot be exploited to produce superluminal transfer of information between distant parties. The theoretical objection to superluminal communication, via quantum mechanical multiparticle entanglement, is essentially encapsulated by the “no-signaling theorem” [1]. So, it is within this context that we present a scheme whose mathematical description leads to a result that directly contradicts the no-signaling theorem and manifests, using only the standard quantum mechanical formalism, the capacity for superluminal transmission of information."