

See the pdf I uploaded August 5, 2011 on Library Resources "Advanced Propulsion"
Aug 05
First of all, this word "inertia" is bandied about too loosely, causing much of the confusion. It has at least two different meanings in relativity.

Meaning #1: The origin of inertia, in one of Sciama's senses, is the attempt to explain why test particles move on geodesics,
i.e. zero-g-force, free-float, "weightless" "inertial motion," universal and independent of the rest masses m of the test particles.

This is essentially trying to explain Newton's first law of mechanical motion generalized to curved space-time. Einstein with Infeld actually proved the geodesic rule from his nonlinear field equations

Guv = Ruv - (1/2)guvR = 8piGTuv/c^4

modeling the test particle as a singularity in the 4th-rank curvature tensor field Ruvwl, if I remember correctly off the top of my head.

The Einstein-Infeld calculation is purely local not needing Mach's Principle.

Mach's Principle is archaic and quaint, referring to "distant matter" that makes no real sense at all in modern precision cosmology since the 1990s, because the only kind of "matter" Mach knew about makes up only about 4% of all the gravitating stuff in the universe!

Woodward refers to the distant matter in "causal contact" with his local flux-capacitor space-drive prototype device, which, much to his engineering credit, he actually has running in his lab. On the other hand, the situation seems to be like cold fusion, high-frequency gravity-wave propulsion, and the Podkletnov-type allegations discussed at the JASON meeting I attended a few years ago at General Atomics in La Jolla: hard to replicate by independent parties.

Woodward is not clear about what he means by "causal contact." At times he seems to mean only the past light cone out to the "particle horizon" of the flux capacitor, but this contradicts the key assumption in his eq. (44) that the universe has a constant density of "distant matter" - if by that he means what Mach meant by the "distant stars." There is great ambiguity here, and his equations rest on very shaky ground.

Keep in mind Tamara Davis's diagram

At other times Woodward alludes to Wheeler-Feynman, and of course Sciama worked with Hoyle, who extended the Wheeler-Feynman classical retrocausal ideas to quantum theory and cosmology. This brings in the future light cone of his device reaching to our future event horizon - hence John Cramer's "transactions," Yakir Aharonov's "destiny" (post-selection final boundary condition), and even the 't Hooft-Susskind hologram conjecture.

Meaning #2 of "inertia" is resistance to non-gravity forces pushing the test particle of rest mass m off its timelike geodesic, which is determined locally by the source stress-energy density tensors Tuv of the non-gravity fields. No need for Mach's Principle here.
So this is essentially F = ma, Newton's 2nd law of mechanical motion of test particles, generalized to curved spacetime.

The computation of the rest masses m of ordinary matter (real, on-mass-shell in the sense of quantum field theory's Feynman propagators) does not require gravity or Mach's principle; it is explained quantitatively for hadrons by Frank Wilczek's quantum chromodynamics supercomputer computations, using the Higgs field couplings to the quarks as input to the program.

What Sciama means by the "origin of inertia" is really the origin of the universal geodesic inertial motion of neutral test particles. He does not mean that Mach's principle is needed to explain why or how the rest mass of the electron is ~ 10^-27 grams or the rest mass of the proton is ~ 10^-24 grams - that is not a gravity physics problem needing the cosmological scale Mach's Principle.
On Aug 4, 2011, at 2:33 PM, JACK SARFATTI wrote:

I found my copy of L&L's Electrodynamics of Continuous Media - tricky stuff.

I will have more to say on why my eq. 8 for the gravity field in materials in Woodward's paper is completely justified, because of similar arguments Landau & Lifshitz make for the D and H electromagnetic fields in moving conductors and dielectrics.

The basic rule of thumb is that you keep the covariant form of the equations while changing the constitutive parameters.

Yes, E and B are the exterior vacuum applied EM fields independent of the material structure

D and H are the interior material forms depending on induced polarization (charge separations from E) as well as magnetic moment couplings to B

The energy densities go as E.D and B.H.

For example, starting from the vacuum with its bare permittivity and permeability, the field tensor for D & H in the material has the same form it would have in vacuum, but with the modified lumped-parameter "dressed" permittivity and permeability - even though the speed of light is slower in the material, so strictly speaking it is not invariant as it is in vacuum, as the Cerenkov effect demonstrates.
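The dressed-parameter picture can be sketched numerically. This is a minimal illustration assuming only the textbook relations n = sqrt(eps_r * mu_r) and v = c/n; the water permittivity is the optical-frequency value, used purely as an example.

```python
import math

C = 299_792_458.0  # vacuum speed of light, m/s

def light_speed_in_medium(eps_r: float, mu_r: float) -> float:
    """Phase speed of light in a medium with relative ("dressed")
    permittivity eps_r and permeability mu_r: v = c/n, n = sqrt(eps_r*mu_r)."""
    return C / math.sqrt(eps_r * mu_r)

def cherenkov_threshold_beta(eps_r: float, mu_r: float) -> float:
    """Minimum particle speed as a fraction of c for Cerenkov radiation:
    the particle must outrun the slowed in-medium light speed, beta > 1/n."""
    return 1.0 / math.sqrt(eps_r * mu_r)

# Water at optical frequencies: eps_r ~ 1.77 (n ~ 1.33), mu_r ~ 1
v_water = light_speed_in_medium(1.77, 1.0)
beta_min = cherenkov_threshold_beta(1.77, 1.0)
```

A charged particle with beta above `beta_min` (about 0.75 here) radiates Cerenkov light in water while still moving slower than the vacuum speed of light.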

e.g. for moving dielectrics, for exterior measurements by inertial observers outside the dielectric (in contrast to Maxwell nano-demons inside the dielectric, clamped to its lattice or moving slowly through it with v/c << 1, i.e. Galilean relativity)

"to ensure relativistic invariance ... it is necessary that the components of the vectors D and H should be transformed exactly as the components of a four-tensor similar to" the one for E and B in vacuum section 76 of L&L ED of continuous media

i.e. a simple charge renormalization of sorts. I do the same thing in my eq. 8, keeping Einstein's gravity field equations formally covariant inside the metamaterial, in the same way that Landau and Lifshitz do for Maxwell's EM field equations inside a dielectric that can be in motion relative to an exterior observer.

So what I have done in eq 8 is very plausible and definitely Popper falsifiable. Vague handwaving appeals to "background independence" are definitely specious at this early stage in the game of unexplored physics.

Again - there is no group velocity for near coherent EM fields because they do not obey an f = f(k) dispersion relation unlike coherent far fields.

Coherent near induction fields (e.g. Tesla's playground) are Glauber states of virtual photons of all four polarizations, with one gauge constraint.

Coherent far radiation fields (lasers, masers) are Glauber states of real photons of only two transverse polarizations (for zero rest mass).

changing the subject

F = dA

dF = 0 is Faraday induction plus the absence of real magnetic monopoles - it is topological, independent of the material structures

d*F = *J is Ampere's law and Gauss's law including the interiors of materials

i.e. *F uses induced D and H fields not the bare vacuum E and B fields.

d is the Cartan exterior derivative in globally flat Minkowski spacetime of 1905 Einstein special relativity

d^2*F = d*J = 0 is LOCAL conservation of electric current densities.
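The identity d^2 = 0 behind this local conservation law can be checked concretely with a discrete exterior derivative. This is a toy 2D grid sketch, not the 4D Minkowski case, but the mechanism is the same: the oriented sums around each plaquette telescope to zero identically.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))  # a 0-form: scalar samples on grid vertices

def d0(f):
    """Discrete exterior derivative of a 0-form: differences along edges."""
    ax = f[1:, :] - f[:-1, :]   # values on x-directed edges
    ay = f[:, 1:] - f[:, :-1]   # values on y-directed edges
    return ax, ay

def d1(ax, ay):
    """Discrete exterior derivative of a 1-form: oriented circulation
    around each plaquette (a discrete curl)."""
    return (ax[:, :-1] + ay[1:, :]) - (ax[:, 1:] + ay[:-1, :])

# d(df) vanishes on every plaquette: the discrete analogue of d^2 = 0
ddf = d1(*d0(f))
```

Expanding any plaquette's circulation, every vertex value of `f` appears once with each sign, so the result is zero for arbitrary `f`, mirroring how d^2*F = 0 encodes current conservation independently of the field configuration.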

Maxwell's EM field coupled to Einstein's curved spacetime gravity field is gotten by universal minimal coupling (i.e. the strong equivalence principle), in which

d is replaced by D, the covariant exterior derivative

D = d + Spin Connection 1-form

Einstein's curvature 2-form is

R = D(Spin Connection)

S = spin connection

Maxwell's EM equations coupled to gravity are simply

F' = DA = dA + S/\A

DF' = 0


i.e., using d(dA) = 0,

d(S/\A) + S/\dA + S/\S/\A = 0

D*F' = *J

D*J = 0 i.e. local conservation of electric current densities in curved spacetime.

On Aug 4, 2011, at 11:19 AM, X wrote:

I don't honestly see a distinction between the Higgs field in the Standard Model, and the gravinertial field in Mach's Principle.  

That's because you have no understanding of the prerequisite mathematics beyond vague, informal, plain-English metaphors from pop books and internet posts. It does not matter how high your IQ is if you can't speak the language - in this case applied mathematics. Start by reading Roger Penrose's "The Road to Reality."

OK test yourself

1) What are the equations for the Higgs field in the standard model?

2) What are the premises for that model?

3) What are the equations for the gravi-inertial field in Mach's principle?

4) What are the premises for it?

5) What is the relation of it to Einstein's general relativity?

The Higgs field comes out of the standard model in Special Relativity, there is no gravity in it at all.

In other words, are your words above not even wrong malapropisms?

Test yourself. I am not trying to insult you or anyone else. I am only acting as any hard-core physics professor at an elite university (like Cornell, UCSD, UCB, Cal Tech ...) would do - at least in the 1950's and 60's, before political correctness took over.

Both are universal.

Inertial pseudo-forces are contingent artifacts of the acceleration of the small detector that forms the local frame of reference.

Newton's universal gravity force per unit test particle is actually, in Einstein's terms of curved spacetime, simply the covariant acceleration needed to keep the detector "static," i.e. fixed in space relative to the static, spherically symmetric mass curving the spacetime. This is shown in Hawking's picture.

The post-inflation Higgs and Goldstone smooth coherent c-number ODLRO vacuum fields are the amplitudes and phases of condensates of virtual massless lepton and quark particle-antiparticle pairs and their induced virtual photons, massless W mesons and massless gluons that give rest masses to real leptons and quarks as well as real W mesons at high energies.

There is no connection at all to Newton's "gravity force," which is a contingent, low-energy, emergent inertial pseudo-force corresponding to a very particular, arbitrary, convenient choice of frame - since we are stuck on the surface of hard rock.

The only objective c-number gravity fields are the tetrads e^I and spin connections w^I^J for the geodesic local inertial frames (LIFs) in curved spacetime and the curvature tensor fields derived from them.

Both are said to give matter its mass or mass its inertia, depending upon your choice of phrase.  In either case, what Jim is talking about is manipulating a non-zero field in order to temporarily fluctuate the mass caused by the field.

This is hogwash. The gravi-inertial field PROPERLY DEFINED in battle-tested Einstein GR, has nothing whatsoever to do with creating the invariant rest mass of elementary particles. That's pure bunkum.

If one wanted to presume inertia was an intrinsic property of mass, as it sounded Jean-Luc was doing the other day, then Higgs bosons would not need the Higgs field to have mass.  Seems the Standard model is saying this isn't so.

Just trying to be clear on this for myself - I don't see a distinction with a difference here between what the Standard Model says about the Higgs field and what Mach, Sciama, Woodward, etc. say about the origins of inertia. In the Standard Model, a universal, non-zero field gives the Higgs its mass. I think the history of the Standard Model is in accord with what Mach was saying well before, and in fact it requires once again that Mach was right, the same as GR requires that Mach was right.

On Thu, Aug 4, 2011 at 2:05 PM, Jack Sarfatti <sarfatti@pacbell.net> wrote:

On Aug 4, 2011, at 10:39 AM, Paul Zielinski <iksnileiz@gmail.com> wrote:

> So you're saying that the Higgs mechanism doesn't actually require or predict the existence of a detectable Higgs boson?

Right. Can you see an isolated Cooper-pair boson in empty space outside the crystal lattice?
> Then why all the hoopla about the search for the "God particle"?

Some models predict it like pulling a helium atom out of a superfluid helium 4 condensate.

There is a lot of leeway. Spontaneous breakdown, in the vacuum solutions, of symmetries of the underlying local Lagrangian field equations - equations that themselves keep the symmetries - is very general.

You can break the Lorentz group, as well as the general covariance group, in the solutions, just as U(1) breaks in a superconductor.
> On 8/4/2011 9:41 AM, JACK SARFATTI wrote:

>> Z's remark below is a common misconception based on the fact that many physicists do not properly understand the physical meaning of the distinction between real and virtual particles and spontaneous symmetry breaking.
>> The Higgs field that gives the small rest masses to leptons and quarks is macroquantum off-diagonal order of the density matrix of  virtual particles inside the vacuum. It does not even require that there be a real Higgs boson, though there might be. If the off-diagonal order ODLRO is a Bose-Einstein condensate of Higgs Goldstone bosons possibly one will see it. However, that is not the only alternative. The difference is seen in the case of the BCS superconductor. There, the ODLRO is a Bose-Einstein condensate of Cooper-Pairs, but you never see an isolated Cooper pair outside of the condensate, because the Cooper pair of electrons is bound by crystal vibrations - VIRTUAL phonons. The vacuum Higgs field may be similar, e.g. "Cooper pairs" of virtual quarks. If they are virtual quark-antiquark pairs they will be virtual mesons.
>> In short the constituents of a virtual Bose-Einstein condensate do not need to exist as isolated real particles, though they might. Whether they do or not depends on other details of the dynamics.
>> On Aug 4, 2011, at 8:59 AM, Paul Zielinski wrote:
>>> If CERN can't produce convincing evidence for the existence of the Higgs boson, I will begin to suspect that
>>> the Standard Model is also a Ponzi scheme.
>>> On 8/4/2011 8:56 AM, .... wrote:
>>>> Feynman thought that string theory was something which physicists could
>>>> immerse themselves in which was interesting, engaging, beautiful, and
>>>> ultimately useless. It's a ponzi scheme to the extent that it keeps them
>>>> employed but will never actually produce anything worthwhile. If Feynman
>>>> knew what he was talking about. Lee Smolin's book, "The Trouble with
>>>> Physics" discusses this and other related questions.
>>>> PSB, Ph.D.
Aug 04

AM modulated entangled laser beam signaling?

Posted by: JackSarfatti
See v2 paper I just uploaded on August 3, 2011 to the Library Resources Quantum Computer Archive
Aug 04

"Words, words, words. I get sick of words."


Guv = 8pi n^4 GTuv/c^4    (8),    where n = index of refraction

JW:What's wrong with all this? Well, several things are wrong. Working back through the argument, the first thing to note is that the substitution that leads to equation (8) is not valid.
JS: Give reasons. I think it's an empirical issue that must be settled by experiment.
JW:Neither is it supported by the facts of observation.
JS: I believe you have just made a false statement. What published experiments refute equation (8)? Please provide precise references.
JW: It's not valid because the polarizable vacuum model is not background independent, and any plausible theory that is physically equivalent to general relativity must be background independent.
JS: You have confounded my theory with Puthoff's theory. Yes, I was motivated qualitatively by Puthoff's PV theory. However, I agree with you that Puthoff's theory is not background independent. Indeed, that's the basic reason I rejected it - he does not use tensors. Indeed, the basic dynamical laws of nature should be background independent, but your argument about my equation (8) is fallacious because their solutions need not be! We can see why with the simpler example of Maxwell's electromagnetic field theory. Maxwell's equations must be Lorentz invariant when written in terms of fundamental particle sources moving through the vacuum in the old microscopic Lorentz formulation, with electric charges as Newton's hard massy marbles. However, when you are talking about electromagnetic fields in a material, you use a lumped-parameter effective field theory version, and the Lorentz invariance is spontaneously broken for a sub-class of interior measurements made inside the material (in principle). Another example is the formation of crystal lattices. The basic laws of nature are thought to be translationally invariant, but when a crystal forms, the continuous 3D translational Lie group is spontaneously broken to a discrete crystal group, say for the motions of electrons - as in Bloch band theory, for example.
Therefore, as an effective low energy coarse-grained field theory (think statistical mechanics ---> thermodynamics), general coordinate invariance is spontaneously broken in a material for the same reason that Lorentz invariance is broken because the index of refraction changes the speed of light in the material. The material provides a preferred frame of reference for interior measurements. A particle can outrun the slowed speed of light inside the material at this coarse-grained level (Cerenkov radiation). Hence special relativity in the sense of solutions is violated in this limited sense.  Similarly, general covariance is spontaneously broken in cosmology where we have a global cosmological time and a global rest frame of reference, i.e. temperature of the CMB and isotropy of the CMB Big Bang remnant respectively.
Remember, in spontaneous symmetry breaking the ground state of the system does not respect a continuous symmetry of the total exact dynamical action S of the system and its Euler-Lagrange local field equations from the Action Principle
δS = 0
in the sense of the calculus of variations generalized by the Feynman path integral for quantum theory.
Sure, if you look at the chunk of material as a unit, and look at its EM field from two inertial frames outside it in free space, Lorentz transformations will describe the fields outside the material as well as the effective currents inside the material. Indeed, Einstein was led to special relativity in 1905 by considering relative motions between a permanent magnet and a conductor.
But if you are a tiny nano-observer inside the material, it's a different story - very tricky; see e.g. Chs. 9-11 of Panofsky and Phillips, "Classical Electricity and Magnetism" (convective currents with moving media), and also Landau & Lifshitz, "Electrodynamics of Continuous Media" (missing from my library, maybe packed away). I seem to remember Landau & Lifshitz being explicit about this, as well as P.W. Anderson. By and by I will try to find the quotes.
Remember what Maxwell's equations look like in vacuum in the elegant Cartan form notation.
F = dA
dF = 0
d*F = 0
where *F is the Hodge dual of F.
roughly F has the fields E and B in vacuum
*F has the fields D and H modified by the material's electric polarizability and magnetization from magnetic moments of particle constituents
D = (permittivity)E
B = (permeability)H
(it's possible I got B & H switched in my memory)
Maxwell's equations in a material are
F = dA  purely topological independent of Lorentz group
dF = 0 (Faraday's EMF law + no magnetic monopoles)
d*F = *J (current densities) - depends on the constitutive relations of the material's lumped-parameter effective field theory
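As a concrete sketch of the constitutive relations above: these are the standard textbook relations D = (permittivity)E, H = B/(permeability), with arbitrary illustrative field values.

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
MU0 = 4e-7 * np.pi        # vacuum permeability, H/m
C = 1.0 / np.sqrt(EPS0 * MU0)  # vacuum speed of light

def medium(eps_r, mu_r):
    """Dressed lumped parameters of a material and the resulting slowed
    in-medium light speed v = c/n with n = sqrt(eps_r * mu_r)."""
    eps, mu = eps_r * EPS0, mu_r * MU0
    n = np.sqrt(eps_r * mu_r)
    v = 1.0 / np.sqrt(eps * mu)
    return eps, mu, n, v

E = np.array([0.0, 0.0, 1.0])    # applied vacuum E field, V/m (arbitrary)
B = np.array([0.0, 1e-8, 0.0])   # applied vacuum B field, T (arbitrary)

eps, mu, n, v = medium(eps_r=4.0, mu_r=1.0)
D = eps * E       # induced displacement field inside the material
H = B / mu        # material H field
u = 0.5 * (np.dot(E, D) + np.dot(B, H))  # EM energy density, J/m^3
```

The covariant form of d*F = *J is kept; only the lumped parameters `eps` and `mu` change between vacuum and material, which is exactly the "dressed parameter" rule of thumb stated earlier.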
Rovelli gives similar equations for the gravity field in terms of tetrad forms (Ch 2 "Quantum Gravity")
So the question is, can one naively apply the Lorentz transformations inside the material, where the speed of light is no longer invariant as it is in vacuum? For example, as I said above, we have the Cerenkov effect when a charge moves faster than the slowed speed of light inside the material.
Nevertheless, we formally write Maxwell's equations inside the material in what looks like a Lorentz invariant form using D & H even though strictly speaking that is not the case for interior measurements (in principle) made by Maxwell Demons - the material provides an absolute rest frame for coarse grained lumped parameter effective field theory where we can use ideas like electric permittivity and magnetic permeability. The D and H fields are coarse-grained lumped parameter solutions that spontaneously break the Lorentz group symmetry of the fundamental Maxwell action written in terms of the fundamental charges moving in vacuum. The situation here for electromagnetism is analogous to the Hubble flow in the standard model of cosmology with inflation and a cosmological constant where the CMB provides a preferred global frame of reference even though the fundamental GR equations are background independent in the sense that they are generally covariant. What is true for the local field equations need not be true for the global solutions of those equations that also involve additional boundary constraints.
See also http://en.wikipedia.org/wiki/Background_independence
There is also the Einstein hole problem, which is solved by the proper understanding of gauge invariance, in which all solutions connected by the gauge transformations of the theory form an equivalence class of "orbits," i.e. they correspond to the same objective physical reality seen from different sets of detectors in arbitrary subluminal motion. In the case of 1916 GR, the gauge gravity field transformations are T4(x), analogous to U1(x) for the Maxwell electromagnetic field.
In summary, Woodward's vague appeal to "background independence" is not sufficient reason to reject my idea of eq (8) prematurely since the payoff would be so great - the survival of life on this planet is what we are talking about.
i.e. speed of light inside the material = 1/(permittivity permeability)^1/2
in the same sense, I write equation (8).
We simply don't know yet; it's an empirical issue. Woodward is wrong that experiments prove (8) wrong, since no experiments have been done looking for anomalous gravity when the index of refraction is >> 1, especially in capacitors filled with the proper kind of negative-permittivity metamaterial. There are no such experiments yet that I know of. If Woodward knows different, let's see the references.
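The scale of the effect conjectured in eq. (8) is easy to tabulate: if the effective coupling goes as n^4, a large index of refraction amplifies the gravitational source term enormously. The n = 100 metamaterial below is hypothetical, and eq. (8) itself is the conjecture under debate here, not an established result.

```python
G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2

def effective_coupling(n: float) -> float:
    """Effective coupling G_eff = n^4 * G suggested by eq. (8)."""
    return n**4 * G

# Amplification factor G_eff / G for two illustrative indices of refraction
boost_water = effective_coupling(1.33) / G   # ordinary water, n ~ 1.33
boost_meta = effective_coupling(100.0) / G   # hypothetical n = 100 metamaterial
```

For water the boost is only about 3x, which is why no anomaly would have shown up in ordinary settings; the experimentally interesting regime is the high-n metamaterial case.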
JW: The problem is that notwithstanding that the index of refraction of metamaterials is negative, the energy density of the electromagnetic field as it propagates through a metamaterial is not negative.
JS: I beg to differ since Woodward has just violated Maxwell's equations in material media with that remark. Again this is an empirical issue that requires experiments. Also Woodward makes another error talking about propagating far fields of real photons on the light cone. I am talking about non-propagating near fields both inside and outside the light cones made out of coherent Glauber states of virtual photons.
JW: Notice that the definition of the index of refraction in equation (1) above does not require that the sign of either the permittivity or the permeability be negative for n to be negative. Negative n registers, rather, that the relationship between the group velocity of an electromagnetic wave and its constituent phase waves in a metamaterial is the reverse of that which normally obtains. Instead of the group and phase wave velocities being in the same direction as usual, in a metamaterial they are in opposite directions. When this is the case, the circular polarization of the wave is changed from right handed to left handed. So metamaterials are sometimes referred to as "left handed materials".
JS: Woodward's argument is false. While what he says is true, he pulls it out of context, and it does not logically follow that the near-field energy density is not negative. Note also that he keeps thinking of the group velocity of radiation, which has nothing at all to do with the non-propagating near fields I am talking about. Indeed, Woodward's argument here is a red herring that is not even wrong in the context I mean. Again, this is a matter for actual experiment.
JW: The practical reason why there is no reason to believe that metamaterials transform the energy densities of electromagnetic waves propagating through them from positive to negative is that were that the case, as already noted, serious violations of the law of local energy conservation would ensue.
JS: Woodward's argument here is also false as I have shown. There is no problem at all with conservation of energy. Start with the meta-material filled capacitor without any electric charge on the plates. It has internal energy Ui. We must do positive work Wi to charge the plates switching on the D field inside the metamaterial. The internal energy has now dropped to Uf where Uf < Ui because the capacitor is doing positive external work Wf on some load. Total energy is conserved
Ui + Wi(input) = Uf + Wf(output)
Uf < Ui
Wf > Wi
It's easy in principle to check this with experiments that have never been tried.
There is no obvious negative-energy instability here. The effect will saturate. What will happen is that Ui will decrease to a minimum - the capacitor will cool down, and it will become impossible to load more charge onto the plates. Remember, this requires a negative permittivity in the near field, where the material response functions have independent wave vectors k and frequencies f, not constrained by a dispersion relation f = f(k) as they are for the real far EM fields that Woodward has confused them with.
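The energy ledger above can be made concrete with a toy calculation. All numbers are hypothetical, and the negative field energy U = (1/2)CV^2 with a negative permittivity is exactly the contested assumption; the sketch only shows that the bookkeeping Ui + Wi = Uf + Wf balances, with Wf > Wi when the field energy term is negative.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def field_energy(eps_r, area, gap, voltage):
    """Energy 0.5*C*V^2 stored in the capacitor's field, with C = eps*A/d.
    Negative if eps_r < 0 (the contested metamaterial assumption)."""
    capacitance = eps_r * EPS0 * area / gap
    return 0.5 * capacitance * voltage**2

# Hypothetical negative-permittivity metamaterial-filled capacitor
U_field = field_energy(eps_r=-10.0, area=1e-2, gap=1e-4, voltage=100.0)

U_i = 1.0                   # hypothetical initial internal energy, J
W_in = 0.05                 # work done charging the plates, J
U_f = U_i + U_field         # internal energy drops if the field energy is negative
W_out = U_i + W_in - U_f    # output work fixed by conservation: Ui + Wi = Uf + Wf
```

Nothing here violates conservation: `W_out` exceeds `W_in` only because `U_f` dropped below `U_i`, which is the saturating "capacitor cools down" scenario described in the text, not a free-energy machine.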
JW: Consider an electromagnetic wave as it makes the transition from propagation in free space to propagation in a metamaterial. If the energy density of the wave propagating in the metamaterial is negative, the wave must divest itself of enough energy to change its state from positive to negative as it enters the metamaterial. That energy must go somewhere – presumably into the physical structure of the metamaterial. So, where the wave enters the metamaterial we should expect to see strong heating. And where the wave exits the metamaterial, we should expect to see the reverse process – strong cooling. No reports of this behavior, which could hardly be missed as it would be a pronounced effect, are to be found in the literature.
JS: Again Woodward confounds propagating EM far field waves with f = (c/n)|k| with my non-propagating EM near field modes in a completely different region of the f-k domain of support of the response susceptibility function Chi(k,f) of the metamaterial - in the sense of many-body Feynman propagators. Woodward is talking about apples while I am talking about oranges. All of his arguments are Red Herrings because he has not really understood my proposal - near fields not far fields.

The behavior of near fields can be counter-intuitive compared to the behavior of far fields. In terms of Feynman diagrams, Woodward is talking about the external lines while I am talking about the internal lines, with c-number condensates like Gorkov's Green's functions for superconductors and superfluids. Indeed, coherent near EM fields are analogous to superfluid order parameters, and this will affect the many-body theory for the susceptibility response correlation functions of the metamaterial. Furthermore, Woodward asserts "strong cooling" off the top of his head. Has he done an actual calculation of that number? I bet he has not. Also, has anyone tried to measure temperature variations in this context? I think not. Show me. I predict anomalous cooling will be detected, since no one has looked for it yet. Also, it may be a very weak effect for propagating waves. Remember, I am talking about a capacitor of ordinary conductors sandwiched with metamaterial "meat" of the proper design, for nearly static ELF near fields to begin with. Has anyone done any experiments of the kind I envision? I doubt it. If so, show me.

Aug 03
On Aug 3, 2011, at 1:10 PM, jfwoodward@juno.com wrote:

---------- Original Message ----------


The promised answers to Jack's questions.  They will be brief, but more can be found in the stuff that I've been writing and circulating.  First, however, a comment that is motivated by this conversation, brought home to me with particular force because of the conversation.

There's been a lot of talk about the "problems" of current physics: the lack of a quantum theory of gravity, instantaneous propagation of effects, for example, entanglement and so on, what is the correct theory of cosmology, and on and on.  It has also been suggested that all of these problems need to be fixed before we can have any real hope of making progress on the starship/stargates (S/S) problem.  I disagree.

So far I agree with your disagreement on the above. Remember I will be on a multi-day DARPA/NASA panel on precisely this topic early October, so I would like to have at least a qualitative understanding of your conceptual narrative, premises and how your particular approach connects with orthodox General Relativity.

Indeed, I am sure that if we wait around for all of the "problems of physics" to be solved before we get serious about the S/S problem, hell will have long since frozen over.  What we need to do is identify those physical principles that must be correct if the S/S problem is to be solved, and then go do experiments to see if the requisite principles actually work as needed.


Mach's principle is a good example of what I am talking about.  It has been defined and discussed ad nauseam for decades.  


Many simply dismiss it as an ill defined idea that does not merit serious consideration.  It is, however, the only "principle" that addresses the origin of inertia.  

OK you just lost me. First of all I don't know what you mean by "inertia." Let me tell you first what I mean by that term.

1) Given a massive test particle on a timelike geodesic in orthodox Einstein General Relativity (GR), a non-gravity force must be applied to push it off the timelike geodesic. This is given by Newton's covariant second law of motion in curved spacetime

D^2x^u/ds^2 = F^u/m


m = invariant rest mass of the test particle

ds = invariant proper time differential (c = 1) along the test particle's (center of mass if extended) world line.

D/ds is the covariant derivative with respect to the Levi-Civita-Christoffel connection field for parallel transport of tensors relative to the invariant proper time differential along the world line.

F^u is the 4-vector non-gravity force

e.g. the electromagnetic force on the charged test particle is the covariant

F^u(EM) = qF^u_v dx^v/ds

where F^u_v = g^u^w Fwv

Fwv is the EM field tensor

g^u^v is the metric tensor (that reduces to the Minkowski metric in a small region around a small geodesic detector in free float).

m = rest mass of test particle = inertia

you agree or disagree?
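The covariant second law quoted above can be illustrated numerically in its textbook flat-spacetime limit: a charge in a uniform electric field E along x, integrated in proper time s, reproduces hyperbolic motion. Units are c = 1, and the values of q, m, E are arbitrary illustrative choices.

```python
import math

# du^t/ds = a u^x, du^x/ds = a u^t with a = qE/m is the flat-spacetime
# form of D^2x^u/ds^2 = F^u/m for a uniform E field along x.
# Exact solution from rest: u^t = cosh(a s), u^x = sinh(a s).
q, m, E = 1.0, 1.0, 0.5
a = q * E / m

def accel(u):
    ut, ux = u
    return (a * ux, a * ut)

def rk4_step(u, h):
    """One 4th-order Runge-Kutta step for the 4-velocity (u^t, u^x)."""
    k1 = accel(u)
    k2 = accel((u[0] + 0.5 * h * k1[0], u[1] + 0.5 * h * k1[1]))
    k3 = accel((u[0] + 0.5 * h * k2[0], u[1] + 0.5 * h * k2[1]))
    k4 = accel((u[0] + h * k3[0], u[1] + h * k3[1]))
    return (u[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            u[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

u, h, steps = (1.0, 0.0), 1e-3, 2000  # start at rest, integrate to s = 2
for _ in range(steps):
    u = rk4_step(u, h)
s = h * steps
```

The integration preserves the norm (u^t)^2 - (u^x)^2 = 1, i.e. the world line stays timelike while the non-gravity force pushes the particle off its geodesic, which is exactly the "meaning #2" of inertia in the text.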

2) The origin of inertia is given by the Yukawa couplings in the standard-model quantum field theory of particles, using the Higgs vacuum "superconductor" fields, plus Frank Wilczek's quantum chromodynamics supercomputer computations of the hadronic rest masses from the confined quarks and their cloud of virtual off-mass-shell gluons and virtual quark-antiquark pairs, to dominant approximation.

Therefore, I say there is no mystery at all about the origin of inertia. It is a pseudo-problem based on Mach's obsolete 19th-century ideas of "matter," prior to quantum theory and prior to General Relativity.

And it should be obvious that you have to understand inertia if you are to solve the S/S problem.

Agreed, but please comment where we disagree on the meaning of "inertia."

Much of the confusion/contention arises from the way Einstein chose to first articulate the principle: asserting that the geometry of reality should be uniquely determined by the "material" sources contained in it.

What is meant by "material sources" has dramatically changed since Heisenberg's uncertainty principle and quantum field theory giving us virtual particles inside the vacuum as well as real particle elementary excitations of the vacuum, i.e. the "mass shell" idea based on poles of the Feynman propagators in the complex energy plane. Real particles are the poles and virtual particles are everything else in the complex energy plane that contributes to the propagator vacuum correlation functions.

Furthermore, ordinary near EM fields of power lines, electric motors, transformers et al. are macro-quantum coherent Glauber states of off-mass-shell virtual photons off the classical light cone, as is seen by a Fourier transform of the electrostatic Coulomb field of a point charge in its rest frame, to take the most elementary example. This is distinct from the random zero-point virtual photons of hf/2 per mode in the Casimir force effect, for example.

Every time one has a response susceptibility in a soft condensed matter many-body problem, the domain of support of the response functions in the wave vector k - frequency f plane comes from virtual quasiparticles and virtual collective modes that are real only on the "mass shell," e.g.

f = csk

cs = speed of sound in a crystal lattice


Similarly for the vacuum response functions!
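A minimal numerical sketch of this support structure, using a hypothetical Lorentzian lattice response (the functional form and the values of cs and gamma are illustrative assumptions of mine, not from the text):

```python
import numpy as np

# Hypothetical Lorentzian response of a lattice mode (illustrative only):
# chi(k, w) = 1 / (w^2 - (cs*k)^2 + i*gamma*w)
# Real (on-shell) phonons live on the mass shell w = cs*k; the response
# nevertheless has support at ALL (k, w) -- that off-shell support is the
# "virtual quasiparticle" content of the susceptibility.

cs = 5000.0      # assumed speed of sound, m/s (typical crystal scale)
gamma = 1.0e9    # assumed damping rate, 1/s

def chi(k, w):
    return 1.0 / (w**2 - (cs * k)**2 + 1j * gamma * w)

k = 1.0e7                          # one wave vector, 1/m
w = np.linspace(1e9, 1e11, 200000)
mag = np.abs(chi(k, w))

w_peak = w[np.argmax(mag)]         # response peaks on the mass shell...
print(w_peak, cs * k)              # ~5e10 both

off_shell = abs(chi(k, 0.5 * cs * k))   # ...but is nonzero off shell too
print(off_shell > 0)
```

The peak sits (up to damping corrections) at w = cs*k, while the nonzero off-shell magnitude is the virtual-mode contribution the text is pointing at.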

The important point here is that Einstein's strong equivalence principle (SEP) demands that virtual particles directly bend spacetime just as real particles do. This is proved by the dark energy acceleration of the expansion rate of our observable universe. Most physicists have failed to fully understand this, since the discovery of the anomalous redshifts of the Type Ia supernovae is only about 12 years old. The diffusion time of new physics is obviously too long here.

The problem with this is that the field equations of GRT are really those of a local field theory (that can be applied to cosmological situations), so it really shouldn't be surprising that they have "non-Machian" solutions.  The historical record then follows. . . .

OK. In fact, orthodox GR of 1916 is simply the local gauge theory of the Abelian 4-parameter globally rigid translation group T4, whose infinitesimal generators are the global total energy and linear momentum operators of the non-gravity matter fields in globally flat Minkowski spacetime, where they are well defined.

The four Cartan 1-form tetrad fields e^I describe the motion of non-rotating local inertial frames along the geodesics of the curved spacetime. There are even Newman-Penrose complex null tetrads for the null geodesic light cones, as well as the more familiar ones for timelike geodesics corresponding to massive detectors with "inertia."

The non-Minkowski part of the tetrad fields in 1916 GR are the compensating gauge potentials that keep the total global dynamical actions of the non-gravity matter fields (leptons, quarks, gauge bosons of EM-weak-strong interactions) invariant under the local T4(x) that contains T4 as a subgroup.

The spin-connection for lepton and quark spinors (Weyl, Dirac, Majorana) comes from the tetrads, as shown in detail by Rovelli in Ch. 2 of his online lectures "Quantum Gravity." All of this is for the plain vanilla minimal curvature-only 1916 GR.

To include a torsion field, one must locally gauge the 10-parameter Poincare group including the six space-time rotations in addition to the above four translations.

To go even further, include the even larger conformal de Sitter group and spontaneously break it down to the de Sitter group with the cosmological constant /\ that connects us to the 't Hooft-Susskind world hologram picture via Tamara Davis's 2003 Ph.D. thesis (online).

The vacuum superconductor Glauber coherent states (order parameters) of course have physics in them that is yet to be properly explored. There are some papers on this in the literature.

From the S/S point of view, another approach is possible.  You can ask: what is the essential physical content of Mach's principle that's relevant to the S/S problem?  

Exactly my question.

Can it be restated in such a way that you can ignore all of the cosmological complications and leave them to be debated by cosmologists?

I hope so. The scales are so different that it's plausible they are decoupled sufficiently for practical applications.

I argue that the answer to these questions is yes.  

Extraordinary claims require extraordinary proof.

As far as local physics is concerned (and the S/S problem is one of local physics), the physical content of Mach's principle is the assertion that inertial reaction forces are due to the gravitational action of chiefly distant matter.  

You have lost me completely with this remark. It has no meaning for me. How do you falsify it? Also, what do you even mean by "distant matter"? This premise is too vague for my mind, and that's why my instinct is that your approach rests on shaky ground, at least 9.0 on the Richter scale! ;-)

Look at Tamara Davis's conformal time diagram of the entire history of our universe.

Where and when is the "distant matter" on this diagram? Do you mean the entire space-time bounded by our past light cone and our past particle horizon? If so, you do not allow Wheeler-Feynman (Hoyle-Narlikar-Cramer) advanced signal "transactions."

This is already implied by the Equivalence Principle which says, in effect, gravitational forces are the same as "fictitious" forces [inertial forces], so you can geometrize them away.  My version of Mach's principle simply says, not only are fictitious and gravitational forces the same in that they are geometrizable, they are the same because they are all gravitational forces.

Again these English words are too vague for me. I don't understand how you can relate that to propulsion.

Sure, in the simplest metric field description Newton's gravity force is actually the non-gravity reaction force needed to keep the static LNIF detector still in curved spacetime; that's the meaning of the metric component

g00 = 1 + 2(Newton's potential per unit test mass)/c^2


Newton's potential = -GM/r

r > 2GM/c^2
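As a concrete illustration of this weak-field formula (my numbers, evaluated at the Earth's surface; not part of the original exchange):

```python
# Weak-field metric component g00 = 1 + 2*Phi/c^2 with Phi = -G*M/r,
# valid outside the horizon radius r = 2GM/c^2.  Illustrative evaluation
# at the Earth's surface (standard constants, rounded).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_earth = 5.972e24   # kg
r_earth = 6.371e6    # m

phi = -G * M_earth / r_earth          # Newtonian potential per unit mass
g00 = 1.0 + 2.0 * phi / c**2

r_s = 2.0 * G * M_earth / c**2        # "horizon" radius, ~9 mm for Earth
print(g00)       # 1 minus ~1.4e-9: static clocks at the surface run slow
print(r_s)
```

The deviation of g00 from 1 is the tiny gravitational time dilation a static (LNIF) detector experiences while its support supplies the non-gravity reaction force.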

If that were all one could do to "localize" Mach's principle, you probably couldn't make much progress on the S/S problem.  But there is more that can be done.  It is possible, indeed easy, to show that phi must be equal to c^2 if inertial reaction forces are gravitational.  (See Sciama's and Nordtvedt's calculations appended to Mach's Principle and Mach Effects.)  And since m = E/c^2, m phi = E; and now you have more than a statement about the nature of inertial reaction forces.  You have a statement of the origin of inertia.  And you can use this stuff to go looking for transient effects that you might be able to use to solve the S/S problem.  That's what the "Mach effects" part of the recent piece circulated is all about.
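A rough back-of-the-envelope version of the Sciama-type relation phi ~ c^2 can be sketched numerically. The cosmological inputs below are crude assumptions of mine (not Sciama's or Woodward's actual numbers); the only point is that G*M/R lands within an order of magnitude of c^2:

```python
# Order-of-magnitude check of phi = G*M/R ~ c^2 using crude, assumed
# cosmological numbers: mass-energy within the Hubble radius ~ 1e53 kg,
# Hubble radius ~ 1.3e26 m.  This is NOT precision cosmology.
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
M = 1.0e53        # kg, assumed mass within the Hubble radius
R = 1.3e26        # m, assumed Hubble radius

phi = G * M / R
ratio = phi / c**2
print(ratio)      # O(1): ~0.6 with these crude inputs
```

That the ratio comes out of order unity is the coincidence Sciama-style arguments lean on; whether it is more than a coincidence is exactly what is in dispute in this exchange.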

You lost me completely in the above paragraph. I need to see equations/diagrams etc. I think all you are saying is what I just said above? In any case I can't connect your dots to propellantless propulsion for the Star Ship Study Panel. Not yet. You need to show more.

Now, you can wait around for all of the "critical problems of physics" to be solved, or at least the quantum theory of gravity to be invented, or, like Jack, you can speculate about coupling coefficients and metamaterials, or, like me, you can try to identify the real physical content of Mach's principle and look for transient effects, and then you can go do EXPERIMENTS to see if you're on the right track.  I'm not interested in seeing hell freeze over.  And in any event, I know that that's far beyond my realistic life expectancy.  So I vote for trying to figure out the physics without worrying about grand problems, and then doing experiments.  Perhaps, being an experimentalist, I am prejudiced.  But I don't think much progress on a problem of this sort is possible without experiments.

I don't understand "transient effects." I don't see how that connects to warp drive in the sense of Einstein's GR, where to use Puthoff's "metric engineering" we need to manipulate the ship's timelike geodesic from the ship itself with small amounts of power, many powers of ten smaller than Jupiter's mass-energy. In short, we need to reverse engineer what we see flying saucers actually do above our nuclear weapons bases etc. with impunity. We don't want to screw around with the mass of the ship because of the Anthropic Principle's "Just Six Numbers" (Martin Rees). Of course, if we can make negative mass in a small sub-mass of the ship without blowing apart the universe in an Ice-9 vacuum "strangelet" (Martin Rees, "Our Final Hour"), good.

Off the soap box again, and on to Jack's questions:

On Aug 1, 2011, at 9:40 PM, jfwoodward@juno.com wrote:

I've been off doing other things, so I've not caught up with all of the conversation.  Let me say, though, that all of Jack's questions (and some of his comments) are addressed in the Stargates paper.

Maybe I missed it. Please resend it.

I'll resend the version that I'm working on for the book as soon as it's ready.

I guess what I am asking is for you to give here a synopsis of

1) what precisely you mean by Mach's principle?

Inertial reaction forces are not only "like" gravitational forces in that they display "fictitious" behavior (in the sense of Adler, Bazin, and Schiffer), they are gravitational forces.  For this to be true, and for Newton's third law to be true everywhere and everywhen, it must be that the total scalar gravitational potential (non-localizable under the EEP) is a locally measured invariant equal to the square of the speed of light.

I don't understand your last sentence here.

1a) do you have a mathematical formulation consistent with Einstein's mainstream equations of gravity?

Yes, see above and Stargates, or Making the Universe Safe for Historians, or the piece in progress, Mach's Principle and Mach Effects.

2) what are the equations for propulsion?

The equations are tedious to write out here, and can be found in any of a number of peer reviewed publications.  So I'll just verbally synopsize the effects.

No, not good enough. I need to see YOUR equations and understand their premises before I can endorse your approach to DARPA/NASA and several billionaires keen on interstellar space-travel. Please resend professional details ASAP as I seem to have lost them in the noise of email traffic.

There are two Mach effects.  The normally larger effect occurs at twice the frequency of a signal that is producing a sinusoidal acceleration of a system in which internal energy is fluctuating at the same frequency.  This fluctuation is "rectified" by adding another double frequency acceleration to the part where the mass fluctuation is occurring.  The formal equations for all this can be found in any of a number of peer reviewed publications.
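The "rectification" arithmetic can be illustrated with a toy calculation. This is only the trigonometric identity behind the claim (the product of two sinusoids at the same frequency has a DC term), not Woodward's actual Mach-effect equations, which he says are in the peer-reviewed papers:

```python
import numpy as np

# Toy illustration of "rectification": an oscillating mass fluctuation
# dm(t) at frequency 2w multiplied by an applied acceleration a(t) at
# the same 2w gives a product "force" with a nonzero time average
# proportional to cos(phase) -- a DC, thrust-like term.
w = 2 * np.pi * 1.0                      # base angular frequency, arbitrary units
t = np.linspace(0.0, 100.0, 2_000_000)   # many full periods for a clean average

dm0, a0, phase = 1.0, 1.0, 0.0           # illustrative amplitudes, in phase
dm = dm0 * np.cos(2 * w * t + phase)     # mass fluctuation at 2w
a = a0 * np.cos(2 * w * t)               # second acceleration at 2w

F = dm * a                               # instantaneous "force"
F_dc = F.mean()                          # time-averaged (rectified) term
print(F_dc)   # ~ dm0*a0/2 * cos(phase) = 0.5 here
```

With the two signals 90 degrees out of phase the average would vanish, which is why phase control between the fluctuation and the second acceleration is the whole game in such a scheme.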

Again, show me. OK, so you claim to be able to rectify. But what mass is actually fluctuating? How much is it? I gather you are not changing the rest masses of real electrons and real atomic nuclei? (Just Six Numbers) You are messing with their chemical binding energies? What about making an explosion with that Sorcerer's Apprentice approach?

The second Mach effect, normally vanishingly small, is always negative.

Show me. Again, what is this mass-energy you are making negative with w > -1 so that you get anti-gravity?

In special circumstances, it can be made large using periodic signals to excite both effects.  A crude example of how to do this is in "TWISTS of Fate: Can we make traversable wormholes in spacetime?"  Alas, the published version is marred by serious post copy editing misprints in some of the key equations.  But there you will find a very crude numerical integration that shows how extreme exoticity might be induced.

I can't understand your last paragraph here. I need to see the nitty-gritty ASAP.

3) is your drive a true warp drive?

Yes.  The drive based on the second Mach effect is a true means to produce the Jupiter mass of exotic matter needed for a warp drive.

You are claiming to make a Jupiter mass of exotic matter in your laboratory with microwaves? With ELF? .... Without destroying the Earth? Pardon my skepticism - extraordinary claims ...

3a) are the objects inside the ship weightless?

With respect to, say, a nearby Earth or suchlike, yes.  They are weightless in the sense you mean.  [Strictly speaking, for external observers, the contents of a warp bubble are effectively inertia-less as they can be accelerated to arbitrarily large velocities (including FTL).  But inertialess encompasses the sense of weightless that I think you intend.]

OK, but again these are merely empty claims without any evidence I have seen.

3b) are you controlling the shape of the timelike geodesic of the center of mass of the ship?

Configured as a warp drive, yes.  My preferred embodiment, however, is the absurdly benign wormhole where stabilizing the throat is the foremost issue of concern.

I fully agree a good trick if you can do it. How do you fix the distant mouth and cosmic global time of the other end of the wormhole?

In a real warp drive the inertial mass of the ship over-all is irrelevant - it drops out of the geodesic equation for the center of mass of the ship. It is the intrinsic geometry at the ship that is being manipulated from the ship - that's what "metric engineering" means.

Yes, I know.  That's been obvious since the '90s.  See the above remark on weightlessness.

4) How do you compute the energy needed for a real large star ship as big as a modern day super-carrier?

The basic equations are fairly straight-forward.  Applying them to practical situations should not prove insuperable.  So far, however, they have been applied to experimental configurations.

Again I need to see details for DARPA/NASA panel. More unsubstantiated extraordinary claims.

Note - in the retrocausal world hologram theory with our future event horizon as the hologram screen - that is arguably a Wheeler-Feynman version of historically vague Mach's Principle.

Remember "Killing Time"?  Yes, I have known since the very early '90s (as others on this circulation can attest) that Mach's principle entails W-F action at a distance, and that requires a future absorber.  I'm ambivalent about whether it gets called a hologram.

OK, so again this brings us back to the Tamara Davis's picture above. Where and when is your distant matter?

Also, how does the distant matter connect with your local transient effect - is it the Dirac radiative reaction that Wheeler-Feynman started with? So do you need the future light cone intersection with our future horizon or not?

Sorry about not trying to put equations into the above with this text editor.  But the equations really are in essentially all of the formally published stuff.  :-)

I don't have access to that published stuff; please send it. Also, you can screen capture the equations, as I have been doing, if you have digital copies.



I gather he has not read it with care yet.  I am sure that if (and when?) he does, he will find answers to most of his concerns.

Perhaps the most fundamental is the issue of mass and the Higgs process.  The Higgs process confers rest mass on rest massless particles in the standard model.  It is NOT an explanation of the origin of mass.  

Sure it is. The rest mass of the leptons and quarks comes from the coupling to the macro-quantum coherent c-number QUANTUM VACUUM Higgs field. That c-number field is a Glauber coherent state of VIRTUAL Higgs-Goldstone bosons from a spontaneously broken continuous symmetry (SU(2) weak) at the moment of inflation.

Whether or not we can excite REAL Higgs bosons is not relevant to the origin of the quark and lepton rest masses.

That, indeed, was Wilczek's point in his papers, talks, and book.  The bulk of the masses of the nucleons is attributable to the energies of the real gluons that bind the quarks.  As W puts it, Einstein's "second" law: m = E/c^2.

I think you have this confused, Jim. Where do you get real gluons binding the quarks? That is wrong. The gluons are all virtual, just like the virtual photons that bind the electron and the proton in the hydrogen atom.

In fact, it's the zero point vacuum plasma fluctuation energy of virtual gluons and also virtual quark-antiquark pairs that gives the hadron rest masses. If you look at the quantum numbers you cannot use real gluons. It's like saying there is a real antineutrino and a real electron inside a neutron BEFORE it decays into a proton, an electron, and an antineutrino.

There is a tricky point here. When you try, and fail, to separate, say, a real quark-antiquark pair in a meson, you have a kind of vortex string of chromodynamic fields. Just like the electromagnetic near field, the vortex string is made out of coherent states of virtual gluons. The effective string potential ~ (separation of the real quark from the real antiquark).

So you have both random zero point vacuum fluctuations and coherent virtual gluon Glauber states "inside" the hadron. But I am pretty sure that thinking of real on-mass-shell gluons inside the hadron is not correct. Of course the gluons have zero rest mass.

The point is that real gluons are like real photons - they are far field radiating energy to infinity. The hadron needs NON-RADIATING spatially confined near fields of gluons (i.e. Glauber states of virtual gluons).

The standard model (as explained already in MUSH [1995] and Stargates) is not suitable for asking about the gravitational properties of elementary particles for the obvious reason that it doesn't include gravity.  The ADM model, however, is an EXACT general relativistic model that does include gravity.

So what? How does that give propellantless propulsion?

There are two kinds of propellantless propulsion:

one with g-force (not warp drive) i.e. still impulse drive like a rocket even though no real particles are ejected

the other is the true zero g-force warp drive I am interested in.

Using entangled laser beams, my design in Fig 9.1 of David Kaiser's "How the Hippies Saved Physics" may work after all.
See the pdf I just uploaded in Resources Quantum Computing Aug 2, 2011.
See the pdf uploaded today to Library Resources Advanced Propulsion on this web page.

I don't know how much progress they made since then.
"The primary objective of this effort is to study the signaling potential of quantum information processing systems based on quantum entanglement. ...
The nonlocality of the correlations of two particles in quantum entanglement has no classical analog. It allows coherent effects to occur instantaneously in spatially separate locations. The question naturally arises as to whether a more general formulation of QT could provide a basis for superluminal communications. This issue has recently been the subject of considerable debate in the open literature.
There are basically two schools of thought: one, which precludes this possibility (based, for example, on conflicts with the theory of special relativity), and one which allows it under special provisions. We will discuss these issues in some detail in the sequel. First, however, we briefly highlight a few of the more significant new findings in the growing experimental and theoretical evidence of superluminal effects.
A conference on superluminal velocities took place in June 1998 in Cologne [21]. Theoretical and experimental contributions to this topic focused primarily on evanescent mode propagation and on superluminal quantum phenomena. The issues of causality, superluminality, and relativity were also examined. In the area of electromagnetic propagation, two exciting developments were addressed. Nimtz reported on experimental measurements of superluminal velocities achieved with frequency band-limited signals carried by evanescent modes [22]. Specifically, he timed a microwave pulse crossing an evanescent barrier (e.g., undersized waveguides, or periodic dielectric heterostructures) at 4.7c. He demonstrated that, as consequence of the frequency band limitation of information signals, and if all mode components are evanescent, an actual signal might travel faster than the speed of light. Capelas de Oliveira and Rodrigues introduced the intriguing theory of superluminal electromagnetic X-waves (SEXW) defined as undistorted progressive waves solutions of the relativistic Maxwell equations [23]. They present simulations of finite aperture approximations to SEXW, illustrate the signaling mechanism, and discuss supporting experimental evidence.
What are the key arguments put forward against the possibility of superluminal signaling? Chiao and Steinberg analyze quantum tunneling experiments and tachyon-like excitations in laser media [24]. Even though they find the evidence conclusive that the tunneling process is superluminal, and that tachyon-like excitations in a population-inverted medium at frequencies close to resonance give rise to superluminal wave packets, they argue that such phenomena can not be used for superluminal information transfer. In their view, the group velocity can not be identified as the signal velocity of special relativity, a role they attribute solely to Sommerfeld’s front velocity. In that context, Aharonov, Reznik, and Stern have shown that the unstable modes, which play an essential role in the superluminal group velocity of analytical wave packets, are strongly suppressed in the quantum limit as they become incompatible with unitary time evolution [25].
At the Cologne symposium [21] Mittelstaedt reviewed the arguments that had been put forward in recent years in order to show that non-local effects in quantum systems with EPR-like correlations can not be used for superluminal communications. He demonstrated that most of these arguments are based on circular proofs. For instance, a “locality principle” can not be used to exclude superluminal quantum signals and to justify quantum causality, since the locality principle itself is justified by either quantum causality or an equivalent “covariance postulate” [32]. In a similar vein, van Enk shows that the proof given by Westmoreland and Schumacher in [33] that superluminal signaling violates the quantum no-cloning theorem is in fact incorrect [34]. Hegerfeld uses the formalism of relativistic quantum mechanics to show that the wave function of a free particle initially in a finite volume instantaneously spreads to infinity and, more importantly, that transition probabilities in widely separated systems may also become nonzero instantaneously [35]. His results hold under amazingly few assumptions (Hilbert space framework and positivity of the energy). Hegerfeld observes that, in order to retain Einstein causality, a mechanism such as “clouds of virtual particles or vacuum fluctuations” would be needed. To conclude this review, we note a recent suggestion of Mittelstaedt [36]. If the existence of superluminal signals is assumed ab initio (viz. [22] and [35]), and consequently a new space-time metric (different from the Minkowskian metric) is adopted, all the paradoxes and difficulties discussed above would immediately disappear.
Let us now examine EPR-based superluminal schemes. Furuya et al analyze a paradigm proposed by Garuccio, in which one of the photons of a polarization-entangled EPR pair is incident upon a Michelson interferometer in which a phase-conjugation mirror (PCM) replaces one of the mirrors [26]. The sender (located at the source site) can superluminally communicate with a receiver (located at the detector site), based on the presence or absence of interferences at the detector. The scheme uses the PCM property that a reflected photon has the same polarization as the incident photon (contrary to reflection by an ordinary mirror), allowing to distinguish between circular and linear polarization. In a related context, Blaauboer et al also proposed [27] a connection between optical phase conjugation and superluminal behavior. Furuya et al prove that Garuccio’s scheme would fail if non coherent light is used, because then the interferometer could not distinguish between unpolarized photons prepared by mixing linear polarization states or by mixing circular polarization states. They admit, however, that their counterproof would not apply to a generalized Garuccio approach, which would use coherent light states. Finally, in terms of criticism, let us mention the recent article by Peres [28], where criteria that prevent superluminal signaling are established. These criteria must be obeyed by various operators involved in classical interventions on quantum systems localized in mutually spacelike regions.
What are the arguments in favor of superluminal information transfer? Gisin shows [29] that Weinberg’s general framework [30] for introducing nonlinear corrections into quantum mechanics allows for arbitrary fast communications. It is interesting to note that, in a recent book [31], Weinberg himself states: “I could not find a way to extend the nonlinear version of quantum mechanics to theories based on Einstein’s special theory of relativity (...) both N. Gisin in Geneva and my colleague Joseph Polchinsky at the University of Texas independently pointed out that (...) the nonlinearities of the generalized theory could be used to send signals instantaneously over large distances”.
In this paper, we have presented recent progress achieved at CESAR/ORNL in the area of QT. We have also highlighted some of the formidable theoretical challenges that must be overcome if an application of this technology to communications is to become possible. The feasibility question is, in our minds, still open. To summarize, we now succinctly indicate our near-term proposed road map.
From a theory perspective, we will focus our attention on two recent proposals for superluminal communications. Greenberger has demonstrated [37] that if one can construct a macroscopic Schrodinger cat state (i.e., a state that maintains quantum coherence), then such a state can be used for sending superluminal signals. His scheme assumes that the following two requirements can be realized. First, it should be possible to entangle the signal-transmitting device with the signal itself, thereby constructing a GHZ state. Second, that non-unitary evolution can be established and controlled in a subset of the complete Hilbert space. This latter property has already been demonstrated successfully in several down conversion experiments. Greenberger uses an optical phase shifter as model for his signaling device. We believe that as of this date better alternatives are available. The second Gedankenexperiment we intend to examine was introduced by Srikanth [37]. His proposed method uses a momentum-entangled EPR source. Assuming a pure ensemble of entangled pairs, either position or momentum is measured at the sender. This leaves the counterpart in the EPR pair as either a localized particle or a plane wave. In Srikanth’s scheme, the receiver distinguishes between these outcomes by means of interferometry. Since the collapse of the wavefunction is assumed to be instantaneous, superluminal signal transmission would be established.
We intend to explore possible experimental realizations of the above paradigms. We will also continue to focus on cascaded type-II OPDC, with emphasis on walk-off, optical collimation, optimal generation efficiency, and maximal entanglement. Special attention will also be given to multi-photon entanglement."

Impressionistic searches for the Golden Fleece of entanglement signaling (without the classical signal key) both within and beyond orthodox quantum theory. Stapp accepts Daryl Bem's "feeling the future" data as indicating the latter - we all agree about that. I still don't want to leave any stone unturned on the former, though I am fighting a rear guard action in that case.
The mainstream papers assume no entanglement signaling and deduce the bounds on the fidelity of imperfect copying from that. This may be dangerous circular logic.
Of course no one doubts that only mutually orthogonal states can be perfectly cloned with linear unitary operators.
Also when they use non-unitary operators in open quantum computers they want to use error-correction codes to restore the original bit pattern rather than trying to use the non-unitarity as a value added to get entanglement signaling.
Dan Greenberger wrote that Schrodinger Tigers allow superluminal entanglement signals.
D. Greenberger, "If one could build a macroscopical Schrodinger cat state, one could communicate superluminally", Physica Scripta T76, 57-60 (1998).


"A deep-rooted concept in quantum theory is the linear superposition principle which follows from the linearity of the equations of motion [1]. Linear superposition of states is the key feature which elevates a two-state system into a qubit. The possibility of exploiting greater information processing ability using qubits is now being investigated in the emerging field of quantum computation and information technology [2]. Further, linear evolution makes certain operations impossible on arbitrary superpositions of quantum states. For example, one of the simplest, yet most profound, principles of quantum theory is that we cannot clone an unknown quantum state exactly [3,4]. Indeed, stronger statements may be made with stronger assumptions: unitarity of quantum evolution requires that even a specific pair of non-orthogonal states cannot be perfectly copied [5]. If we give up the requirement of perfect copies then it is possible to copy an unknown state approximately by deterministic cloning machines [6-11]. Recent work shows that non-orthogonal states from a linearly independent set can be probabilistically copied exactly [12,13] and can evolve into a superposition of differing numbers of copy states [14].

Notwithstanding the above, we might ask: what could go wrong if one were to clone an arbitrary state? In 1982 Herbert argued that the copying of half of an entangled state, such as by a laser amplifier, would allow one to send signals faster than light [15]. That same year the no-cloning theorem demonstrated the flaw in this proposed violation of causality [3,4]. Thus, the linear evolution of even non-relativistic quantum theory and special relativity were not in contradiction. In fact, one can go a step further and ask if the no-signalling condition (the impossibility of instantaneous communication) lies behind some of the basic axiomatic structure of quantum mechanics [16]. It turns out that the achievable fidelity of imperfect cloning follows from this no-signalling condition [17,18]. Further, it can be shown that even probabilistic exact cloning cannot violate the no-signaling condition."

Quantum deleting and signalling
Arun K. Pati, Samuel L. Braunstein
Physics Letters A 315 (2003) 208-212 (received 26 May 2003; received in revised form 30 June 2003; accepted 1 July 2003)

"It is known that if one could clone an arbitrary quantum state then one could send signals faster than the speed of light. Here, we show that deletion of an unknown quantum state for which two copies are available would also lead to superluminal signalling. However, the (Landauer) erasure of an unknown quantum state does not allow faster-than-light communication."
So far, so good, for Stapp's theorem (based only on linear unitarity and the Born rule) that orthodox quantum theory does not permit pure entanglement signals without a classical signal key to unlock the nonlocally encoded message. The classical key restores luminal (or subluminal) signaling. However:
"We conclude with a remark that classical information is physical but has no permanence. By contrast, quantum information is physical and has permanence (in view of the recent stronger no-cloning and no-deleting theorems in quantum information [26]). Here, permanence refers to the fact that to 'duplicate' quantum information the copy must have already existed somewhere in the universe and to 'eliminate' it, it must be moved to somewhere else in the universe where it will still exist. It would be interesting to see if the violation of this permanence property of quantum information can itself lead to superluminal signalling. That it should be true is seen here partly (since deleting implies signalling). It remains to be seen whether negating the stronger no-cloning theorem leads to signalling."
Quantum Copying: Beyond the No-Cloning Theorem
Vladimir Buzek, Mark Hillery (submitted 20 Jul 1996)
We analyze to what extent it is possible to copy arbitrary states of a two-level quantum system. We show that there exists a "universal quantum copying machine", which approximately copies quantum mechanical states in such a way that the quality of its output does not depend on the input. We also examine a machine which combines a unitary transformation with a selective measurement to produce good copies of states in a neighborhood of a particular state. We discuss the problem of measurement of the output states.
The Wootters-Zurek no-cloning theorem forbids the copying of an arbitrary quantum state. If one does not demand that the copy be perfect, however, possibilities emerge. We have examined a number of these. A quantum copying machine closely related to the one used by Wootters and Zurek in the proof of their no-cloning theorem copies some states perfectly and others poorly. That is, the quality of its output depends on the input. A second type of machine, which we called a universal quantum copying machine, has the property that the quality of its output is independent of its input. Finally, we examined a machine which combines a unitary transformation and a selective measurement to produce good copies of states in the neighborhood of a particular state.
A problem with all of these machines is that the copy and original which appear at the output are entangled. This means that a measurement of one affects the other. We found, however, that a nonselective measurement of one of the output modes will provide information about the input state and not disturb the reduced density matrix of the other mode. Therefore, the output of these xerox machines is useful.
There is further work to be done; we have only explored some of the possibilities. It would be interesting to know, for example, what the best input-state independent quantum copying machine is. One can also consider machines which make multiple copies. Does the quality of the copies decrease as their number increases? These questions remain for the future.
They do not mention entanglement signaling in the above paper.
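The universality claim in the Buzek-Hillery abstract can be checked directly. Below is a sketch of the standard symmetric 1-to-2 cloner (my encoding of the unitary's action on the computational basis; the specific index labels are mine, not taken from the paper): both output clones have fidelity 5/6 with the input, independent of which input state you feed in.

```python
import numpy as np

def basis(i, dim=8):
    v = np.zeros(dim, dtype=complex)
    v[i] = 1.0
    return v

# Action of the cloning unitary on |0>|0>|0> and |1>|0>|0>;
# basis index = 4*q1 + 2*q2 + ancilla (q1, q2 are the two clones).
out0 = np.sqrt(2/3)*basis(0b000) + np.sqrt(1/6)*(basis(0b011) + basis(0b101))
out1 = np.sqrt(2/3)*basis(0b111) + np.sqrt(1/6)*(basis(0b010) + basis(0b100))

# Arbitrary (complex) input qubit a|0> + b|1>, extended by linearity.
rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j*rng.normal(size=2)
n = np.sqrt(abs(a)**2 + abs(b)**2)
a, b = a/n, b/n

psi_out = (a*out0 + b*out1).reshape(2, 2, 2)
rho1 = np.einsum('ijk,ljk->il', psi_out, psi_out.conj())   # clone 1
rho2 = np.einsum('jik,jlk->il', psi_out, psi_out.conj())   # clone 2

psi_in = np.array([a, b])
f1 = (psi_in.conj() @ rho1 @ psi_in).real
f2 = (psi_in.conj() @ rho2 @ psi_in).real
assert np.isclose(f1, 5/6) and np.isclose(f2, 5/6)   # input-independent quality
```

Each clone comes out as (2/3)|psi><psi| + (1/6)I, which is what "the quality of its output does not depend on the input" means concretely.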
Quantum copying: A network
V. Buzek, S.L. Braunstein, M. Hillery, D. Bruss
(Submitted on 24 Mar 1997)
We present a network consisting of quantum gates which produces two imperfect copies of an arbitrary qubit. The quality of the copies does not depend on the input qubit. We also show that for a restricted class of inputs it is possible to use a very similar network to produce three copies instead of two. For qubits in this class, the copy quality is again independent of the input and is the same as the quality of the copies produced by the two-copy network.
It is possible to construct devices which copy the information in a quantum state as long as one does not demand perfect copies. One can build either a duplicator, which produces two copies, or a triplicator, which produces three. Both of these devices can be realized by simple networks of quantum gates, which should make it possible to construct them in the laboratory.
There are a number of unanswered questions about quantum copiers. Perhaps the most obvious is which quantum copier is the best. Recently it has been shown [5] that the UQCM described in this paper is the best quantum copier able to produce two copies of the original qubit. It is not known, however, how to construct the best quantum triplicator (or, in general, a device which will produce multiple copies, the so-called multiplicator). There exist bounds on how well one can do, which follow from unitarity, but they are not realized by existing copiers [8]. This is at least partially the fault of the bounds which are probably lower than they have to be.
A quantum copier takes quantum information in one system and spreads it among several. It would be nice to be able to see qualitatively how this happens, but, at the moment, it is not clear how to do this. The problem is that we are interested in how only part of the information flows through the machine: what matters is only the information in the input state, not that in the other two input qubits, which enter the machine in standard states (the so-called "blank pieces of paper"), but it seems difficult to separate the effects of the two in the action of the machine.
This issue is connected to another, which is how to best use the copies to gain information about the input state. In a previous paper we showed how nonselective measurements of a single quantity on one of the copies can be used to gain information about the original and leave the one-particle reduced density matrix of the other copy unchanged. An interesting extension of this would be to ask, for a given number of copies, how much information we can gain about the original state by performing different kinds of measurements on the copies.
It is clear that quantum copying still presents both theoretical and experimental challenges. We hope to be able to address some of the issues raised by the questions in the preceding paragraphs in future publications.
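The point made in both papers above, that a nonselective measurement on one copy leaves the other copy's reduced density matrix untouched, is in fact completely general: tracing out the measured subsystem erases any record of the (nonselective) measurement. A small sketch under my own conventions, using a random three-qubit pure state in place of an actual copier output:

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j*rng.normal(size=8)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)  # (a,b,c, a',b',c')

def reduced(r, keep):
    """One-qubit reduced density matrix of qubit `keep` (0, 1, or 2)."""
    subs = {0: 'abcdbc->ad', 1: 'abcaec->be', 2: 'abcabf->cf'}
    return np.einsum(subs[keep], r)

rho1_before = reduced(rho, 1)

# Nonselective measurement on qubit 0 in a random orthonormal basis {v, w}:
# rho -> P_v rho P_v + P_w rho P_w  (projectors acting on qubit 0 only).
v = rng.normal(size=2) + 1j*rng.normal(size=2)
v /= np.linalg.norm(v)
w = np.array([-v[1].conj(), v[0].conj()])      # orthogonal partner of v
rho_after = np.zeros_like(rho)
for u in (v, w):
    P = np.outer(u, u.conj())
    rho_after += np.einsum('ia,abcdef,dj->ibcjef', P, rho, P)

# The unmeasured qubits are undisturbed ...
assert np.allclose(reduced(rho_after, 1), rho1_before)
assert np.allclose(reduced(rho_after, 2), reduced(rho, 2))
# ... while the measured qubit itself generally is disturbed.
assert not np.allclose(reduced(rho_after, 0), reduced(rho, 0))
```

This is also another way of seeing the no-signaling result quoted at the top: the "other mode" cannot tell that a nonselective measurement happened at all.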
Quantum communication can send both classical and quantum information. In this way, we can implement pure quantum communication for directly sending classical information, Ekert's quantum cryptography [9], and quantum teleportation [10-13] without the help of a classical communications channel.
Bell's inequality [19] continues to be examined [20]. If Bell's inequality can be proved and we can obtain long-lived EPR pairs protected against decoherence in the future, pure quantum communication implies that sending information can be faster than light! We believe that this will excite many studies of related issues.
In summary, I design a simple way of distinguishing non-orthogonal quantum states with perfect reliability using only quantum CNOT gates. We emphasize that the key to distinguishing the two sets of states is the difference between the discrete and the two convergent values of the statistical distribution. We expect that conditional quantum distinguishability will be proved in experiments and that these pure quantum communications can be implemented."
He talks of perfect reliability, and he is probably wrong. However, he is not the only one: a growing group of physicists is using imperfect measurements to distinguish non-orthogonal quantum states.
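A quick way to see why "perfect reliability" cannot work: a unitary gate such as CNOT preserves inner products, so two non-orthogonal inputs stay exactly as non-orthogonal after the gate, and a nonzero overlap puts a strictly positive floor (the Helstrom bound) under the error probability of any deterministic discrimination strategy. A sketch with my own illustrative states:

```python
import numpy as np

# CNOT with the unknown state as control and a blank |0> as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

t1, t2 = 0.3, 1.1                                  # two non-orthogonal states
psi1 = np.array([np.cos(t1), np.sin(t1)], dtype=complex)
psi2 = np.array([np.cos(t2), np.sin(t2)], dtype=complex)
blank = np.array([1, 0], dtype=complex)

out1 = CNOT @ np.kron(psi1, blank)
out2 = CNOT @ np.kron(psi2, blank)

# Unitarity preserves the overlap exactly.
s_in = abs(psi1.conj() @ psi2)
s_out = abs(out1.conj() @ out2)
assert np.isclose(s_in, s_out)

# Helstrom minimum error probability for equal priors: strictly positive
# whenever the states are non-orthogonal, so no scheme is perfectly reliable.
p_err_min = 0.5 * (1 - np.sqrt(1 - s_out**2))
assert p_err_min > 0
```

Repeating the CNOT with more blanks does not help either: the many-copy overlaps only approach zero asymptotically, never reaching it in finitely many steps.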
"The security of quantum cryptography relies on the fact that we cannot distinguish, with certainty, several nonorthogonal quantum states (with 100% efficiency). That is, if we could distinguish the several nonorthogonal quantum states used as the information carriers in quantum cryptography, then we could successfully eavesdrop on it. However, there is one exception to the indistinguishability: in the case of two nonorthogonal states, we can distinguish between them with certainty, albeit with an efficiency η < 1 [7]-[9] ... Thus the one at site 2 can distinguish the two mixtures and can implement the superluminal communication. It follows from the impossibility of superluminal communication that the unambiguous measurement is not possible in this case."
However, if we have only ambiguous discrimination of non-orthogonal states, might we still have (albeit uncertain) superluminal communication?
Phys. Rev. A 64, 022311 (2001) [10 pages]
Optimum unambiguous discrimination between linearly independent nonorthogonal quantum states and its optical realization
Yuqing Sun1, Mark Hillery1, and János A. Bergou1,2
1Department of Physics, Hunter College, City University of New York, 695 Park Avenue, New York, New York 10021
2Institute of Physics, Janus Pannonius University, H-7624 Pécs, Ifjúság útja 6, Hungary
Received 24 October 2000; published 13 July 2001
Unambiguously distinguishing between nonorthogonal but linearly independent quantum states is a challenging problem in quantum information processing. In principle, the problem can be solved by mapping the set of nonorthogonal quantum states onto a set of orthogonal ones, which then can be distinguished without error. Such nonunitary transformations can be performed conditionally on quantum systems; a unitary transformation is carried out on a larger system of which the system of interest is a subsystem, a measurement is performed, and if the proper result is obtained the desired nonunitary transformation has been performed on the subsystem. We show how to construct generalized interferometers (multiports), which when combined with measurements on some of the output ports, implement nonunitary transformations of this type. The input states are single-photon states in which the photon is divided among several modes. A number of explicit examples of distinguishing among three nonorthogonal states are discussed, and the networks that optimally distinguish among these states are presented.
© 2001 The American Physical Society
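The "unambiguous but sometimes inconclusive" measurement quoted above has a simple two-state POVM form (the Ivanovic-Dieks-Peres scheme): one outcome that fires only for psi1, one that fires only for psi2, and an inconclusive outcome that absorbs the efficiency loss eta < 1. A numerical sketch under my own parameterization, verifying the optimal success probability 1 - |<psi1|psi2>|:

```python
import numpy as np

theta = 0.6
psi1 = np.array([np.cos(theta),  np.sin(theta)])
psi2 = np.array([np.cos(theta), -np.sin(theta)])
s = abs(psi1 @ psi2)                      # overlap |cos(2*theta)|

def proj(v):
    return np.outer(v, v.conj())

# Orthogonal complements: E1 is built from psi2_perp so it never fires on psi2.
psi1_perp = np.array([np.sin(theta), -np.cos(theta)])
psi2_perp = np.array([np.sin(theta),  np.cos(theta)])

c = 1.0 / (1.0 + s)                       # largest weight keeping E0 >= 0
E1 = c * proj(psi2_perp)                  # conclusive "it was psi1"
E2 = c * proj(psi1_perp)                  # conclusive "it was psi2"
E0 = np.eye(2) - E1 - E2                  # inconclusive outcome

# Valid POVM: E0 positive semidefinite.
assert np.min(np.linalg.eigvalsh(E0)) > -1e-12
# Truly unambiguous: zero error on the wrong state.
assert abs(psi2.conj() @ E1 @ psi2) < 1e-12
assert abs(psi1.conj() @ E2 @ psi1) < 1e-12

# Success probability with equal priors hits the IDP limit 1 - s.
p_success = 0.5*(psi1.conj() @ E1 @ psi1) + 0.5*(psi2.conj() @ E2 @ psi2)
assert np.isclose(p_success.real, 1 - s)
```

So certainty is bought at the price of the inconclusive rate s, which is precisely the loophole the quoted passage invokes to protect the no-superluminal-signaling constraint.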
to be continued