
Jan 25
On Jan 24, 2011, at 4:26 PM, Jack Sarfatti wrote in the locker room:

Nothing like a steam to clear the mind. On iPhone, so this is brief lest I forget.

The Newtonian cosmic surface gravity c^2 Lambda^1/2 at the edge of our future light cone is 10^-7 cm/sec^2.

The Einstein static LNIF blue shift factor is 10^28/x = 1/(Lambda^1/2 x)

x = distance to horizon

In cgs units, to order of magnitude:

k = 10^-16 erg/K (Boltzmann constant)

h = 10^-27 erg-sec

c = 10^10 cm/sec

Unruh temperature

T = (10^28/x)(h 10^-7/ck) = 1/x

with x in cm and T in degrees Kelvin. This is ~10^11 deg when x = h/mc.
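A quick order-of-magnitude check of this arithmetic in cgs units (a sketch with round constants; the value sqrt(Lambda) ~ 10^-28 cm^-1, i.e. a ~10^28 cm de Sitter horizon, is an assumed input, not stated in the post):

```python
# Order-of-magnitude check of the blue-shifted Unruh temperature above, cgs.
hbar, c, kB = 1e-27, 3e10, 1.4e-16   # erg-sec, cm/s, erg/K (round values)
sqrt_Lambda = 1e-28                  # cm^-1, assumed de Sitter scale

def unruh_T(x):
    """Unruh temperature (K) for a static LNIF at distance x (cm) from the horizon."""
    g_surface = c**2 * sqrt_Lambda        # ~1e-7 cm/s^2 cosmic surface gravity
    blueshift = 1.0 / (sqrt_Lambda * x)   # Einstein static LNIF blue shift factor
    g = g_surface * blueshift             # = c^2/x
    return hbar * g / (c * kB)            # ~ 1/x with these round numbers

m_e = 9.1e-28                 # electron mass, g
x = hbar / (m_e * c)          # h/mc quantum smear, ~4e-11 cm
print(unruh_T(x))             # ~6e9 K with these constants; the post's rounder
                              # numbers give the same 10^10-10^11 ballpark
```

The exact prefactor depends on which round constants one adopts; the point is only that T scales as 1/x and becomes enormous at Compton-length distances from the horizon.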

Jan 30, 2011. The above calculation is suspect. I now get

g(x) ~ c^2 Lambda^1/4 x^-1/2

and this gives x ~ Gm/c^2 ~ 10^-56 cm to get T > 2mc^2/k.

However, this is smaller than Lp ~ 10^-33 cm and so this is also suspect.

Consider a real electron-positron plasma pulled out of the vacuum in the h/mc fuzzy quantum uncertainty layer of our future horizon. This real plasma is the Wheeler-Feynman total absorber.

Bohr's insistence on the total experimental arrangement holds here. As Hawking and Gibbons pointed out in 1974 or so, GR is also observer dependent: acceleration (deviation from geodesics) changes the quantum vacuum - real vs virtual particles are not invariant when there is acceleration.

For non-accelerating LIF geodesic detectors, as the universe expands the wavelength stretches - i.e., the conventional redshift for retarded photons moving past-to-future in the expanding universe. Static LNIFs are a very different story.

On Jan 24, 2011, at 11:28 PM, JACK SARFATTI wrote:

Remember, the acceleration of static LNIFs in these SSS (static spherically symmetric) metrics is

g(r) = (Newtonian acceleration) g00(r)^-1/2

where g00 ---> 0 at a horizon

therefore g(r) ---> infinity at a horizon

therefore, the Unruh temperature for virtual electron-positron pairs clamped to the horizon, T = hg(r)/ckB, ---> infinity classically, but is finite from the Heisenberg uncertainty principle, e.g. smearing over h/mc from the classical horizon.
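A rough numerical sketch of this divergence and its Compton-length cutoff. The solar-mass black hole is purely illustrative (not from the email), and the near-horizon expansion 1 - rs/r ~ (r - rs)/rs is used to keep the floating-point arithmetic honest at tiny offsets:

```python
# Static-LNIF acceleration g(r) = (GM/r^2)/sqrt(1 - rs/r) diverges at the
# horizon; smearing r - rs over a Compton length h/mc keeps the Unruh
# temperature T = hbar g/(c kB) finite. All values cgs, order of magnitude.
G, c, kB, hbar = 6.7e-8, 3e10, 1.4e-16, 1e-27
M = 2e33                        # g, one solar mass (illustrative)
rs = 2 * G * M / c**2           # ~3e5 cm

def T_smeared(delta):
    """Unruh T (K) for a static LNIF hovering at r = rs + delta, delta << rs.
    Uses 1 - rs/r ~ delta/rs to avoid floating-point cancellation."""
    g = (G * M / rs**2) / (delta / rs)**0.5
    return hbar * g / (c * kB)

m_e = 9.1e-28
smear = hbar / (m_e * c)        # ~4e-11 cm quantum fuzz
print(T_smeared(smear))         # finite (tens of K here), but grows without
                                # bound as delta -> 0, as stated above
```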

On Jan 24, 2011, at 11:21 PM, JACK SARFATTI wrote:

There is no contradiction, Nick. I should have been more precise: the future event horizon is an infinite redshift surface for an advanced photon traveling back in time to the present.

It is an infinite blue shift surface for a retarded photon traveling from the present emitter to the horizon.

Unlike a black hole horizon we can never get a retarded photon from our future event horizon.

Similarly for a black hole horizon BTW. However, retarded photons from just outside the past black hole event horizon will be infinitely redshifted by the time they reach us in the present. Similarly, a retarded photon from us will be infinitely blue shifted for a static LNIF detector just outside the horizon.

The math is simple.

For our future horizon for static LNIF detectors

g00 = 1 - Lambda r^2

where we are at r = 0

In contrast for a black hole horizon again for static LNIF detectors

g00 = 1 - 2rs/r

we are at r ---> infinity

keep dt = ds/g00^1/2 invariant to get the relative periods and frequencies at r1 and r2.
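The frequency bookkeeping above can be sketched numerically. The specific radii and the Lambda value below are illustrative assumptions, used only to show the blue shift blowing up as the receiver approaches either horizon:

```python
# Relative frequencies for static LNIF detectors at r1 and r2 follow from
# keeping ds invariant: f(r2)/f(r1) = sqrt(g00(r1)/g00(r2)).
Lam = 1e-56          # cm^-2, illustrative cosmological constant
rs = 3e5             # cm, illustrative (solar-mass) Schwarzschild radius

def de_sitter_g00(r):      # our future horizon; we sit at r = 0
    return 1 - Lam * r**2

def black_hole_g00(r):     # black hole; we sit at r -> infinity
    return 1 - rs / r

def ratio(g00, r1, r2):
    """Received/emitted frequency for emitter at r1, receiver at r2."""
    return (g00(r1) / g00(r2))**0.5

# Retarded photon from us (r = 0) toward our future horizon: blue shift grows
# without bound as the receiver approaches r = 1/sqrt(Lambda) = 1e28 cm.
print(ratio(de_sitter_g00, 0.0, 0.99e28))
# Retarded photon from far away to a static LNIF just outside a BH horizon: same.
print(ratio(black_hole_g00, 1e12, rs * 1.0001))
```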

On Jan 24, 2011, at 8:57 PM, nick herbert wrote:

you claim that future horizon photons undergo

an infinite red shift.

Now you are claiming those same photons

are infinitely blue shifted.

Jan 24

http://www.lenr-canr.org/ Italian Cold Fusion Experiment 1/24/11 update

Nick Herbert wrote:

On Jan 24, 2011, at 4:50 PM, nick herbert wrote:

I've looked at lenr.org with great interest but as yet with very little enlightenment. Thanks.

About your suggestion that lattice recoil might absorb positron and gamma energies:

When a heavy nucleus emits light particles such as positrons and gammas, the recoil of the nucleus absorbs a lot of momentum but not very much energy. It's like a light person pushing off into the water from a heavy diving raft. The Mossbauer effect (if it exists in this situation) just makes this situation worse. In the M-effect the entire lattice recoils, giving the light particle an effectively infinite mass to push against, hence zero loss of energy due to recoil. This is why the energy of Mossbauer photons is so precise -- no recoil. So neither nuclear recoil nor lattice recoil can reduce the energy of the positron or gammas.
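Nick's recoil point is easy to check numerically. A minimal sketch, with an illustrative 1 MeV gamma and an A = 59 nucleus (values chosen for this example, not taken from the email):

```python
# A nucleus of rest energy M c^2 recoiling from a gamma of energy E absorbs
# momentum E/c but only kinetic energy E_r = E^2/(2 M c^2) -- negligible for
# a heavy nucleus, and smaller still if the whole lattice (M effectively
# infinite) recoils, as in the Mossbauer effect.
E_gamma = 1.0                 # MeV, illustrative gamma energy
M_c2 = 59 * 931.5             # MeV, rest energy of an A = 59 nucleus

E_recoil = E_gamma**2 / (2 * M_c2)
print(E_recoil)               # ~9e-6 MeV: the recoil soaks up momentum, not energy
```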

What we really need, Brian, before spinning our wild theories is more info.

Since the F-R experiment is so robust, i.e., producing kilowatts of power, not mere "anomalous heat", several experiments suggest themselves.

The most obvious experiment is to take a very small amount of Nickel, measure the isotope abundance in a mass spectrometer, run the experiment for as long as it takes to exhaust the fuel (whatever that fuel might be), and of course measure the declining power curve.

Then run a second mass spec on the exhausted fuel to discover what isotopes of Nickel were used up and what isotopes of Copper (if any) were created. This experiment would tell us a lot about what might be going on inside the reactor and produce some very solid evidence for the theorists to chew on.

This is an obvious experiment that any idiot could come up with and I would not be surprised if some version of it is going on right now in Bologna. I certainly wouldn't invest in this machine until some version of this simple and obvious experiment is carried out with the high precision that modern lab technology now routinely permits for these sorts of measurements.

Nick Herbert

Brian--

I did not realize you had done so much leg work on cold fusion. Thanks for the input. I admit my ignorance in this field.

I recognize your criticism that my analysis is just conventional nuclear physics and that other factors (you mention the lattice) might have to be taken into account. Fair enough.

Are you familiar with any serious analysis of the Focardi-Rossi experiment that uses conventional physics in a new way and that explains the high power output and negligible gamma ray output? My analysis suggests that for every kilowatt of heat produced a kilowatt of gamma rays should also emerge.

What reaction produces the energy? How much energy is produced per event? How is it transferred to the water?

I eagerly await your answers--more sophisticated than mine-- to these important questions. No hydrinos, please.

If you accept my "conventional nuclear physics" estimate of 1.5 MeV per event you get a reaction rate of 5x10^16 events/sec. Comparing this to Avogadro's Number 6.02x10^23 we find that to keep the reactor running for one year (3x10^7 sec) requires only 2.5 moles of nickel.

Since a mole of nickel weighs 60 grams, only 150 grams of nickel would be needed for a year's operation. Nice.

An American nickel weighs exactly 5 grams, so if it were made of pure nickel, it would only take 30 nickel coins to run this reactor for a year. Fantastic! Coin-operated fusion to power your house for less than 1/2 cent a day.

(You will also need 2.5 grams of hydrogen)
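Nick's fuel arithmetic above checks out; a short sketch reproducing it with his stated inputs:

```python
# Reproduce the estimate above: 12 kW at 1.5 MeV per event, run for one year.
MeV_to_J = 1.6e-13
power = 12_000                      # W = J/s
rate = power / (1.5 * MeV_to_J)     # reactions/s -> 5e16
events = rate * 3e7                 # ~one year in seconds
moles = events / 6.02e23            # Avogadro's number
grams_Ni = moles * 60               # ~60 g/mol for nickel
grams_H = moles * 1                 # ~1 g/mol for hydrogen
print(rate, moles, grams_Ni, grams_H)   # 5e16/s, ~2.5 mol, ~150 g Ni, ~2.5 g H
```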

I realize that my "conventional analysis" may be flawed. And am very interested in seeing your "unconventional analysis" of this very unconventional experiment.

Nick Herbert

On Jan 24, 2011, at 1:38 AM, Brian Josephson wrote:

--On 23 January 2011 15:24:11 -0800 nick herbert wrote:

Thank you Brian for providing information on what might be the most important discovery of the 21st century -- the Focardi-Rossi cold-fusion demonstration at the University of Bologna (1/14/11), which produced 12 kW of heat for more than 30 minutes using ordinary hydrogen and Ni as the reactants.

This experiment is important not only because of the large amount of heat produced but also because it uses hydrogen rather than deuterium, which would make power reactors based on the F-R Effect much cheaper to run.

However I (Nick Herbert) am very skeptical concerning the results of this experiment and predict that it will go the way of Pons-Fleischmann, that is, it will never be reproduced by other researchers and may even be a scam.

Nick,

First of all, in saying that the P-F results have never been reproduced you have yourself been the victim of a scam, organised by those for whom the phenomenon would be inconvenient in various ways. You should look at the library and other items at lenr.org to get the facts, which you will not get from journals such as Nature, which censor such research (though there are accredited, less well known journals that have published such research).

The predominant fallacy is to start from the assumption that 'cold fusion' is just a scaled down version of hot fusion so that the same equations may be applied. This ignores the lattice, which may take away the excess energy instead of high energy particles/gamma rays.

My own perspective on this was changed by being given the video 'Fire from Water', which shows scientists who work on this explaining their experiments. It became clear from this that the usual assessment is badly flawed. While in Boston for a conference I took the opportunity to visit the lab of Mitchell Swartz, who had a pretty convincing setup: there are two identical boxes, one with an ordinary resistor and one with his 'phusor', and you compare temperature rises. In this case the phusor's temperature rise was bigger by 30%, not a negligible amount. While the total power was of the order of a watt, it continued for the order of days, so that stored or chemical energy could be excluded as an explanation (the amount of active material being very small). I have also visited the lab of Thomas Claytor at LANL, where he uses a glow discharge and finds tritium being produced, the evidence for this including detecting the specific decay with a scintillation counter, and having the correct half-life. Finally there is the famous anomalous (and unfortunately not reproducible) event in an expt. by Mizuno, where the metering equipment recorded the rapid rise of the water temperature to near boiling (the resulting explosion left Mizuno deaf for a week). The energy required to heat this amount of water was far greater than could be explained by ordinary means.

I should add that while Pd-D gets the most publicity, Ni-H has been studied also, so it should not be thought that there is something fishy about this. While Rossi's demonstration was far from perfect, it looks fairly convincing assuming it has not been 'fixed'. One hopes that they will do a better and more controlled expt. (e.g. measuring heat produced more directly, and running it longer) before long, to settle doubts that have been raised. The difficulty is that if one plans to develop something for practical use and make money then one cannot publish in the usual way so people can reproduce it -- there are a number of people who are doing this. The test will be when (if) people start to use the devices for energy generation.

Brian

PS: I wonder what Hal Puthoff, who I see is on the cc list, thinks of the Rossi expt.?

* * * * * * * Prof. Brian D. Josephson :::::::: bdj10@cam.ac.uk

* Mind-Matter * Cavendish Lab., JJ Thomson Ave, Cambridge CB3 0HE, U.K.

* Unification * voice: +44(0)1223 337260 fax: +44(0)1223 337356

* Project * WWW: http://www.tcm.phy.cam.ac.uk/~bdj10

* * * * * * *

From: Paul Zielinski
Date: January 24, 2011 12:37:08 PM PST

To: nick herbert
Subject: Re: Italian Cold Fusion Experiment 1/14/11

OK I guess the otherwise unexplained appearance of copper isotopes in the nickel could be considered to be the signature of a nuclear reaction. So the situation here is not simple.

I really cannot understand the ferocity of the attacks on Fleischmann and Pons. If it turns out that they did discover a reproducible anomalous effect that cannot be explained chemically, then they should be given full credit for the discovery regardless of which theoretical explanation eventually comes to be accepted by the trade union of physicists.

What is this, scientific Balkanization?

Could it be that members of the trade union of chemists are simply not permitted to make fundamental discoveries in physics? Even if the discovery is purely empirical in character, coupled with a declaration that there is no obvious chemical explanation for the phenomenon?

On Mon, Jan 24, 2011 at 10:33 AM, Paul Zielinski wrote:

Obviously, even if there is no ready theoretical explanation for the anomalous quantity of heat generated by such devices (given that none of the usual signatures of a nuclear reaction are present), the empirical fact of anomalous heat generation is in itself significant and, as Nick says, potentially world changing. That such a process is theoretically unexpected is then merely a challenge for the theoreticians.

On Jan 24, 2011, at 12:57 PM, Paul Murad wrote:

Paul:

This process is quite interesting. When the original experiment came up, people did their due diligence but expected things to happen right away. They did not recognize that the process may take anywhere from 24 hours to a week to start.

In an attempt to eliminate unknowns, they used electrodes involving pure Palladium; however, it was the impurities in the Palladium that actually started the reaction. The purer the Palladium, the longer the time required to start the process.

George Miley presented a STAIF paper where he used a laminated anode that consisted of several layers of material. What was intriguing was that the reaction started almost instantaneously. The next part of his experiment was to count neutrons to ensure that it was a nuclear reaction and not something else.

Ufoguy....

Jan 24

Scott Chubb, in a Nov 2010 Physics Today letter, gives a theory of cold fusion without gamma rays.

from a public email list

On Jan 23, 2011, at 3:24 PM, nick herbert wrote:

"Thank you Brian for providing information on what might be the most important discovery of the 21st century -- the Focardi-Rossi cold-fusion demonstration at the University of Bologna (1/14/11), which produced 12 kW of heat for more than 30 minutes using ordinary hydrogen and Ni as the reactants.

This experiment is important not only because of the large amount of heat produced but also because it uses hydrogen rather than deuterium, which would make power reactors based on the F-R Effect much cheaper to run.

However I (Nick Herbert) am very skeptical concerning the results of this experiment and predict that it will go the way of Pons-Fleischmann, that is, it will never be reproduced by other researchers and may even be a scam.

I hope I am wrong because this Bologna invention could truly change the world.

Here is my reasoning (I welcome input from Brian and other physicists).

Focardi & Rossi are disturbingly vague concerning the actual mechanism of

their reaction except that it is some sort of nuclear reaction between protons

(hydrogen nuclei) and Nickel. Such a p-Ni reaction would be expected to produce

an isotope of copper in an excited state which upon de-excitation would transfer some

of its energy to the water bath. Since this is a nuclear reaction rather than a chemical

one, we expect a great deal of energy from each reaction R.

The conversion factor between nuclear energies (MeVs) and heat energy (Joules) is:

1 MeV = 1.6 X 10^-13 Joules

Now suppose each nuclear reaction R produces 1.5 MeV. Then to produce a power output of 12 kW (= 12000 Joules/sec) would require a reaction rate R(dot) of

R(dot) = 5x10^16 reactions/second

Consider a typical p-Ni reaction with Ni(58) -- the most abundant Ni isotope (68.3%) -- taken from an AEC Chart of the Nuclides:

p + Ni(58) --> Cu(59)* --> Ni(59) + positron + neutrino + gamma.

According to the AEC chart the positron and neutrino share an energy of 3.78 MeV and the gamma takes away another 1.1 MeV. For the sake of easy calculation I assume the average kinetic energy of the positron is 1.5 MeV AND THAT THIS ENERGY IS ABSORBED BY THE WATER BATH. The rest of the beta-decay energy is taken away by the neutrino and plays no further part in the calculation.

Hence the figure used here of 1.5 MeV per reaction.

However, in addition to the heat energy, each nuclear reaction produces 3 gamma rays: one (of energy 1.1 MeV) from de-excitation of Cu(59)* and two (of 0.511 MeV each) from annihilation of the positron.

Hence the Focardi-Rossi reactor operating at 12000 watts might be expected to produce a gamma ray output of

G(dot) = 15 x 10^16 gamma rays/second.

The strength of a radioactive source is measured in Curies:

1 Curie = 3.7 x 10^10 disintegrations/sec

Thus the calculated gamma ray flux from the F-R reactor is equivalent to a radiation source of strength

4x10^6 Curies of gamma radiation

[expected output from the Focardi-Rossi reactor]
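The curie figure follows directly from the reaction rate derived above; a minimal check:

```python
# 3 gammas per event (one 1.1 MeV de-excitation gamma plus two 0.511 MeV
# annihilation gammas) at the 5e16/s rate implied by 12 kW and 1.5 MeV/event.
rate = 5e16                  # reactions/s
gammas = 3 * rate            # -> 1.5e17 gammas/s
curie = 3.7e10               # disintegrations/s per Curie
strength_Ci = gammas / curie
print(strength_Ci)           # ~4e6 Ci, the figure quoted above
```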

From the Wikipedia entry on the Curie as a unit of radiation

"A radiotherapy machine may have 1000 Curies of a radioisotope such as Cesium 137

or Cobalt 60. This quantity of nuclear material can produce severe health effects within a few minutes of exposure."

ONE THOUSAND CURIES is considered a seriously dangerous radiation source.

Yet the calculated radiation strength of the Focardi-Rossi reactor (running at 12000 watts) is in the range of ONE MILLION CURIES.

To their credit the researchers at Bologna placed a gamma ray detector alongside the reactor, but it detected essentially nothing during the course of the experiment.

In conclusion, I wish the Italian experimenters every success and hope that their invention will revolutionize the world by producing a cheap, efficient and clean source of energy for mankind.

However the total lack of gamma rays from what is supposed to be a nuclear reaction is highly suspicious. If I were an investor I would be extremely cautious about putting money into this scheme until F & R manage to produce a plausible mechanism for the operation of their device.

What's your thinking concerning this device, Brian?

===============================

Nick Herbert

http://quantumtantra.blogspot.com

On Jan 23, 2011, at 2:34 AM, Brian Josephson wrote:

--On 22 January 2011 16:54:37 -0800 Paul Zielinski wrote:

It's a long thesis and not everyone has the time to wade through it.

She has an interesting article in July 2010's Scientific American (requires subscription to read it all online), "Is the Universe Leaking Energy?":

"Total energy must be conserved. Every student of physics learns this fundamental law. The trouble is, it does not apply to the universe as a whole..."

Brian

PS on the subject of Sci.Am., take a look at

and my comment. Disgraceful, but one expects no better from them!

PS2: are you (Sharon especially) aware that a cold fusion reactor has been demonstrated in Italy? The 'scientific report' is due out soon. See

<http://www.journal-of-nuclear-physics.com/?p=360>

and <http://www.lenr-canr.org/News.htm>


Jan 23

1) Remember, retarded light falling into a black hole is blue shifted for static LNIFs just outside the horizon. The infinite redshift is for retarded light leaving a static LNIF emitter just outside the event horizon. It is not clear if light emitted by a geodesic LIF emitter falling through the horizon will be similarly redshifted, because the LIF metric is approximately Minkowski, not g00 = 1 - rs/r. That is, the LIF signal should be the same as the static LNIF at r ---> infinity. It is not correct to use the g00 = 1 - rs/r metric representation for any devices that are not static hovering. The Pound-Rebka experiment, for example, is done with static LNIF detectors clamped to the Harvard Tower.

On the other hand we have the GPS satellites, which in free orbit should correspond to r --> infinity. Therefore, the question is whether the accuracy and precision of the GPS redshift corrections can detect the difference between r = the satellite's position and "infinity".

Suppose the satellite is twice the distance from the center of Earth to the Earth's surface where the approximately static LNIF ground detector is.

f(r)(1 - rs/r)^1/2 = f(2r)(1 - rs/2r)^1/2, so that

f(2r)/f(r) = (1 - rs/r)^1/2/(1 - rs/2r)^1/2

rs/r << 1

f(2r)/f(r) ~ (1 - rs/2r)/(1 - rs/4r) ~ (1 - rs/2r)(1 + rs/4r) ~ 1 - rs/2r + rs/4r ~ 1 - rs/4r

vs 1 - rs/2r if we use r --> infinity for the satellite

note that rs ~ .4cm

& r ~ 6x 10^8 cm

rs/r ~ (2/3)10^-9

for a visible light signal the gravity redshift is of order 10^-9 x 10^15 Hz ~ MHz

so the issue is can these fractions be unambiguously detected? It would seem so.
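A sketch of the size of these shifts, using the post's own round numbers (rs ~ 0.4 cm, r ~ 6x10^8 cm, and a 10^15 Hz optical signal; all are the post's illustrative values, not precise geodesy):

```python
# Compare the satellite-at-2r frequency ratio with the r -> infinity limit.
rs = 0.4        # cm, the post's round value for Earth
r = 6e8         # cm, roughly one Earth radius

ratio_2r = ((1 - rs / r) / (1 - rs / (2 * r)))**0.5   # f(2r)/f(r), satellite at 2r
ratio_inf = (1 - rs / r)**0.5                          # satellite at "infinity"

f = 1e15                           # Hz, visible light
shift_2r = (1 - ratio_2r) * f      # ~ (rs/4r) f, a couple of 1e5 Hz
shift_inf = (1 - ratio_inf) * f    # ~ (rs/2r) f, about twice as large
print(shift_2r, shift_inf)
```

The two cases differ by roughly a factor of 2 at the sub-MHz scale quoted above, so the question of whether the difference is detectable is a question about fractional frequency stability at the 10^-10 level.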

Again the problem is, does the free-float GPS detector see the static LNIF metric or the LIF metric? The equivalence principle says the latter. What is seen depends on a transaction between both the sender and receiver detectors.

2) We can never see retarded light from our future horizon - unlike the black hole situation. Is the dark energy advanced Unruh radiation from our future horizon at temperature T ~ (h/ckB) c^2 Lambda^1/2?

Following Lenny Susskind - each BIT on the horizon is nonlocally smeared over the entire horizon from our POV. The Stefan-Boltzmann law gives Poynting flux ~ T^4 ~ Lambda^2 per BIT, but the number of BITS is N ~ 1/(Lp^2 Lambda), which gives the observed

dark energy density ~ N Lambda^2 ~ hc Lambda/Lp^2
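A numerical check of this estimate in cgs. The Lambda and Planck-length inputs below are standard order-of-magnitude values assumed for illustration, not taken from the post:

```python
# Holographic dark energy density estimate: rho ~ hbar c Lambda / Lp^2.
hbar, c = 1e-27, 3e10   # erg-sec, cm/s
Lam = 1e-56             # cm^-2, ~inverse square of the de Sitter horizon radius
Lp2 = 2.6e-66           # cm^2, Planck length squared

N = 1 / (Lp2 * Lam)         # number of horizon BITs, ~4e121
rho = hbar * c * Lam / Lp2  # erg/cm^3
print(N, rho)   # rho ~ 1e-7 erg/cm^3, within a couple of orders of the
                # observed ~1e-8 erg/cm^3 dark energy density
```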

3) The apparent conflict between the local gauge principle and the Wheeler-Feynman light cone limited advanced-retarded idea needs clarification I agree. I mean the issue of dynamically independent vector fields to mediate forces between spinor fields.

4) The conformal singularity is an artifact of a solution of Guv + Lambda guv = 0. Therefore it cannot be fundamental to the problem of deriving that law of nature.

5) Hoyle and Narlikar do assert that in the de Sitter Lambda > 0 solution the total absorber condition is obeyed, so we need to see if their argument is correct.

6) I am hopeful about my real electron-positron plasma idea from the blue-shift Unruh effect high temperature at our observer-dependent future event horizon.

On Sat, Jan 22, 2011 at 10:50 PM, JACK SARFATTI wrote:

I don't quite understand your verbal argument below on first reading, but my question is, what does it mean that gravity is fundamentally electromagnetic?

How do you get Einstein's field equations

Guv + kTuv = 0

from

Maxwell's EM field equations?

dF = 0

d*F = *J

How do you get a metric field guv from them in Minkowski spacetime?

Or do you mean something else?

EM is the local gauge field of an internal compact U1 group that is not universal for uncharged fields.

Gravity is the local gauge field of an external non-compact universal group, e.g. T4 for a start.

*At first I mean only that EM radiation is a change in the state of the gravitational background. That conclusion seems forced after considering the effect of the future conformal singularity AND the observed fact of retarded radiation. These two are compatible only if the radiation can be re-interpreted as a reduction in a negative energy field. The only candidate is gravitation. What does this mean for the relationship between the two? I guess you could take the view that gravity is more fundamental than EM if you wish, so my statement about which is the more fundamental is premature.*

With no fields, the U1 gauge invariance loses its meaning. Starting from direct action, one is free to manufacture fields to carry the forces in order to facilitate the mathematics, provided of course one does not accidentally introduce additional degrees of freedom (associated with vacuum solutions). If one does so, then those fields must respect U1 invariance.

Perhaps if direct action EM were somehow to underpin gravity, then the resulting theory would likewise be direct action, along the lines of Regge Calculus perhaps. If so, if the tetrad could in principle be eliminated from the theory, then, just as for U1 in EM, the associated invariances would lose their meaning.

This is not to imply the above moves us any closer to identifying the conjectured relationship between EM and GR. All it means is that I would not start by trying to construct a theory exhibiting diffeomorphism covariance.

- Michael

On Jan 22, 2011, at 8:29 PM, michael ibison wrote:

On Sat, Jan 22, 2011 at 5:39 PM, JACK SARFATTI wrote:

OK thanks

On Jan 22, 2011, at 12:35 PM, michael ibison wrote:

Jack:

A few notes of clarification in response to your recent exchanges.

I looked at the evolution of the Dirac wavefunction as it crossed the conformal singularity.

True, I did treat the EM fields classically, but the outcome for the EM fields would have been the same if they had been quantized.

In a sense the fields do continue across the singularity 'forever'. But the constraint that the Friedmann equation is obeyed means that the post singularity evolution must mirror the pre-singularity evolution up to a certain set of discrete symmetries. This turns out to be enough to make the singularity look like a time-like magnetic mirror in the coordinate system I analyzed. In other coordinate representations of the de Sitter asymptotic future, the singularity has a different structure and the set of discrete symmetries are correspondingly different with corresponding consequences for the property of the mirror.

An outcome of this - if correct - is that discrete local symmetries (C P T Mass) are related to the global topology as implied by the particular choice of coordinate system to represent the singularity.

Fine, but does that tell us anything about total absorber condition?

As I point out in the paper, this has observable consequences unrelated to the direct action versus field theory debate. Roughly, the effect of the mirror is to invert the W/F argument and leads to the conclusion that the natural Green's function is advanced, not retarded. By natural I mean automatically consistent with the mirror condition, requiring no additional complementary-function terms.

1. Presume that matter is electromagnetically bound, i.e. inter-particle interaction energies are generally negative.

and then

2. Re-interpret the increase of field energy diverging on the future oriented cone (normally associated with radiation) as a reduction in the negative binding energy of the matter.

Retarded EM radiation from a source is therefore re-interpreted as annihilation of negative energy on the advanced cone of that source.

I argue in the paper that the only way to allow for pair annihilation (the maximum radiation possible from a source) is to identify the negative background energy with (Cosmological) gravitation. And so I see no way to reconcile theory with observation unless gravity is fundamentally electromagnetic.

*Subject: Re: why doesn't light just go on forever?In the retro-causal hologram idea, / is fundamental./^-1 is the entropy of our observable universe.
I mean in the Wheeler-Feynman sense - the area at the intersection of our future light cone with our 2D future event horizon with N(t) BITS.All material objects at our here-now are 3D hologram images of the N(t) BITS.This is really crazy and I don't really believe it, but it is the logical conclusion of what 't Hooft and Susskind started. ;-)On Jan 24, 2011, at 11:10 PM, JACK SARFATTI wrote:On Jan 24, 2011, at 9:07 PM, michael ibison wrote:On Sun, Jan 23, 2011 at 2:09 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:1) the apparent conflict between the local gauge principle and the Wheeler-Feynman light cone limited advanced-retarded idea needs clarification I agree.I mean the issue of dynamically independent vector fields to mediate forces between spinor fields.Yes. I am writing this up.
2) The conformal singularity is an artifact of a solution of Guv + /guv = 0. Therefore it cannot be fundamental to the problem of deriving that law of nature.That is a very interesting statement. First of all, what does artifact mean in this context?Contingent, accident of history - think of the landscape idea.Presumably the field equations are more fundamental than their particular solutions with arbitrary boundary/initial/final conditions/In other words some property of a solution like the conformal singularity cannot be the fundamental basis for the field equations that contain solutions without a conformal singularity (or any other particular feature in a solution). The field equation (critical points of the action functional) gives a space of solutions for all possible boundary/initial/final constraints. I include final conditions in the sense of Yakir Aharonov since quantum field theory, unlike classical field theory, has independent pre-selected initial and post-selected final constraints for measurements in-between.It implies I think an accident. Something that might conceivably have been otherwise.Right.I also am uncomfortable that such an important boundary condition arises accidentally, from an unrelated direction (the presence of the / vacuum term). Therefore, I expect these things are related.Maybe so, Eugene Wigner I think worried about this, so did Wheeler. 3) Hoyle and Narlikar do assert that in the de Sitter / > 0 solution that the total absorber condition is obeyed so we need to see if their argument is correct.I doubt this is correct. Can you please give a reference? I have their 2 books and their paper and do not recall them calling on intrinsically Cosmological effects to produce a W/F boundary condition. I recall them discussing only absorption by inter-galactic plasmas. Anyhow, with all due respect to Hoyle and Narlikar, they are wrong if did so. 
In his book 'The Physics of Time Asymmetry' Davies details the historical attempts to justify the boundary condition. None succeeds according to that analysis.It's in their two little books, also it's in their RMP paperThe "steady state" theory has the same essential properties as the de Sitter / > 0 in this regard. - Michael4) I am hopeful about my real electron-positron plasma idea from the blue-shift Unruh effect high temperature at our observer-dependent future event horizon.
On Jan 23, 2011, at 9:19 AM, michael ibison wrote:On Sat, Jan 22, 2011 at 10:50 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:I don't quite understand your verbal argument below on first reading, but my question is, what does it mean that gravity is fundamentally electromagnetic?How do you get Einstein's field equationsGuv + kTuv = 0fromMaxwell's EM field equations?dF = 0d*F = *JHow do you get a metric field guv from them in Minkowski spacetime?
Or do you mean something else?EM is the local gauge field of an internal compact U1 group that is not universal for uncharged fields.Gravity is the local gauge field of an external non-compact universal group, e.g. T4 for a start.At first I mean only that EM radiation is a change in the state of the gravitational background. That conclusion seems forced after considering the effect of the future conformal singularity AND the observed fact of retarded radiation. These two are compatible only if the radiation can be re-interpreted a reduction in a negative energy field. The only candidate is gravitation.What does this mean for the relationship between the two?I guess you could take the view that gravity is more fundamental than EM if you wish, so my statement about which is more the fundamental is premature.Regarding the invariances of EM and GR, I would point out that an EM theory that fits the above description, i.e. with a compliant conformal singularity acting as an effective mirror, plus the re-interpretation of radiation I suggested, means that it is possible to construct a direct action version of EM i.e. with no genuine field degrees of freedom consistent with the observation of radiation - just as Wheeler and Feynman had hoped. The direct action version is not mandated by this reasoning, but it would surely be favored over field theory by William of Ockham.With no fields, the U1 gauge invariance loses its meaning. Starting from direct action, one is free to manufacture fields to carry the forces in order to facilitate the mathematics, provided of course one does not accidentally introduce additional degrees of freedom (associated with vacuum solutions). If one does so, then those fields must respect U1 invariance.Perhaps if direct action EM were somehow to underpin gravity, then the resulting theory would likewise be direct action, along the lines of Regge Calculus perhaps. 
If so, if the tetrad could in principle be eliminated from the theory, then, just as for U1 in EM, the associated invariances would lose their meaning. This is not to imply the above moves us any closer to identifying the conjectured relationship between EM and GR. All it means is that I would not start by trying to construct a theory exhibiting diffeomorphism covariance.

- Michael
On Jan 22, 2011, at 8:29 PM, michael ibison wrote:

On Sat, Jan 22, 2011 at 5:39 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

OK thanks

On Jan 22, 2011, at 12:35 PM, michael ibison wrote:

Jack: A few notes of clarification in response to your recent exchanges. The calculation I did was not purely classical.
I looked at the evolution of the Dirac wavefunction as it crossed the conformal singularity.
True, I did treat the EM fields classically, but the outcome for the EM fields would have been the same if they had been quantized.
In a sense the fields do continue across the singularity 'forever'. But the constraint that the Friedmann equation is obeyed means that the post-singularity evolution must mirror the pre-singularity evolution up to a certain set of discrete symmetries. This turns out to be enough to make the singularity look like a time-like magnetic mirror in the coordinate system I analyzed. In other coordinate representations of the de Sitter asymptotic future, the singularity has a different structure and the set of discrete symmetries is correspondingly different, with corresponding consequences for the property of the mirror.

An outcome of this - if correct - is that discrete local symmetries (C P T Mass) are related to the global topology as implied by the particular choice of coordinate system used to represent the singularity.

Fine, but does that tell us anything about the total absorber condition?

If you believe the reasoning that leads to the boundary condition at the singularity (frankly I can see no other option) then Cosmology provides a mirror - not an absorber - at the future conformal singularity. As I point out in the paper, this has observable consequences unrelated to the direct action versus field theory debate. Roughly, the effect of the mirror is to invert the W/F argument and leads to the conclusion that the natural Green's function is advanced, not retarded. By natural I mean automatically consistent with the mirror condition, requiring no additional complementary-function terms.

Though this would seem to be in direct conflict with the observation of (predominantly) retarded radiation, there is a way to achieve agreement as follows:

1. Presume that matter is electromagnetically bound, i.e. inter-particle interaction energies are generally negative.

and then

2.
Re-interpret the increase of field energy diverging on the future-oriented cone (normally associated with radiation) as a reduction in the negative binding energy of the matter.

Retarded EM radiation from a source is therefore re-interpreted as annihilation of negative energy on the advanced cone of that source. I argue in the paper that the only way to allow for pair annihilation (the maximum radiation possible from a source) is to identify the negative background energy with (Cosmological) gravitation. And so I see no way to reconcile theory with observation unless gravity is fundamentally electromagnetic.

On Jan 22, 2011, at 11:15 AM, Paul Zielinski wrote:

On Sat, Jan 22, 2011 at 1:11 AM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

On Jan 21, 2011, at 11:18 PM, Paul Zielinski wrote:

On Fri, Jan 21, 2011 at 7:23 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

On Jan 21, 2011, at 6:53 PM, Paul Zielinski wrote:

If their "physical part" LC^ represents a "true" physical quantity, why would it not be generally

covariant in GTR? Given that GTR is a generally covariant theory?

All that shows is that your intuition, detached from the mathematical machinery, leads you to wrong hunches.

It's a question. A perfectly reasonable one in my view.

What is the point of general covariance if physical quantities are not generally covariant?

Fair question.

General covariance is simply the local gauge invariance of the translation group T4(x).

Mathematically this is just a fancy recipe for generating GCTs. It's not at all clear to me that such local invariance has any more meaning than that in gauge gravity.

The local gauge principle is an organizing meta-principle that unifies and works. Also it gives physical meaning to GCTs when combined with the equivalence principle, as the computation of invariants by locally coincident Alice and Bob, each independently in arbitrary motion, measuring the same observables. SR is restricted to inertial motions and constant acceleration hyperbolic motion (Rindler horizons & maybe extended to special conformal boosts).

Locally gauging SR with T4 ---> T4(x) gives 1916 GR.

However, the INDUCED spin 1 vector tetrad gravity fields e^I are fundamental with guv spin 2 fields as secondary. Nick's problem why no spin 1 & spin 0 in addition to spin 2 still needs a good answer of course.

In terms of reference frames, doesn't this simply mean that the observer's velocity is allowed to vary from point to point in spacetime?

No. It means that and a lot more. The coincident observers can also have acceleration, jerk, snap, crackle, pop, i.e. D^nx^u(Alice, Bob ...)/ds^n =/= 0 for all n.

Physically it corresponds to locally coincident frame transformations between Alice and Bob each of which is on any world line that need not be geodesic, but can be.

I think you should say here that it is the invariance of tensor quantities under such transformations.

It's COVARIANCE not INVARIANCE. Invariants can be constructed by contractions of COVARIANTS.

e.g. in non-Abelian gauge fields SU2 & SU3, unlike the U1 Maxwell electrodynamics the curvature 2-form F^a is not invariant, but is covariant

i.e.

F^a = dA^a + f^abcA^b/\A^c

[A^b,A^c] = f^abcA^a

F^a ---> F^a' = G^a'aF^a

This is COVARIANCE not INVARIANCE (U1 is a degenerate case exception).

G^a'a is a matrix irrep of G (Lie gauge group of relevant frame transformations).

when the gauge connection Cartan 1-form transforms inhomogeneously (not a G-tensor)

A^a --> A^a' = G^a'aA^a + G^bc'G^a'b,c'

In gravity G is a universal space-time symmetry group for all actions of all physical fields including their couplings. This is the EEP in most fundamental form.
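A minimal numerical sketch of the non-Abelian commutators quoted above, taking SU2 with generators T^a = sigma^a/2 and structure constants f^abc = epsilon^abc (the code and names are illustrative, not part of the original exchange):

```python
import numpy as np

# Pauli matrices; T^a = sigma^a / 2 are the SU(2) generators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s / 2 for s in (sx, sy, sz)]

# Structure constants f^abc = epsilon^abc (totally antisymmetric)
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c] = 1.0
    f[a, c, b] = -1.0

# Verify [T^b, T^c] = i f^bca T^a: the commutators do NOT vanish,
# which is why F^a is only covariant, not invariant, under SU(2)
for b in range(3):
    for c in range(3):
        comm = T[b] @ T[c] - T[c] @ T[b]
        rhs = 1j * sum(f[b, c, a] * T[a] for a in range(3))
        assert np.allclose(comm, rhs)
print("SU(2) commutation relations verified")
```

The non-vanishing commutator is exactly the f^abcA^b/\A^c term in F^a above; for U1 it is zero, which is the degenerate case mentioned.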

But if "physical" quantities (e.g., LC^) are not invariant under such transformations, what is the point of general covariance?

As far as I can see calling GCTs "gauge transformations" based on a superficial analogy with internal parameter gauge theory

doesn't change anything.

The intrinsic induced pure gravity fields are the four tetrads e^I that form a Lorentz group 4-vector hence spin 1.

Well this is tricky. It is the tetrad *transformations* e^u_a that represent the Einstein field. Such transformations take you from a coordinate LNIF basis to an LIF orthonormal non-coordinate tetrad basis. Thus the e^u_a pick up both the intrinsic geometry *and* the coordinate representation of the LNIF.

Of course the e^u_a and the e^a_u can also be treated as the components of the LNIF coordinate basis vectors in the tetrad

basis, and vice versa, but that is another matter.

Each e^I is generally INVARIANT i.e. scalar under GCTs T4(x).
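A symbolic sketch of how the tetrad set reconstructs the spin-2 metric, guv = e^I_u eta_IJ e^J_v, using the standard static Schwarzschild tetrad as an assumed example (illustrative only, not from the original thread):

```python
import sympy as sp

r, th, rs = sp.symbols('r theta r_s', positive=True)
f = 1 - rs / r  # Schwarzschild factor

# Orthonormal tetrad components e^I_mu for the static (LNIF) observer,
# arranged as a matrix with row index I (Lorentz) and column index mu (GCT)
e = sp.diag(sp.sqrt(f), 1 / sp.sqrt(f), r, r * sp.sin(th))

eta = sp.diag(-1, 1, 1, 1)  # local Minkowski metric eta_IJ

# Reconstruct g_mu_nu = e^I_mu eta_IJ e^J_nu from the spin-1 tetrads
g = sp.simplify(e.T * eta * e)

assert sp.simplify(g[0, 0] + f) == 0                     # g_tt = -(1 - r_s/r)
assert sp.simplify(g[1, 1] - 1 / f) == 0                 # g_rr = 1/(1 - r_s/r)
assert sp.simplify(g[2, 2] - r**2) == 0                  # g_theta_theta
assert sp.simplify(g[3, 3] - r**2 * sp.sin(th)**2) == 0  # g_phi_phi
print("g_uv reconstructed from tetrads")
```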

Right. Local Lorentz frames and LLTs are represented by orthonormal tetrad basis vectors in this model, while we

are free to apply arbitrary GCTs in the local frames. I think it is this subtlety of the tetrad model that has led Chen

and Zhu astray as to their attempted decomposition of the LC connection into physical and "spurious" parts in the

context of plain vanilla GTR (coordinate frame model).

What they appear to have done is extract the first order variation of the metric g_uv from the part that encodes the

Riemann curvature, attributing such first order variation in its entirety to the choice of coordinates. If so then the

entire paper is misconceived IMO.

The LC connection is not gauge invariant nor even gauge covariant - that's an effect of the equivalence principle that Newton's "gravity force" is a chimera - 100% inertial force from the acceleration of the detector in curved spacetime.

Of course and no one is saying that it is. We are talking about the "physical part" LC^ that Chen and Zhu claim to have extracted from LC, after removing what they call the "spurious" part LC_ that according to them simply reflects the choice of coordinates.

But that choice is also physical though not intrinsic. It's physical because it's a state of motion of a detector - ultimately at the operational level, where the hard rubber hits the ground of experience.

My point is that if their "physical part" LC^ is not a covariant quantity, then its intrinsic value likewise depends on

the choice of coordinates. This makes no sense to me. Not only that, but they claim to be able to derive a tensor

vacuum stress-energy density from such a quantity. Since the whole problem with the Einstein and various other

stress-energy pseudotensors is precisely that they are not covariant quantities, what exactly *is* the point of their

paper?

It depends what you mean by "physical". Arbitrary concomitant g-forces are observables even though they are not tensor covariants or part of the intrinsic curvature geometry, which is 100% geodesic deviations.

It's not clear to me whether Chen and Zhu are saying this must be the case in gauge gravity, or in the GTR,

or both. Their reasoning strikes me as obscure.

What happens in local gauging of a rigid group G to a local group G(x) is that the induced compensating connection A Cartan 1-form (principal bundle etc.) needed to keep the extended action of the source matter field (associated bundle etc.) invariant can never be a tensor relative to G(x). That's in the very definition of local gauging.

You're talking here about a connection. Of course, everyone knows that. If the connection itself is a tensor, then you don't get a covariant derivative. A connection has to be non-covariant. In order to correct for curved coordinate artifacts in partial derivatives, it has to depend non-tensorially on the coordinates.

But Chen and Zhu said they were going to remove the coordinate dependent part LC_ from the LC connection to get their "physical part" LC^. If so, then why is the resulting LC^ not a covariant quantity?

And if it isn't, how does it help with the construction of a vacuum stress-energy tensor?

Clear as mud.

All you can hope for is covariance of the "field" 2-form, i.e. the 2-form A-covariant derivative of itself is a tensor under G(x).

D = d + A/\

Jack, no one is saying that connections are tensors. Please.

But A/\A = 0 for U1(x)

a = 1

but

A/\A =/= 0

for SU2(x)

a = 1,2,3

&

SU3(x)

a = 1,2,3,4,5,6,7,8

In general A/\A -> fbc^aA^b/\A^c

i.e. F^a = DA^a = dA^a + fbc^aA^b/\A^c

[A^b,A^c] = fbc^aA^a

In the special case G(x) -> U1(x) the field 2-form F = dA is actually invariant, but not so for SU2(x) & SU3(x)

If G(x) has the representation U(G(x)) then

A -> A' = UAU^-1 + dUU^-1

F --> F' = UFU^-1
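The homogeneous transformation law F' = UFU^-1 can be verified numerically for a rigid SU2 element, where dU = 0 so the inhomogeneous dUU^-1 term drops out (a hedged illustrative check, not part of the original thread):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rng = np.random.default_rng(0)

def lie(rng):
    # random su(2)-algebra element (anti-Hermitian, traceless)
    v = rng.normal(size=3)
    return 1j * (v[0] * sx + v[1] * sy + v[2] * sz) / 2

# Connection components A_mu and gradients dA[m][n] = d_m A_n at one event
A = [lie(rng) for _ in range(2)]
dA = [[lie(rng) for _ in range(2)] for _ in range(2)]

def F(A, dA, m, n):
    # F_mn = d_m A_n - d_n A_m + [A_m, A_n]
    return dA[m][n] - dA[n][m] + A[m] @ A[n] - A[n] @ A[m]

# Rigid SU(2) element via the exact Pauli-rotation formula; dU = 0
theta, n_hat = 0.7, np.array([1.0, 2.0, 2.0]) / 3.0
U = np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * (
    n_hat[0] * sx + n_hat[1] * sy + n_hat[2] * sz)
Ui = U.conj().T  # unitary inverse

Ap = [U @ a @ Ui for a in A]  # A' = U A U^-1 (inhomogeneous term vanishes)
dAp = [[U @ dA[m][k] @ Ui for k in range(2)] for m in range(2)]

# COVARIANCE: F' = U F U^-1, even though [A, A] =/= 0
assert np.allclose(F(Ap, dAp, 0, 1), U @ F(A, dA, 0, 1) @ Ui)
print("F' = UFU^-1 verified for a rigid SU(2) transformation")
```

For a position-dependent U(x) the connection picks up the extra dUU^-1 term, but F still transforms by conjugation, which is the covariance-not-invariance point being made here.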

Now for Einstein's GR G(x) -> T4(x)

and the induced A is NOT the spin 2 Christoffel symbol etc. but the non-trivial TETRAD set.

I guess you mean the tetrad *transformations*, starting from an LNIF coordinate basis.

The induced A clearly depends on the initial coordinates and on the geometry in the general

case.

the internal index a is replaced by the Lorentz group index I (J,K etc).

The induced gravity spin 1 tetrad connection is A^I analog to A^a (Yang-Mills)

I = 0, 1, 2, 3

the relation to the spin 2 Christoffel symbol is very indirect and complicated.

OK fine but beside the point. No one is arguing that a connection is a tensor. As far as I know

no one ever has.

Exactly what is Chen and Zhu's so-called "geometric part" LC_ ? Do you know?

And how do Chen and Zhu propose to derive a vacuum stress-energy *tensor* from LC^ if

LC^ is not itself covariant? How can non-covariant LC^ be a solution to the GR energy

problem?

Doesn't make sense.

On Fri, Jan 21, 2011 at 6:44 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

On Jan 21, 2011, at 6:15 PM, Paul Zielinski wrote:

Yes you're right -- they start by saying that they are separating a coordinate dependent

part LC_ from LC, leaving what they call the "physical part" LC^ that represents the true

gravity field, but then on p8 they say that LC^ cannot transform covariantly under GCTs

due to the way that LLTs are represented in gauge gravity.

So I think they've been led astray by their gauge gravity template.

No, I think it means that what you want to do cannot be done.

I know you think that.

Note, that Arnowitt, Deser & Misner in 1962 have a solution, but it too is too limited in the end.

Yes I'm looking at it.

I think my approach is much simpler, much more direct -- you just remove the coordinate

correction terms from the LC connection, leaving a unique tensor residue that encodes

the intrinsic spacetime geometry. No need for perturbation expansions and so on -- exact

decomposition.

As I said your words are too vague. You just beat around the bush, as if saying what you want to do is the same as doing it.

Come on Jack -- the math is settled. Nothing vague about it.

Just ask Waldyr.

You are doing magickal cargo cult thinking as if wishing makes it so - in my opinion.

Sure Jack.

So how do Chen and Zhu propose to build a covariant vacuum stress energy tensor from non-covariant

LC^? Aren't they just going around in circles? Isn't *that* magickal cargo cult thinking?

This brings us back to the chronic confusion between passive and active diffeomorphisms

in gauge gravity. I think this may be also the root of the confusion in this paper.

On Fri, Jan 21, 2011 at 4:35 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

Z you are mistaken - you have not correctly read the text on p.8

<PastedGraphic-21.tiff>

transformations in eq. 6

On Jan 21, 2011, at 4:17 PM, JACK SARFATTI wrote:

On Jan 21, 2011, at 3:50 PM, Paul Zielinski wrote:

OK I read it.

This is exactly the same decomposition I've been talking about for years, approached from the perspective of gauge

gravity. And yes it points directly to a localized tensor gravitational vacuum stress-energy density, as I have always

maintained.

The "physical" part of the LC connection defined in this paper is just the tensor component of LC

Where do they say that? Copy and paste the exact text please. "Tensor" with respect to what group of frame transformations? With respect to the rigid Poincare group of the background Minkowski spacetime - no problem. Remember they do perturbation theory on a non-dynamical globally flat background.

guv = nuv + huv

huv << nuv

therefore no horizons (g00 -> 0) in this limit.

that corrects for

(and thus encodes) the spacetime geometry, as I've already explained. This part is zero everywhere in a Minkowski

spacetime in *any* coordinate system. I have been calling this the "geometric" part of the LC connection.

What the authors of this paper erroneously describe as the "pure geometric" part of the LC connection is the part that

corrects only for *coordinate* artifacts (what I've been calling the "curved coordinate correction term"). This part is zero

in a Minkowski spacetime in *rectilinear* coordinates, but not in *curvilinear* coordinates. This is the "gauge dependent"

part of the LC connection field.

There is no actual need for any perturbation expansion here -- the decomposition is exact as well as unique and can be

arrived at without the use of any approximations (as I've explained). So the use of perturbation methods in this paper looks

like a quirk of the authors' gauge gravity mindset.

I get the impression that while they are on the right track, Chen and Zhu have not yet fully understood the fundamental

meaning of the LC decomposition, blinded as they are by the arcane mysteries of gauge theory. :-)

On Wed, Jan 19, 2011 at 8:38 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

Unfortunately it's a perturbation series technique - it obviously cannot describe thermal horizons. Throws the baby out with the bath water, but it's a step in the right direction.

"Because of non-linearity, we have to rely again on perturbative method, and require that the gravitational field be at most moderately strong."
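The "at most moderately strong" restriction, and why the perturbative split guv = nuv + huv cannot reach a horizon where g00 -> 0, can be illustrated with Schwarzschild numbers (an assumed example for illustration only):

```python
# For Schwarzschild, g_00 = -(1 - r_s/r), so the perturbation about flat
# spacetime is h_00 = r_s/r. The split g = eta + h needs |h| << 1, which
# fails as r -> r_s, i.e. as the horizon (g_00 -> 0) is approached.
for r_over_rs in (1e6, 1e3, 10.0, 1.1):
    h00 = 1.0 / r_over_rs
    g00 = -(1.0 - h00)
    regime = "weak field OK" if h00 < 0.1 else "perturbation breaks down"
    print(f"r = {r_over_rs:>9g} r_s : h_00 = {h00:.3e}, g_00 = {g00:+.3f} ({regime})")
```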

On Jan 19, 2011, at 7:39 PM, JACK SARFATTI wrote:

yes, this is relevant thanks.

On Jan 19, 2011, at 7:35 PM, Jonathan Post wrote:

If we're trying to distinguish between gravitational effects and pseudo-effects that depend on coordinatizations, is this useful?

Cross-lists for Thu, 20 Jan 11

[73] arXiv:1006.3926 (cross-list from gr-qc) [pdf, ps, other]

Title: Physical decomposition of the gauge and gravitational fields

Authors: Xiang-Song Chen, Ben-Chao Zhu

Comments: 11 pages, no figure; significant revision, with discussion on relations of various metric decompositions

Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph)

Physical decomposition of the non-Abelian gauge field has recently solved the two-decade-lasting problem of a meaningful gluon spin. Here we extend this approach to gravity and attack the century-lasting problem of a meaningful gravitational energy. The metric is unambiguously separated into a pure geometric term which contributes null curvature tensor, and a physical term which represents the true gravitational effect and always vanishes in a flat space-time. By this decomposition the conventional pseudo-tensors of the gravitational stress-energy are easily rescued to produce definite physical result. Our decomposition applies to any symmetric tensor, and has interesting relation to the transverse-traceless (TT) decomposition discussed by Arnowitt, Deser and Misner, and by York.

Jan
22


On Jan 21, 2011, at 6:53 PM, Paul Zielinski wrote: *If their "physical part" LC^ represents a "true" physical quantity, why would it not be generally covariant in GTR? Given that GTR is a generally covariant theory?*

To which I replied:

"All that shows is that your intuition detached from the mathematical machinery leads you to wrong hunches. What happens in local gauging of a rigid group G to a local group G(x) is that the induced compensating connection A Cartan 1-form (principal bundle etc) needed to keep the extended action of the source matter field (associated bundle etc) invariant can never be a tensor relative to G(x). That's in the very definition of local gauging.

All you can hope for is covariance of the "field" 2-form, i.e. the 2-form A-covariant derivative of itself is a tensor under G(x).

D = d + A/\

But A/\A = 0 for U1(x)

a = 1

but

A/\A =/= 0

for SU2(x)

a = 1,2,3

&

SU3(x)

a = 1,2,3,4,5,6,7,8

In general A/\A -> fbc^aA^b/\A^c

i.e. F^a = DA^a = dA^a + fbc^aA^b/\A^c

[A^b,A^c] = fbc^aA^a

In the special case G(x) -> U1(x) the field 2-form F = dA is actually invariant, but not so for SU2(x) & SU3(x)

If G(x) has the representation U(G(x)) then

A -> A' = UAU^-1 + dUU^-1

F --> F' = UFU^-1

Now for Einstein's GR G(x) -> T4(x)

and the induced A is NOT the spin 2 Christoffel symbol etc. but the non-trivial TETRAD set.

the internal index a is replaced by the Lorentz group index I (J,K etc).

The induced gravity spin 1 tetrad connection is A^I analog to A^a (Yang-Mills)

I = 0, 1, 2, 3

the relation to the spin 2 Christoffel symbol is very indirect and complicated.

And how do Chen and Zhu propose to derive a vacuum stress-energy *tensor* from LC^ if LC^ is not itself covariant? How can non-covariant LC^ be a solution to the GR energy problem?

Doesn't make sense.

On Fri, Jan 21, 2011 at 6:44 PM, JACK SARFATTI wrote:

On Jan 21, 2011, at 6:15 PM, Paul Zielinski wrote:

Yes you're right -- they start by saying that they are separating a coordinate dependent part LC_ from LC, leaving what they call the "physical part" LC^ that represents the true gravity field, but then on p8 they say that LC^ cannot transform covariantly under GCTs due to the way that LLTs are represented in gauge gravity.

So I think they've been led astray by their gauge gravity template.

No, I think it means that what you want to do cannot be done.

Note, that Arnowitt, Deser & Misner in 1962 have a solution, but it too is too limited in the end.

I think my approach is much simpler, much more direct -- you just remove the coordinate

correction terms from the LC connection, leaving a unique tensor residue that encodes

the intrinsic spacetime geometry. No need for perturbation expansions and so on -- exact

decomposition.

As I said your words are too vague. You just beat around the bush, as if saying what you want to do is the same as doing it.

Just ask Waldyr.

Physics uses math. It's not enough to have correct math if the math cannot be connected to laboratory techniques - that's the hard part.

You are doing magickal cargo cult thinking as if wishing makes it so - in my opinion.

So how do Chen and Zhu propose to build a covariant vacuum stress energy tensor from non-covariant

LC^? Aren't they just going around in circles? Isn't *that* magickal cargo cult thinking?

This brings us back to the chronic confusion between passive and active diffeomorphisms

in gauge gravity. I think this may be also the root of the confusion in this paper.

On Fri, Jan 21, 2011 at 4:35 PM, JACK SARFATTI wrote:

Z you are mistaken - you have not correctly read the text on p.8

transformations in eq. 6

On Jan 21, 2011, at 4:17 PM, JACK SARFATTI wrote:

On Jan 21, 2011, at 3:50 PM, Paul Zielinski wrote:

This is exactly the same decomposition I've been talking about for years, approached from the perspective of gauge gravity. And yes it points directly to a localized tensor gravitational vacuum stress-energy density, as I have always maintained. The "physical" part of the LC connection defined in this paper is just the tensor component of LC

Where do they say that? Copy and paste the exact text please. "Tensor" with respect to what group of frame transformations? With respect to the rigid Poincare group of the background Minkowski spacetime - no problem. Remember they do perturbation theory on a non-dynamical globally flat background.

guv = nuv + huv

huv << nuv

therefore no horizons (g00 -> 0) in this limit.

What the authors of this paper erroneously describe as the "pure geometric" part of the LC connection is the part that corrects only for *coordinate* artifacts (what I've been calling the "curved coordinate correction term"). This part is zero in a Minkowski spacetime in *rectilinear* coordinates, but not in *curvilinear* coordinates. This is the "gauge dependent" part of the LC connection field. There is no actual need for any perturbation expansion here -- the decomposition is exact as well as unique and can be arrived at without the use of any approximations (as I've explained). So the use of perturbation methods in this paper looks

like a quirk of the authors' gauge gravity mindset. I get the impression that while they are on the right track, Chen and Zhu have not yet fully understood the fundamental meaning of the LC decomposition, blinded as they are by the arcane mysteries of gauge theory. :-)

On Wed, Jan 19, 2011 at 8:38 PM, JACK SARFATTI wrote:

Unfortunately it's a perturbation series technique - it obviously cannot describe thermal horizons. Throws the baby out with the bath water, but it's a step in the right direction.

"Because of non-linearity, we have to rely again on perturbative method, and require that the gravitational field be at most moderately strong."

On Jan 19, 2011, at 7:39 PM, JACK SARFATTI wrote:

Yes, this is relevant thanks.

On Jan 19, 2011, at 7:35 PM, Jonathan Post wrote:

If we're trying to distinguish between gravitational effects and pseudo-effects that depend on coordinatizations, is this useful?

Cross-lists for Thu, 20 Jan 11

[73] arXiv:1006.3926 (cross-list from gr-qc) [pdf, ps, other]

Title: Physical decomposition of the gauge and gravitational fields

Authors: Xiang-Song Chen, Ben-Chao Zhu

Comments: 11 pages, no figure; significant revision, with discussion on relations of various metric decompositions

Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph)

Physical decomposition of the non-Abelian gauge field has recently solved the two-decade-lasting problem of a meaningful gluon spin. Here we extend this approach to gravity and attack the century-lasting problem of a meaningful gravitational energy. The metric is unambiguously separated into a pure geometric term which contributes null curvature tensor, and a physical term which represents the true gravitational effect and always vanishes in a flat space-time. By this decomposition the conventional pseudo-tensors of the gravitational stress-energy are easily rescued to produce definite physical result. Our decomposition applies to any symmetric tensor, and has interesting relation to the transverse-traceless (TT) decomposition discussed by Arnowitt, Deser and Misner, and by York.

Jan
19


Subject: Saudi Global Leaders Conference

On Jan 18, 2011, at 3:12 PM, Stanton T. Friedman wrote to me (Jack Sarfatti) and others:

"Arabia & UFOs. Jan. 12, 2011 Stanton T. Friedman Column

When I first heard of the comments from the Vatican Observatory Director, astronomer (Reverend) Jose Gabriel Funes, in May 2008, and then from Pope Benedict XVI, about how God could have made our brethren in space as well as us, I was quite surprised. I had years ago heard pro-UFO comments from Father Balducci, who worked at the Vatican. But the Pope? My first thought was: what did he know that we didn’t? He doesn’t need to answer any questions he doesn’t want to answer. Was he trying to prepare the world’s 1.13 billion Catholics for an upcoming announcement of the reality of alien visitation? Islam’s leaders and other groups already believe there is intelligent life out there. This was quite a contrast with the views of fundamentalist preachers Pat Robertson and the late Jerry Falwell, who have loudly proclaimed that Earthlings are the only intelligent life in the Universe and that the UFO stuff is a reflection of demonic activity. (Surely the Lord can do better than Earthlings?)

I was similarly surprised to be invited to be a participant in the fifth annual Global Competitiveness Forum to be held in Riyadh, Saudi Arabia, in January, 2011. I don’t know how my contact had heard of me. But it certainly took courage to have a panel on “Innovation and UFOs”. He seemed to be pleased that, in my response expressing interest, I noted that my mantra is that technological progress comes from doing things differently in an unpredictable way and that the future, technologically speaking, is not an extrapolation of the past. We have to change how we do things. Lasers aren’t just better light bulbs. The nuclear rockets I worked on are not just better chemical rockets. Computer chips aren’t just better vacuum tubes. In all these cases, entirely different physics is required for the new than was required for the old technique. Think computers versus slide rules.

Turns out this will be the 5th annual GCF. A major speaker at the first one was Bill Gates. Last year British Prime Minister Tony Blair was a speaker. He is also speaking this year. This year I will be one of three Canadians. The others are former Prime Minister Jean Chretien and Canadian Senator Marie Poulin. The objective was for me to be a member of a five-man panel on Innovations and UFOs. The other panelists are to be Nick Pope of England; Dr. Jacques Vallee, who, besides being an outstanding ufologist, is involved in selecting new business ventures for support by venture capitalists; Dr. Michio Kaku, an astrophysicist who has spoken enthusiastically about the possibility of aliens out there; and Dr. Zaghloul El Naggar, an Egyptian scholar and a member of the Supreme Council of Islamic Affairs.

The panel will last 75 minutes. The newest title is “Contact: Learning from Outer Space”. I intend to stress that aliens have taken innovative approaches to interstellar travel, travel in the atmosphere, and examination of earthlings. So far as I know I am the only one of the five who has strongly stated a conviction that SOME UFOs are of alien origin. The others indicate that something of interest is happening, but they are unwilling to admit to the conclusion that aliens are visiting as opposed to merely being out there. I have appeared on Michio’s radio show a couple of times. Nick and I have a bet that earthlings will first find out flying saucer reality from UFOLOGY (my view) as opposed to from the SETI movement (Silly Effort To Investigate - Nick’s view). Michio has certainly shown much more courage than most academic theoretical astrophysicists. He came across well on that sad Peter Jennings mockumentary “UFOs: Seeing is Believing” on ABC-TV on February 24, 2005, noting that physics may well permit interstellar travel. They cut my interview from an hour to 20 seconds and called me a promoter twice. They also didn’t bother to note that Dr. Jesse Marcel Jr. was a medical doctor, a flight surgeon, and a helicopter pilot, who was then serving (by government request, at age 68) in Iraq. He got in 225 combat hours as a helicopter pilot. One would have thought that these factors would have lent credibility to his comments, as would have my scientific background, also not mentioned. Of course Dr. Marcel had also handled wreckage from Roswell. Michio also appeared on a Larry King show which was reacting to Stephen Hawking’s strange comments about not letting aliens know we are here because of what happened to the natives who encountered Christopher Columbus. He also suggested that if aliens were visiting we would have heard from them. I have pointed out that we don’t know that there hasn’t been contact, and that Columbus after all didn’t send smoke signals.

Preliminary estimates are that there will be about 800 attendees per day for the three-day event and about 120 panelists and speakers. I was quite impressed with the positions held by various speakers. Corporate and government managers prevail. The Early Registration admissions fee is only $4000. Technically it is sponsored by the Saudi Arabian General Investment Authority. And it is a very long way. I must say it will certainly be warmer in Riyadh than in Fredericton in January. Interestingly, while there had been initial comments about signing my books, books will no longer be available, as the Forum will be an e-forum - a first for me and them, too. Supposedly I will be provided with an iPad.

Here is the actual description of my panel as given in the program which is on the web:

“Psychological and socio-cultural assumptions and preconceptions constrain us to a large extent, and shape our views of the universe so that we are inclined to find what we are looking for and to fail to see what we are not. Using knowledge gained from research in the fields of Ufology and the search for extraterrestrial life, what might we possibly learn about hindrances to innovation in other areas of inquiry? Subtitles: Innovation and anthropomorphism, ethnocentrism and ego. Falsification and the evidence of absence - what Giordano Bruno would say.”

The above could have been part of a preface for the 2010 book “Science Was Wrong: Startling Truths about Cures, Theories, and Inventions “They” Declared Impossible” by Kathleen Marden and myself.

I, and all the other panelists and speakers, were asked to submit a 250-word summary. Here is mine:

Summary for Riyadh Panel: Innovation and UFOs. Stanton T. Friedman, Nuclear Physicist, Canada

After 52 years of study and investigation I have concluded that the evidence is overwhelming that Earth is being visited by intelligently controlled Extraterrestrial spacecraft. In other words, SOME UFOs are alien vehicles. I am also convinced, after working for many major corporations as a nuclear physicist, that technological progress comes from doing things differently in an unpredictable way. The future is not an extrapolation of the past. Microcircuits are not just small vacuum tubes; lasers are not just better light bulbs, nuclear reactors are not just better chemical combustion chambers. All involve “new” physics. Reviewing the UFO evidence forces us to examine our assumptions as to how things work. Clearly chemical rockets cannot get us to the stars. Winged vehicles using propellers, jet engines, or rocket engines cannot provide silent high speed, highly maneuverable craft able to hover, make right angle turns and land and take off from sites not much larger than themselves.

Searching for new techniques, we find that nuclear fusion propulsion systems can, on their way to the stars, eject particles having 10 million times as much energy per particle as in a chemical rocket. Magnetoaerodynamic techniques seem able to control lift, drag, heating, and sonic boom production. Shortening the duration of the acceleration increases the G-loads people can withstand. We find that Mother Nature can be used to provide a lot of cosmic freeloading. Being smart is as important as being powerful.

Some of the other panel titles are “Innovation as a Means of Competitiveness”; “Innovation in Health Care”; “Rebooting Education”; “Capital: Wanted”; “Conscious Capitalism”; “Creativity at Work”; “New Skills for New Jobs”; “The Next Frontier”; and many more. A keynote speech by Michael Porter, Professor at Harvard Business School, will be “Competitiveness: a Strategic Perspective”. Another panel is “Building BRICS” (BRICS stands for Brazil, Russia, India, China, and South Africa). Among the who’s who to be present will be a senior Vice President of IBM, the Chief Pilot for Boeing, the president of Georgia Tech, the executive vice president and chief Innovation Officer for DuPont, the Chairman of Walt Disney International, and a whole host of other movers and shakers.

It seems ironic that very recently two different articles have appeared dealing with interstellar travel. One, in Technology week, came from Marc Millis, former head of NASA's Breakthrough Propulsion Physics Project and head of the Tau Zero Foundation, which supports the science of interstellar travel.

“By looking at the rate at which our top speed and financial clout are increasing and then extrapolating into the future, it’s possible to predict when such missions [to the stars] might be possible. The depressing answer in every study so far is that interstellar travel is centuries away.” And Dr. Campbell in 1941 said a rocket to get a man to the moon would weigh a million million tons. What is a factor of 300 million between friends?

Papers have also come out echoing the nonsense from the astronomers and SETI specialists presented at a meeting of the Royal Society in London last year: nobody can go anywhere. Some of these same guys were saying space travel was utter bilge back in 1956. The 16 papers have now been published in the February 11 issue of The Philosophical Transactions of the Royal Society A, No. 369, pp. 499-699. They seem to believe nobody out there got started before we did, despite our very short history compared to the age of the neighborhood. Stan Friedman, fsphys@rogers.com, www.stantonfriedman.com

Jan
18

Tagged in:

Memorandum for the Record

"Since they were introduced by Alcubierre [1], warp-drive spacetimes have been certainly one of the most studied solutions of the Einstein equations among those requiring exotic matter [2]. They are not only an exciting theoretical test for our comprehension of general relativity and quantum field theory in curved spacetimes, but they might also be, at least theoretically, a way to travel at superluminal speed. The warp drive consists of a bubble containing an almost flat region, moving at arbitrary speed within an asymptotically flat spacetime.

After the proposal of this solution, its most investigated aspect has been the amount of exotic matter (i.e. energy-conditions violating matter) required to support such a spacetime [3, 4, 5, 6]. It has been found, using the so called quantum inequalities (QI), that such a matter must be confined in Planck-size regions at the edges of the bubble. This bound on the wall thickness turns into lower limits on the amount of exotic matter required to support the bubble (at least of the order of 1 solar mass)."

Of course this is much too big in terms of http://www.darpa.mil/news/2010/starshipnewsrelease.pdf

The only hope is amplification of the effective gravity coupling G/c^4, which couples the curvature Guv to the applied stress-energy current densities Tuv, by at least 40 powers of ten over a small spatial "Yukawa" length, or in some frequency-wavevector region of the EM spectrum, with nonlinear transduction to DC EM fields with negative energy density (superconducting meta-material).

In such a case the required mass-energy drops from 10^33 gm to 10^-7 gm because, roughly

Guv(curvature) ~ (8piG_Newton n^4/c^4) Tuv(applied electromagnetic field), where n is the index of refraction.
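A back-of-the-envelope sketch (my own arithmetic, assuming the n^4 scaling of the effective coupling in the relation above) shows how the quantum-inequality bound of roughly one solar mass shrinks to the quoted 10^-7 gm:

```python
# Order-of-magnitude sketch (not from the original email): if the effective
# coupling scales as G_eff = n^4 * G_Newton, the exotic mass-energy needed
# to support the warp bubble scales DOWN by the same n^4 factor.

M_SUN_GRAMS = 2e33  # quantum-inequality lower bound: ~1 solar mass of exotic matter

def required_mass_grams(n_index: float) -> float:
    """Exotic mass-energy needed if the curvature response is boosted by n^4."""
    return M_SUN_GRAMS / n_index**4

# "40 powers of ten" amplification corresponds to n ~ 10^10:
print(required_mass_grams(1e10))  # ~2e-7 g, matching the 10^-7 gm figure above
```

The single number here is just the quoted 10^33 gm divided by the 40-powers-of-ten amplification; the function makes the scaling explicit.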

"Less effort has been devoted to other important issues regarding the feasibility of these spacetimes: the study of the warp-drive semiclassical stability. In particular, it was studied in [7] the case of an eternal superluminal warp drive by discussing its stability against quantum effects. It was there noticed that, to an observer within the warp-drive bubble, the backward and forward walls (along the direction of motion) look respectively as the future and past event horizon of a black hole. By imposing over the spacetime a quantum state which is vacuum at the null infinities (i.e. what one may call the analogue of the Boulware state for an eternal black hole) it was found that the renormalized stress-energy tensor (RSET) had to diverge on the horizons.

In this contribution we consider the more realistic case of a warp drive which is created with zero velocity at early times and then accelerated up to some superluminal speed in a finite time (a more detailed treatment can be found in [8]). We found, as expected, that in the centre of the bubble there is a thermal flux at the Hawking temperature corresponding to the surface gravity of the black horizon. However, this surface gravity is inversely proportional to the wall thickness, leading to a temperature of the order of the Planck temperature, for Planck-size walls. Even worse, we do show that the RSET does increase exponentially with time on the white horizon (while it is regular on the black one). This clearly implies that a warp drive becomes rapidly unstable once superluminal speeds are reached."

The effective Planck length is increased by a factor of ~ 10^20.

Indeed, for practical micron-tech fabrication we want not 10^-13 cm but 10^-3 cm, i.e. another factor of 10^20 in G. Ideally we want 10^60 G, i.e. an index of refraction ~ 10^15, slowing the speed of light down to ~ 10^-5 cm/sec. This should improve the situation.

The Hawking temperature is reduced by a whopping factor of 10^30 in this Panglossian best of all possible worlds scenario - and I suspect that the time it takes for the instability to develop will also be very long, so as not to be a problem.
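To see where the 10^30 comes from, here is a rough CGS sketch (my own numbers) using the quoted paper's scaling - surface gravity kappa ~ c^2 over the wall thickness - in the standard Hawking formula T = hbar*kappa/(2*pi*c*kB): widening the wall from the Planck length to 10^-3 cm cuts the temperature by about 30 powers of ten.

```python
import math

# Order-of-magnitude sketch in CGS units (my illustration): the quoted paper
# has surface gravity kappa ~ c^2/Delta for wall thickness Delta, so the
# Hawking temperature T = hbar*kappa/(2*pi*c*kB) scales as 1/Delta.

HBAR = 1.05e-27  # erg s
C = 3.0e10       # cm/s
KB = 1.38e-16    # erg/K

def hawking_temp_kelvin(wall_thickness_cm: float) -> float:
    """Hawking temperature for a horizon with surface gravity ~ c^2 / Delta."""
    kappa = C**2 / wall_thickness_cm
    return HBAR * kappa / (2 * math.pi * C * KB)

t_planck_wall = hawking_temp_kelvin(1.6e-33)  # Planck-size wall: ~10^31 K
t_micron_wall = hawking_temp_kelvin(1e-3)     # 10^-3 cm wall: tens of K
print(t_planck_wall / t_micron_wall)          # ~10^30, the factor quoted above
```

The ratio is just the ratio of wall thicknesses, 10^-3/10^-33, which is the whopping factor in question.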

All is not yet lost in my opinion.

On Jan 17, 2011, at 5:33 PM, JACK SARFATTI wrote:

http://iopscience.iop.org/1742-6596/229/1/012018/pdf/1742-6596_229_1_012018.pdf

not the final word, but important to know (semi-classical model)

1) Our 2D future event horizon is a hologram screen computer as shown by Seth Lloyd

http://www.ar-tiste.com/qcomp_onion/jan2002/UltimateLaptop.htm

2) The ~ 10^123 BITS on our once and future event horizon is precisely David Bohm's IMPLICATE ORDER and Wheeler's BIT.

http://en.wikipedia.org/wiki/Implicate_and_explicate_order_according_to_David_Bohm

3) We are Wheeler's Delayed Choice Post-Selected IT Back From the Future 3D hologram images - all the material world hidden variables are that. This is Bohm's EXPLICATE ORDER.

IT FROM BIT ~ EXPLICATE FROM IMPLICATE ~ BACK FROM THE FUTURE as in

http://discovermagazine.com/2010/apr/01-back-from-the-future

4) Now Nick Herbert has a point that gets us to Leibniz's monads

http://en.wikipedia.org/wiki/Gottfried_Leibniz

"The monads

Leibniz's best known contribution to metaphysics is his theory of monads, as exposited in Monadologie. Monads are to the metaphysical realm what atoms are to the physical/phenomenal.[citation needed] They can also be compared to the corpuscles of the Mechanical Philosophy of René Descartes and others. Monads are the ultimate elements of the universe. The monads are "substantial forms of being" with the following properties: they are eternal, indecomposable, individual, subject to their own laws, un-interacting, and each reflecting the entire universe in a pre-established harmony (a historically important example of panpsychism). Monads are centers of force; substance is force, while space, matter, and motion are merely phenomenal.

The ontological essence of a monad is its irreducible simplicity. Unlike atoms, monads possess no material or spatial character. They also differ from atoms by their complete mutual independence, so that interactions among monads are only apparent. Instead, by virtue of the principle of pre-established harmony, each monad follows a preprogrammed set of "instructions" peculiar to itself, so that a monad "knows" what to do at each moment. (These "instructions" may be seen as analogs of the scientific laws governing subatomic particles.) By virtue of these intrinsic instructions, each monad is like a little mirror of the universe. Monads need not be "small"; e.g., each human being constitutes a monad, in which case free will is problematic. God, too, is a monad, and the existence of God can be inferred from the harmony prevailing among all other monads; God wills the pre-established harmony.

Monads are purported to have gotten rid of the problematic:

Interaction between mind and matter arising in the system of Descartes;

Lack of individuation inherent to the system of Spinoza, which represents individual creatures as merely accidental."

Here is a post-modern monad (loosely speaking): an entire observer-centered observable universe in the multiverse, limited by the finite speed of light.

From Tamara Davis http://www.physics.uq.edu.au/download/tamarad/ hopefully the floods down under have not knocked out her web-server.

Tamara's server is down! (New South Wales)

5) The proper time of a photon is always TAU ZERO!

http://en.wikipedia.org/wiki/Tau_Zero

The future and past horizons are always connected by Tau Zero world lines into a Bohmian holographic implicate-explicate UNDIVIDED WHOLE so to speak.

This is why we can use the Bekenstein-Unruh-Hawking mechanisms on our observer-dependent horizons that are NOT optical horizons.

I will clarify this in future epistles - but the Bell is ringing and I am beginning to salivate - dear, I hope it's not the full moon. ;-)

*5.5 - Merrily Rings the Luncheon Bell*

John Shea: One more oar to put in about Ida: I used to think "Merrily Rings the Luncheon Bell" a rather clunky bore, until I sat in on a rehearsal for our Savoy-aires' performance 15 years ago, and I finally heard what Sullivan was doing. This is a group of girls on an outing, and the music has an unmistakable sound of breaking free of the classroom, rejoicing in the great outdoors, but treating the experience with pedantic schoolgirl earnestness ("feast we body and mind as well"). I think it is quite perfect!

Harriet Meyer: Tennyson has "But hark the bell/For dinner, let us go!" and I suppose Gilbert changed it to "luncheon" to be funny. The song has always reminded me (in the wrong order chronologically) of Virginia Woolf's description of a meal at a university where she has wandered the grounds in her feminist essay "A Room of One's Own."

http://math.boisestate.edu/gas/princess_ida/discussion/pi5-2.html#5.5

Begin forwarded message:

From: JACK SARFATTI <sarfatti@pacbell.net>

Date: January 15, 2011 12:32:09 PM PST

To: Nick Herbert <quanta@cruzio.com>

Subject: Nick has confused past optical horizons with our future dark energy event horizon.

Nick you are confused about the meaning of "event horizon" - that is not our future event horizon.

Nick your argument is not even wrong because you are confounding past OPTICAL horizons with our FUTURE event horizon.

"event horizon" means gtt = 0

The past optical horizon you confound it with has nothing to do with gtt = 0 - the past optical horizon is due only to the finite speed of light, not to a property of the representation of the ds^2 metric field relative to a class of arbitrarily chosen detectors.
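A minimal numeric sketch (my own illustration, using the Schwarzschild case with units rs = 1) of why gtt = 0 marks a true horizon for static detectors: the proper acceleration of a static (LNIF) observer is the Newtonian value boosted by the blue-shift factor 1/sqrt(gtt), so it diverges as the horizon is approached - no such divergence attaches to a past optical horizon.

```python
import math

# Schwarzschild sketch (my illustration): gtt = 1 - rs/r vanishes at r = rs,
# and a static observer's proper acceleration is the Newtonian acceleration
# divided by sqrt(gtt), diverging as r -> rs. Units: rs = 1, so GM = 1/2.

def static_acceleration(r_over_rs: float) -> float:
    """Proper acceleration of a static (LNIF) observer at radius r (units of rs)."""
    gtt = 1.0 - 1.0 / r_over_rs
    g_newton = 0.5 / r_over_rs**2  # GM/r^2 with GM = rs/2 = 1/2
    return g_newton / math.sqrt(gtt)

for r in (10.0, 2.0, 1.01, 1.0001):
    print(r, static_acceleration(r))  # grows without bound as r -> 1
```

This is just g(r) = (Newton's acceleration) gtt^-1/2 from the earlier email, evaluated numerically.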

http://www.youtube.com/watch?v=09tPuCazzTo

In a contemplative fashion ...


Jan
15

Tagged in:

In orthodox quantum theory, any influence one polarizer has on the other is masked by the statistical averaging one must do over the distant polarizer measurements in the ensemble - so all we see locally is random quantum noise. No image, no conscious experience, no remote viewing etc. is possible in quantum theory, only unperceived static. This is called "passion at a distance" (aka "signal locality") by Abner Shimony. Stapp gave a simple proof based on the linear unitary time evolution of the quantum waves etc.
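A toy Monte Carlo (my own illustration, not Stapp's actual operator proof) makes the averaging argument concrete: for polarization-entangled pairs the joint statistics depend on both polarizer angles, but the marginal statistics at the distant detector are 50/50 no matter how the near polarizer is set, so nothing but noise is readable locally.

```python
import math
import random

# Toy no-signaling demonstration (my sketch) for the entangled state
# (|HH> + |VV>)/sqrt(2): joint outcome probabilities depend on the angle
# difference a - b, but B's marginal is 1/2 regardless of A's setting.

def sample_pair(a: float, b: float, rng: random.Random):
    """Sample (outcome_A, outcome_B) in {+1, -1} for polarizer angles a, b."""
    p_same = math.cos(a - b) ** 2 / 2  # P(++) = P(--)
    u = rng.random()
    if u < p_same:
        return (+1, +1)
    if u < 2 * p_same:
        return (-1, -1)
    # remaining probability sin^2(a-b) split evenly between (+,-) and (-,+)
    return (+1, -1) if rng.random() < 0.5 else (-1, +1)

def b_plus_fraction(a: float, b: float, n: int = 200_000) -> float:
    """Fraction of +1 outcomes at B, averaging over the ensemble."""
    rng = random.Random(42)
    return sum(sample_pair(a, b, rng)[1] == +1 for _ in range(n)) / n

# B's local statistics are ~0.5 whether A's polarizer is at 0 or rotated:
print(b_plus_fraction(0.0, 0.3), b_plus_fraction(1.2, 0.3))
```

Both printed fractions hover around 0.5: turning A's polarizer changes the correlations (visible only when the two data streams are compared), never B's local counting statistics.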

We need a new theory beyond quantum theory that contains it as a limit. Bernard Carr describes this in his review article a few years ago.

"Can Psychical Research Bridge the Gulf Between Matter and Mind?" Bernard Carr Proceedings of the Society for Psychical Research, Vol 59 Part 221 June 2008

One such model is here

Subquantum Information and Computation

Antony Valentini

(Submitted on 11 Mar 2002 (v1), last revised 12 Apr 2002 (this version, v2))

It is argued that immense physical resources - for nonlocal communication, espionage, and exponentially-fast computation - are hidden from us by quantum noise, and that this noise is not fundamental but merely a property of an equilibrium state in which the universe happens to be at the present time. It is suggested that 'non-quantum' or nonequilibrium matter might exist today in the form of relic particles from the early universe. We describe how such matter could be detected and put to practical use. Nonequilibrium matter could be used to send instantaneous signals, to violate the uncertainty principle, to distinguish non-orthogonal quantum states without disturbing them, to eavesdrop on quantum key distribution, and to outpace quantum computation (solving NP-complete problems in polynomial time).

http://arxiv.org/abs/quant-ph/0203049

On Jan 15, 2011, at 12:31 PM, Russell Targ wrote:

*Dear Jack,*

I am writing a book about remote viewing and psi in general, for people who don't believe in ESP.

It is called "Questioning Reality." And it deals with some of the ways in which we misapprehend the phenomenal world.

Below I will spell out the physics problem I am puzzling about.

I say, "Nonlocality and entanglement are now among the hottest research topics in modern physics. This intriguing phenomenon is explained very clearly by Anton Zeilinger, one of the world’s leading experimentalists in quantum optics, in his 2010 book Dance of the Photons: From Einstein to Teleportation. Prof. Zeilinger writes, 'Entanglement describes the phenomenon that two particles may be so intimately connected to each other that the measurement of one instantly changes the quantum state of the other, no matter how far away it may be.... This nonlocality is exactly what Albert Einstein called ‘spooky;’ it seems eerie that the act of measuring one particle could instantly influence the other one.' [emphasis mine]

My question is: Does turning a polarizer that measures one photon, actually affect or change the other distant entangled photon?

Yes, if you believe the Bohm interpretation. But the effect averages out to noise in the ensemble average for local measurements.

*I am aware that you cannot send a message with an EPR setup. I have now read several books on quantum optics. And I observe that they either equivocate, or agree with Zeilinger. *

That's because different interpretations give different answers - but it's all moot until one can test the theory by going beyond it with signal nonlocality.

*But some prominent physicists say that turning the polarizer at A does not affect the measurement at B.*

My question is about causality. What's your opinion please?

Causality is up for grabs now that Yakir Aharonov's theory got a medal from Obama in the White House. Even though Yakir's theory has back-from-the-future destiny influence, it is still within the confines of signal locality, and therefore not able to explain our ordinary consciousness, or remote viewing - in my opinion.

*Many thanks for your help.*

Russell
