
Stardrive


Imagine a sender in spacetime region x and a receiver in spacetime region y. According to Feynman, the partial retarded quantum amplitude A(x,y) for the signal to leave x in the past and arrive at y in the future along a definite path P is
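A_P(x,y) ~ exp(iS[P]/hbar), where S[P] = ∫L dt is the classical action evaluated along P (a sketch in the standard Feynman form; the overall normalization is assumed).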

The integrand L in the phase of the exponential is the classical Lagrangian, which in the simplest toy model of a single point particle is L = kinetic energy - potential energy, with only conservative, non-dissipative, velocity-independent forces. The infinitesimal segments of the signal's path P are arbitrary, even superluminal outside the local light cone. However, when one sums over the set of all possible paths {P} with the same end points x and y, assuming there is no way to distinguish the paths with intermediate measurements, the total amplitude connecting x and y is the coherent superposition of all the partial amplitudes
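A(x,y) = sum over {P} of exp(iS[P]/hbar), which in the continuum limit becomes the Feynman path integral (again a sketch; normalization assumed).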

There will be at least one special path Pc along which the phases of the summed exponentials constructively interfere, analogous to the maxima in an optical interference pattern; this is the classical path predicted, say, by Newtonian mechanics in the simplest toy model where special relativity's time dilation and length contraction are negligible.

We now also assume that the Lagrangian L is invariant under time reversal t -> -t = t' keeping x and y fixed. Therefore, the advanced time-reversed amplitude A(y,x) summed over the time-reversed paths P' is simply the complex conjugate of the original amplitude. This advanced amplitude is literally back from the future receiver to the past emitter.

The Born probability conjecture is that the conditional probability P(x,y) for the signal to leave x in the past and arrive at y in the future is the modulation of the retarded total amplitude by the advanced total amplitude.
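P(x,y) = A(y,x)A(x,y) = |A(x,y)|^2 = sum over P, sum over P' of exp(i(S[P] - S[P'])/hbar), using A(y,x) = A(x,y)* from the time-reversal assumption above (a sketch of the standard Born-rule expansion).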


There are two kinds of terms in the double sum: diagonal terms with P = P', corresponding to closed time loops with zero area, and off-diagonal interference terms with P =/= P', corresponding to closed time loops with finite area. It is the latter, finite-area time loops that are specifically quantum.

"In quantum mechanics and quantum field theory, the propagator gives the probability amplitude for a particle to travel from one place to another in a given time, or to travel with a certain energy and momentum. Propagators are used to represent the contribution of virtual particles on the internal lines of Feynman diagrams. They also can be viewed as the inverse of the wave operator appropriate to the particle, and are therefore often called Green's functions. ...

Faster than light?
The Feynman propagator has some properties that seem baffling at first. In particular, unlike the commutator, the propagator is nonzero outside of the light cone, though it falls off rapidly for spacelike intervals. Interpreted as an amplitude for particle motion, this translates to the virtual particle traveling faster than light. It is not immediately obvious how this can be reconciled with causality: can we use faster-than-light virtual particles to send faster-than-light messages?


The answer is no: while in classical mechanics the intervals along which particles and causal effects can travel are the same, this is no longer true in quantum field theory, where it is commutators that determine which operators can affect one another.


So what does the spacelike part of the propagator represent? In QFT the vacuum is an active participant, and particle numbers and field values are related by an uncertainty principle; field values are uncertain even for particle number zero. There is a nonzero probability amplitude to find a significant fluctuation in the vacuum value of the field Φ(x) if one measures it locally (or, to be more precise, if one measures an operator obtained by averaging the field over a small region). Furthermore, the dynamics of the fields tend to favor spatially correlated fluctuations to some extent. The nonzero time-ordered product for spacelike-separated fields then just measures the amplitude for a nonlocal correlation in these vacuum fluctuations, analogous to an EPR correlation. Indeed, the propagator is often called a two-point correlation function for the free field.


Since, by the postulates of quantum field theory, all observable operators commute with each other at spacelike separation, messages can no more be sent through these correlations than they can through any other EPR correlations; the correlations are in random variables.


In terms of virtual particles, the propagator at spacelike separation can be thought of as a means of calculating the amplitude for creating a virtual particle-antiparticle pair that eventually disappear into the vacuum, or for detecting a virtual pair emerging from the vacuum. In Feynman's language, such creation and annihilation processes are equivalent to a virtual particle wandering backward and forward through time, which can take it outside of the light cone. However, no causality violation is involved.

Propagators in Feynman diagrams
The most common use of the propagator is in calculating probability amplitudes for particle interactions using Feynman diagrams. These calculations are usually carried out in momentum space. In general, the amplitude gets a factor of the propagator for every internal line, that is, every line that does not represent an incoming or outgoing particle in the initial or final state. It will also get a factor proportional to, and similar in form to, an interaction term in the theory's Lagrangian for every internal vertex where lines meet. These prescriptions are known as Feynman rules.


Internal lines correspond to virtual particles. Since the propagator does not vanish for combinations of energy and momentum disallowed by the classical equations of motion, we say that the virtual particles are allowed to be off shell. In fact, since the propagator is obtained by inverting the wave equation, in general it will have singularities on shell.
The energy carried by the particle in the propagator can even be negative. This can be interpreted simply as the case in which, instead of a particle going one way, its antiparticle is going the other way, and therefore carrying an opposing flow of positive energy. The propagator encompasses both possibilities. It does mean that one has to be careful about minus signs for the case of fermions, whose propagators are not even functions in the energy and momentum (see below).


Virtual particles conserve energy and momentum. However, since they can be off shell, wherever the diagram contains a closed loop, the energies and momenta of the virtual particles participating in the loop will be partly unconstrained, since a change in a quantity for one particle in the loop can be balanced by an equal and opposite change in another. Therefore, every loop in a Feynman diagram requires an integral over a continuum of possible energies and momenta. In general, these integrals of products of propagators can diverge, a situation that must be handled by the process of renormalization." Wikipedia
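As an illustration of the pole structure just described - my own sketch, not part of the quoted article - here is a minimal Python calculation of the free scalar Feynman propagator in momentum space, G(p) = 1/(p^2 - m^2 + i*eps). The mass, the spatial momentum, and the small eps are arbitrary choices for display; the point is that |G| blows up as the energy approaches the on-shell value.

import math

# Free scalar Feynman propagator in momentum space, G(p) = 1/(p^2 - m^2 + i*eps),
# with p^2 = E^2 - |p|^2 in the (+,-,-,-) metric.  Illustrative sketch only.
def feynman_propagator(E, p, m, eps=1e-3):
    return 1.0 / (E**2 - p**2 - m**2 + 1j * eps)

m, p = 1.0, 0.5
E_pole = math.sqrt(p**2 + m**2)           # on-shell energy: location of the pole
for E in (0.5 * E_pole, 0.99 * E_pole, 1.0 * E_pole, 1.5 * E_pole):
    print(f"E = {E:6.3f}   |G| = {abs(feynman_propagator(E, p, m)):10.2f}")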

"In quantum field theory, the statistical mechanics of fields, and the theory of self-similar geometric structures, renormalization refers to a collection of techniques used to take a continuum limit.


When describing space and time as a continuum, certain statistical and quantum mechanical constructions are ill defined. To define them, the continuum limit has to be taken carefully.
Renormalization determines the relationship between parameters in the theory, when the parameters describing large distance scales differ from the parameters describing small distances. Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory. Initially viewed as a suspicious, provisional procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in several fields of physics and mathematics. ...

 The total effective mass of a spherical charged particle includes the actual bare mass of the spherical shell (in addition to the aforementioned mass associated with its electric field). If the shell's bare mass is allowed to be negative, it might be possible to take a consistent point limit.[1] This was called renormalization, and Lorentz and Abraham attempted to develop a classical theory of the electron this way. This early work was the inspiration for later attempts at regularization and renormalization in quantum field theory.


When calculating the electromagnetic interactions of charged particles, it is tempting to ignore the back-reaction of a particle's own field on itself. But this back reaction is necessary to explain the friction on charged particles when they emit radiation. If the electron is assumed to be a point, the value of the back-reaction diverges, for the same reason that the mass diverges, because the field is inverse-square.


The Abraham-Lorentz theory had a noncausal "pre-acceleration". Sometimes an electron would start moving before the force is applied. This is a sign that the point limit is inconsistent. An extended body will start moving when a force is applied within one radius of the center of mass.


The trouble was worse in classical field theory than in quantum field theory, because in quantum field theory a charged particle experiences Zitterbewegung due to interference with virtual particle-antiparticle pairs, thus effectively smearing out the charge over a region comparable to the Compton wavelength. In quantum electrodynamics at small coupling the electromagnetic mass only diverges as the log of the radius of the particle. ...

The divergences appear in calculations involving Feynman diagrams with closed loops of virtual particles in them.


While virtual particles obey conservation of energy and momentum, they can have any energy and momentum, even one that is not allowed by the relativistic energy-momentum relation for the observed mass of that particle. (That is, E^2 − p^2 is not necessarily the squared rest mass of the particle in that process (e.g. for a photon it could be nonzero).) Such a particle is called off-shell. When there is a loop, the momentum of the particles involved in the loop is not uniquely determined by the energies and momenta of incoming and outgoing particles. A variation in the energy of one particle in the loop can be balanced by an equal and opposite variation in the energy of another particle in the loop. So to find the amplitude for the loop process one must integrate over all possible combinations of energy and momentum that could travel around the loop.


These integrals are often divergent, that is, they give infinite answers. The divergences which are significant are the "ultraviolet" (UV) ones. An ultraviolet divergence can be described as one which comes from: the region in the integral where all particles in the loop have large energies and momenta; very short wavelength, high-frequency fluctuations of the fields, in the path integral for the field; very short proper time between particle emission and absorption, if the loop is thought of as a sum over particle paths. So these divergences are short-distance, short-time phenomena.
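[A small illustration of my own, not part of the quoted text: a toy logarithmically divergent Euclidean one-loop integral, ∫ d^4k/(2pi)^4 1/(k^2 + m^2)^2 with a momentum cutoff, evaluated from its closed form. The stand-in integrand and m = 1 are assumptions; the point is the ~log(cutoff) growth, exactly the short-distance divergence being described.

import math

# I(cutoff) = int d^4k/(2*pi)^4  1/(k^2 + m^2)^2  for |k| < cutoff
#           = (1/(16*pi^2)) * [ ln(1 + cutoff^2/m^2) - cutoff^2/(cutoff^2 + m^2) ]
def loop_integral(cutoff, m=1.0):
    x = cutoff**2 / m**2
    return (math.log(1.0 + x) - x / (1.0 + x)) / (16.0 * math.pi**2)

for cutoff in (1e1, 1e2, 1e3, 1e4):
    leading_log = math.log(cutoff**2) / (16.0 * math.pi**2)   # the divergent piece as cutoff -> infinity
    print(f"cutoff = {cutoff:8.0f}   I = {loop_integral(cutoff):.5f}   leading log = {leading_log:.5f}")

The difference between I and the leading log tends to a finite constant, i.e. the divergence is purely logarithmic.]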

There are exactly three one-loop divergent loop diagrams in quantum electrodynamics.


1) a photon creates a virtual electron-positron pair which then annihilates; this is a vacuum polarization diagram.


2) an electron which quickly emits and reabsorbs a virtual photon, called a self-energy.


3) An electron emits a photon, emits a second photon, and reabsorbs the first; this is called a vertex renormalization.


The three divergences correspond to the three parameters in the theory:


1') the field normalization Z.


2') the mass renormalization of the electron.


3') the charge renormalization of the electron.


A second class of divergence, called an infrared divergence, is due to massless particles, like the photon. Every process involving charged particles emits infinitely many coherent photons of infinite wavelength, and the amplitude for emitting any finite number of photons is zero. For photons, these divergences are well understood. For example, at the 1-loop order, the vertex function has both ultraviolet and infrared divergences. In contrast to the ultraviolet divergence, the infrared divergence does not require the renormalization of a parameter in the theory. The infrared divergence of the vertex diagram is removed by including a diagram similar to the vertex diagram with the following important difference: the photon connecting the two legs of the electron is cut and replaced by two on shell (i.e. real) photons whose wavelengths tend to infinity; this diagram is equivalent to the bremsstrahlung process. This additional diagram must be included because there is no physical way to distinguish a zero-energy photon flowing through a loop as in the vertex diagram and zero-energy photons emitted through bremsstrahlung. ...

The solution was to realize that the quantities initially appearing in the theory's formulae (such as the formula for the Lagrangian), representing such things as the electron's electric charge and mass, as well as the normalizations of the quantum fields themselves, did not actually correspond to the physical constants measured in the laboratory. As written, they were bare quantities that did not take into account the contribution of virtual-particle loop effects to the physical constants themselves. Among other things, these effects would include the quantum counterpart of the electromagnetic back-reaction that so vexed classical theorists of electromagnetism. In general, these effects would be just as divergent as the amplitudes under study in the first place; so finite measured quantities would in general imply divergent bare quantities.


In order to make contact with reality, then, the formulae would have to be rewritten in terms of measurable, renormalized quantities. The charge of the electron, say, would be defined in terms of a quantity measured at a specific kinematic renormalization point or subtraction point (which will generally have a characteristic energy, called the renormalization scale or simply the energy scale). The parts of the Lagrangian left over, involving the remaining portions of the bare quantities, could then be reinterpreted as counterterms, involved in divergent diagrams exactly canceling out the troublesome divergences for other diagrams. ...

Gauge invariance, via a Ward-Takahashi identity, turns out to imply that we can renormalize the two terms of the electromagnetic covariant derivative ...

The physical constant e, the electron's charge, can then be defined in terms of some specific experiment; we set the renormalization scale equal to the energy characteristic of this experiment, and the first term gives the interaction we see in the laboratory (up to small, finite corrections from loop diagrams, providing such exotica as the high-order corrections to the magnetic moment). The rest is the counterterm. If we are lucky, the divergent parts of loop diagrams can all be decomposed into pieces with three or fewer legs, with an algebraic form that can be canceled out by the second term (or by the similar counterterms that come from Z0 and Z3). In QED, we are lucky: the theory is renormalizable ...

The splitting of the "bare terms" into the original terms and counterterms came before the renormalization group insights due to Kenneth Wilson. According to the renormalization group insights, this splitting is unnatural and unphysical.


Running constants
To minimize the contribution of loop diagrams to a given calculation (and therefore make it easier to extract results), one chooses a renormalization point close to the energies and momenta actually exchanged in the interaction. However, the renormalization point is not itself a physical quantity: the physical predictions of the theory, calculated to all orders, should in principle be independent of the choice of renormalization point, as long as it is within the domain of application of the theory. Changes in renormalization scale will simply affect how much of a result comes from Feynman diagrams without loops, and how much comes from the leftover finite parts of loop diagrams. One can exploit this fact to calculate the effective variation of physical constants with changes in scale. This variation is encoded by beta-functions, and the general theory of this kind of scale-dependence is known as the renormalization group.


Colloquially, particle physicists often speak of certain physical constants as varying with the energy of an interaction, though in fact it is the renormalization scale that is the independent quantity. This running does, however, provide a convenient means of describing changes in the behavior of a field theory under changes in the energies involved in an interaction. For example, since the coupling constant in non-Abelian SU3 quantum chromodynamics becomes small at large energy scales, the theory behaves more like a free theory as the energy exchanged in an interaction becomes large, a phenomenon known as asymptotic freedom. Choosing an increasing energy scale and using the renormalization group makes this clear from simple Feynman diagrams; were this not done, the prediction would be the same, but would arise from complicated high-order cancellations. ...
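[A minimal numerical sketch of this running - my own illustration, with standard one-loop formulas and rough input values chosen as assumptions: the QED coupling grows slowly with scale, while the QCD coupling shrinks (asymptotic freedom).

import math

# One-loop running couplings (sketch only; thresholds and higher loops ignored).
def alpha_qed(Q, mu=0.000511, alpha_mu=1/137.036):
    # single charged lepton in the vacuum-polarization loop
    return alpha_mu / (1.0 - (2.0 * alpha_mu / (3.0 * math.pi)) * math.log(Q / mu))

def alpha_qcd(Q, mu=91.2, alpha_mu=0.118, nf=5):
    b0 = 11.0 - 2.0 * nf / 3.0          # positive -> coupling decreases with energy
    return alpha_mu / (1.0 + (b0 * alpha_mu / (2.0 * math.pi)) * math.log(Q / mu))

for Q in (2.0, 10.0, 91.2, 1000.0):      # GeV
    print(f"Q = {Q:7.1f} GeV   alpha_QED ~ 1/{1/alpha_qed(Q):6.1f}   alpha_s ~ {alpha_qcd(Q):.3f}")
]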

Since the quantity "infinity - infinity"  is ill-defined, in order to make this notion of canceling divergences precise, the divergences first have to be tamed mathematically using the theory of limits, in a process known as regularization.


An essentially arbitrary modification to the loop integrands, or regulator, can make them drop off faster at high energies and momenta, in such a manner that the integrals converge. A regulator has a characteristic energy scale known as the cutoff; taking this cutoff to infinity (or, equivalently, the corresponding length/time scale to zero) recovers the original integrals.
With the regulator in place, and a finite value for the cutoff, divergent terms in the integrals then turn into finite but cutoff-dependent terms. After canceling out these terms with the contributions from cutoff-dependent counterterms, the cutoff is taken to infinity and finite physical results recovered. If physics on scales we can measure is independent of what happens at the very shortest distance and time scales, then it should be possible to get cutoff-independent results for calculations. ...
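[Schematically - a sketch of my own, not from the quoted text - a logarithmically divergent amplitude regularized with cutoff L looks like

Gamma_reg(p; L) = c ln(L^2/p^2) + finite(p)

and the corresponding counterterm, defined at the renormalization scale mu, is

delta_ct(L, mu) = - c ln(L^2/mu^2)

so that the sum, c ln(mu^2/p^2) + finite(p), stays finite as L -> infinity: the cutoff dependence is traded for dependence on the renormalization scale mu.]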

Attitudes and interpretation

The early formulators of QED and other quantum field theories were, as a rule, dissatisfied with this state of affairs. It seemed illegitimate to do something tantamount to subtracting infinities from infinities to get finite answers.


Dirac's criticism was the most persistent. As late as 1975, he was saying:

Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation, because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small - not neglecting it just because it is infinitely great and you do not want it!


Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote the following in 1985:

The shell game that we play ... is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.


While Dirac's criticism was based on the procedure of renormalization itself, Feynman's criticism was very different. Feynman was concerned that all field theories known in the 1960s had the property that the interactions become infinitely strong at short enough distance scales. This property, called a Landau pole, made it plausible that quantum field theories were all inconsistent. In 1974, Gross, Politzer and Wilczek showed that another quantum field theory, Quantum Chromodynamics, does not have a Landau pole. Feynman, along with most others, accepted that QCD was a fully consistent theory.

The general unease was almost universal in texts up to the 1970s and 1980s. Beginning in the 1970s, however, inspired by work on the renormalization group and effective field theory, and despite the fact that Dirac and various others—all of whom belonged to the older generation—never withdrew their criticisms, attitudes began to change, especially among younger theorists. Kenneth G. Wilson and others demonstrated that the renormalization group is useful in statistical field theory applied to condensed matter physics, where it provides important insights into the behavior of phase transitions. In condensed matter physics, a real short-distance regulator exists: matter ceases to be continuous on the scale of atoms. Short-distance divergences in condensed matter physics do not present a philosophical problem, since the field theory is only an effective, smoothed-out representation of the behavior of matter anyway; there are no infinities since the cutoff is actually always finite, and it makes perfect sense that the bare quantities are cutoff-dependent.


If QFT holds all the way down past the Planck length (where it might yield to string theory, causal set theory or something different), then there may be no real problem with short-distance divergences in particle physics either; all field theories could simply be effective field theories. In a sense, this approach echoes the older attitude that the divergences in QFT speak of human ignorance about the workings of nature, but also acknowledges that this ignorance can be quantified and that the resulting effective theories remain useful.


In QFT, the value of a physical constant, in general, depends on the scale that one chooses as the renormalization point, and it becomes very interesting to examine the renormalization group running of physical constants under changes in the energy scale. The coupling constants in the Standard Model of particle physics vary in different ways with increasing energy scale: the coupling of quantum chromodynamics and the weak isospin coupling of the electroweak force tend to decrease, and the weak hypercharge coupling of the electroweak force tends to increase. At the colossal energy scale of 10^15 GeV (far beyond the reach of our current particle accelerators), they all become approximately the same size (Grotz and Klapdor 1990, p. 254), a major motivation for speculations about grand unified theory. Instead of being only a worrisome problem, renormalization has become an important theoretical tool for studying the behavior of field theories in different regimes.


If a theory featuring renormalization (e.g. QED) can only be sensibly interpreted as an effective field theory, i.e. as an approximation reflecting human ignorance about the workings of nature, then the problem remains of discovering a more accurate theory that does not have these renormalization problems. As Lewis Ryder has put it, "In the Quantum Theory, these [classical] divergences do not disappear; on the contrary, they appear to get worse. And despite the comparative success of renormalisation theory the feeling remains that there ought to be a more satisfactory way of doing things."[4]

Renormalizability
From this philosophical reassessment a new concept follows naturally: the notion of renormalizability. Not all theories lend themselves to renormalization in the manner described above, with a finite supply of counterterms and all quantities becoming cutoff-independent at the end of the calculation. If the Lagrangian contains combinations of field operators of high enough dimension in energy units, the counterterms required to cancel all divergences proliferate to infinite number, and, at first glance, the theory would seem to gain an infinite number of free parameters and therefore lose all predictive power, becoming scientifically worthless. Such theories are called nonrenormalizable.


The Standard Model of particle physics contains only renormalizable operators, but the interactions of general relativity become nonrenormalizable operators if one attempts to construct a field theory of quantum gravity in the most straightforward manner, suggesting that perturbation theory is useless in application to quantum gravity."

[Sarfatti comment: The LIF spin 1 tetrad "Dirac square root" reformulation of Einstein's spin 2 metric tensor gravity may be renormalizable. In addition, the Newman-Penrose null tetrads are quadratic in advanced and retarded qubit light cone spinors. In contrast, the non-null LIF tetrads are simply the entangled Bell pair states of two spinor qubits. This formal correspondence of Penrose and Rindler seems the obvious implementation of John Archibald Wheeler's heuristic "IT FROM BIT".]

"However, in an effective field theory, "renormalizability" is, strictly speaking, a misnomer. In a nonrenormalizable effective field theory, terms in the Lagrangian do multiply to infinity, but have coefficients suppressed by ever-more-extreme inverse powers of the energy cutoff. If the cutoff is a real, physical quantity—if, that is, the theory is only an effective description of physics up to some maximum energy or minimum distance scale—then these extra terms could represent real physical interactions. Assuming that the dimensionless constants in the theory do not get too large, one can group calculations by inverse powers of the cutoff, and extract approximate predictions to finite order in the cutoff that still have a finite number of free parameters. It can even be useful to renormalize these "nonrenormalizable" interactions.
Nonrenormalizable interactions in effective field theories rapidly become weaker as the energy scale becomes much smaller than the cutoff. The classic example is the Fermi theory of the weak nuclear force, a nonrenormalizable effective theory whose cutoff is comparable to the mass of the W particle. This fact may also provide a possible explanation for why almost all of the particle interactions we see are describable by renormalizable theories. It may be that any others that may exist at the GUT or Planck scale simply become too weak to detect in the realm we can observe, with one exception: gravity, whose exceedingly weak interaction is magnified by the presence of the enormous masses of stars and planets." Wikipedia
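A back-of-the-envelope check of the Fermi-theory example (my own sketch; the rough input numbers below are assumptions): integrating out the W at tree level gives G_F/sqrt(2) ~ g^2/(8 m_W^2), and plugging in approximate electroweak values reproduces the measured Fermi constant to within a few percent.

import math

# Rough inputs, assumed for illustration:
alpha_at_mw = 1.0 / 128.0      # electromagnetic coupling near the W mass scale
sin2_theta_w = 0.231           # weak mixing angle
m_w = 80.4                     # W mass in GeV
g_squared = 4.0 * math.pi * alpha_at_mw / sin2_theta_w     # g^2 = e^2 / sin^2(theta_W)

print("g^2/(8 m_W^2)  ~ %.2e GeV^-2" % (g_squared / (8.0 * m_w**2)))
print("G_F/sqrt(2)    ~ %.2e GeV^-2" % (1.166e-5 / math.sqrt(2)))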

Jul 03

Why the world economy is collapsing.

Posted by: JackSarfatti |

Click here.

PRL 105, 010405 (2010) PHYSICAL REVIEW LETTERS 2 JULY 2010

"Measuring Small Longitudinal Phase Shifts:Weak Measurements or Standard Interferometry?
Nicolas Brunner1 and Christoph Simon2
A cornerstone of quantum mechanics is that a measurement generally perturbs the system. Indeed, during the process of a (standard) quantum measurement, the state of the system is projected onto one of the eigenstates of the measured observable. However, in 1988, in the context of foundational research on the arrow of time in quantum theory, Aharonov, Albert, and Vaidman [1] discovered that quantum mechanics offers a much larger variety of measurements. As a matter of fact, the only restriction quantum mechanics imposes on measurements is a tradeoff between information gain and disturbance. Therefore, strong (or standard) quantum measurements are only part of the game. There are also ‘‘weak’’ measurements [2], which disturb the system only very little, but which give only limited information about its quantum state. Weak measurements lead to striking results when postselection comes into play. In particular, the ‘‘weak value’’ found by a weak measurement on a preselected and postselected system can be arbitrarily large, where the most famous example is the measurement of a spin-1/2 particle leading to a value of 100 [1]. Because of such unorthodox predictions, weak measurements were initially controversial [3] and were largely considered as a strange and purely theoretical concept. However, they turn out to be a useful ingredient for exploring the foundations of quantum mechanics. In particular, they bring an interesting new perspective to famous quantum paradoxes, as illustrated by recent experiments [4] on Hardy’s paradox [5,6]. Furthermore, they also perfectly describe superluminal light propagation in dispersive materials [7,8], polarization effects in optical networks [9], and cavity QED experiments [10]. Weak measurements have been demonstrated in numerous experiments [4,7,8,11] and were recently shown to be relevant in solid-state physics as well [12]. Already in 1990 Aharonov and Vaidman [13] pointed out the potential offered by weak measurements for performing very sensitive measurements. More precisely, when weak measurements are judiciously combined with preselection and postselection, they lead to an amplification phenomenon, much like a small image is magnified by a microscope. This effect is of great interest from an experimental perspective, since it gives access to an experimental sensitivity beyond the detector’s resolution, therefore enabling the observation of very small physical effects"
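A minimal numerical sketch of the weak-value amplification described above (my own illustration; the particular states and the small post-selection angle are arbitrary choices): the weak value A_w = <phi|A|psi>/<phi|psi> of sigma_z lies far outside its eigenvalue range [-1, +1] when the pre- and post-selected states are nearly orthogonal.

import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)          # pre-selected state
theta = np.pi / 4 - 0.005                                    # post-selection almost orthogonal to psi
phi = np.array([np.cos(theta), -np.sin(theta)], dtype=complex)

overlap = phi.conj() @ psi
weak_value = (phi.conj() @ sigma_z @ psi) / overlap
print("|<phi|psi>| =", abs(overlap))                         # ~ 0.005
print("weak value of sigma_z =", weak_value.real)            # ~ 200, far outside [-1, +1]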

"Hardy's paradox is a thought experiment in quantum mechanics devised by Lucien Hardy[1][2] in which a particle and its antiparticle may interact without annihilating each other. The paradox arises in that this may only occur if the interaction is not observed and so it seemed that one might never be able to confirm this.[3] Experiments[4][5] using the technique of weak measurement[3] have studied an interaction of polarized photons and these have demonstrated that the phenomenon does occur. However, the consequence of these experiments maintain only that past events can be inferred about after their occurrence as a probabilistic wave collapse. These weak measurements are considered by some[who?] to be an observation themselves, and therefore part of the causation of wave collapse, making the objective results only a probabilistic function rather than a fixed reality." Wikipedia

I have suggested that Einstein's gravity/curvature is a post-inflation macro-quantum coherent emergent collective effect in the Higgs-Goldstone vacuum superconductor, an elastic-plastic analog of superflow in liquid helium below the Lambda point ~ gradient of the macro-quantum coherent helium ground state with spontaneous broken non-electromagnetic U1 symmetry.

Here is another interesting approach with the SU2 weak group not the SU3 strong group. It has a kind of twistor nonlocality and non-commutative geometry.

He's got a torsion field there.

I think there is an inconsistency in his math: you can't have dx^u/ds = constant when the antisymmetric torsion tensor =/= 0.

i.e. the torsion field tensor makes a real force that cannot be eliminated, because it is a tensor under the original general coordinate transformations, unlike the Christoffel symbol (of Newton's G-force).

GRAVITATIONAL AND ELECTROWEAK UNIFICATION BY REPLACING DIFFEOMORPHISMS WITH LARGER GROUP
DAVE PANDRES, JR.
Abstract. The covariance group for general relativity, the diffeomorphisms, is replaced by a group of coordinate transformations which contains the diffeomorphisms as a proper subgroup. The larger group is defined by the assumption that all observers will agree whether any given quantity is conserved. Alternatively, and equivalently, it is defined by the assumption that all observers will agree that the general relativistic wave equation describes the propagation of light. Thus, the group replacement is analogous to the replacement of the Lorentz group by the diffeomorphisms that led Einstein from special relativity to general relativity, and is also consistent with the assumption of constant light velocity that led him to special relativity. The enlarged covariance group leads to a noncommutative geometry based not on a manifold, but on a nonlocal space in which paths, rather than points, are the most primitive invariant entities. This yields a theory which unifies the gravitational and electroweak interactions. The theory contains no adjustable parameters, such as those that are chosen arbitrarily in the standard model.

Bohm's Hidden Variables (HV) in orthodox quantum theory are "test particles" in a similar sense to Einstein's GR test particles. The paths of the HV are piloted by the quantum potential Q (nonlocal in configuration space for entangled systems) without any direct influence of the HV back on Q, just as geodesic GR test particles are piloted by the local curvature tensor Ruvwl without being included in Tuv(source). The lack of direct back-reaction in the quantum case corresponds to Valentini's "sub-quantal thermal equilibrium", Shimony's "passion at a distance", and the quantum no-cloning theorem.
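For reference, in the standard textbook form (stated here as a sketch): writing the wave function as psi = R exp(iS/hbar), the Bohm hidden-variable trajectories follow v = (1/m)Grad S, and the quantum potential is Q = -(hbar^2/2m)(Grad^2 R)/R, so the particles obey m dv/dt = -Grad(V + Q). Nothing in the equation for Q depends on where the actual particle is, which is the "no direct back-reaction" (test-particle) feature described above.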

"Now, in special relativity we can think of an inertial coordinate system, or
`inertial frame', as being dened by a field of clocks, all at rest relative to each
other. In general relativity this makes no sense, since we can only unambiguously
dene the relative velocity of two clocks if they are at the same location. Thus
the concept of inertial frame, so important in special relativity, is banned from
general relativity!"

Baez means Global Inertial Frame (GIF) above. GR replaces GIF with LIF as he points out below.

"If we are willing to put up with limited accuracy, we can still talk about the
relative velocity of two particles in the limit where they are very close, since
curvature eects will then be very small. In this approximate sense, we can talk
about a `local' inertial coordinate system. However, we must remember that
this notion makes perfect sense only in the limit where the region of spacetime
covered by the coordinate system goes to zero in size. ...

Einstein's equation can be expressed as a statement about the relative acceleration of very close test particles in free fall. Let us clarify these terms a bit. A `test particle' is an idealized point particle with energy and momentum so small that its effects on spacetime curvature are negligible. A particle is said to be in `free fall' when its motion is affected by no forces except gravity. In general relativity, a test particle in free fall will trace out a `geodesic'. This means that its velocity vector is parallel transported along the curve it traces out in spacetime. A geodesic is the closest thing there is to a straight line in curved spacetime. ...

in general relativity gravity is not really a `force', but just a manifestation
of the curvature of spacetime. ...

To state Einstein's equation in simple English, we need to consider a round ball of test particles that are all initially at rest relative to each other. As we have seen, this is a sensible notion only in the limit where the ball is very small. If we start with such a ball of particles, it will, to second order in time, become an ellipsoid as time passes. This should not be too surprising, because any linear transformation applied to a ball gives an ellipsoid, and as the saying goes, 'everything is linear to first order'. Here we get a bit more: the relative velocity of the particles starts out being zero, so to first order in time the ball does not change shape at all: the change is a second-order effect.

Let V(t) be the volume of the ball after a proper time t has elapsed, as measured by the particle at the center of the ball. Then Einstein's equation says:

V^-1 d^2V/dt^2 ~ -(1/2)(Ttt + Txx + Tyy + Tzz)

where these flows are measured at the center of the ball at time zero, using local inertial coordinates. These flows are the diagonal components of a 4x4 matrix T called the `stress-energy tensor'."

I add here that for random virtual quanta (zero point vacuum fluctuations) in the isotropic case, Lorentz invariance + equivalence principle demand

w = (Pressure per space dimension)/(energy density) = -1

Ttt = - Txx = - Tyy = - Tzz = - P

Therefore the RHS of Einstein's equation above reduces to -P.

If there are anisotropic Casimir effect boundaries this result will change.

Free virtual bosons have P < 0, causing anti-gravity repulsion (dark energy).

Free closed virtual fermion loops have P > 0, causing gravitational attraction (dark matter) that mimics w = 0 CDM.
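A tiny sketch of my own (arbitrary units) of the sign logic behind these last statements, using the Baez-style relation quoted above, V''/V ~ -(1/2)(rho + 3P) = -(1/2) rho (1 + 3w): w < -1/3 gives repulsion, w >= 0 gives attraction.

# Sign of the active gravitational energy density rho + 3P = rho*(1 + 3*w)
# entering V''/V ~ -(1/2)*(rho + 3P).  rho > 0 assumed throughout.
for label, w in [("w = -1  (isotropic virtual bosons, dark energy)", -1.0),
                 ("w =  0  (CDM-like dust)", 0.0),
                 ("w = 1/3 (radiation)", 1.0 / 3.0)]:
    active = 1.0 + 3.0 * w            # rho*(1 + 3w) with rho = 1
    if active < 0:
        verdict = "repulsive: the ball of test particles accelerates outward"
    else:
        verdict = "attractive: the ball of test particles accelerates inward"
    print(f"{label:48s} rho+3P = {active:+.2f}  -> {verdict}")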

Jun 28

Book Review Aharonov's "Quantum Paradoxes" v1

Posted by: JackSarfatti |


Working on Amazon.com book review

From the last chapter (18 The Quantum World)

Physics theories are passive descriptions of certain kinds of controlled procedures ("measurements"). If the theories do not correctly describe the results of the measurements in a coherent narrative, using mathematics as well as ordinary language, and if they do not make predictions of future measurements, then they are not good theoretical physics. There is always a subjective element of what counts as a good, elegant, coherent narrative in ordinary language. All the work today in quantum theory, with rare exceptions, is axiomatizing quantum theory in its limiting case that excludes what Einstein called "spooky telepathy" - anything paranormal. No one argues that quantum theory does not work for dead matter; however, the Fat Lady has not yet sung on living matter, on us, and on exotic forms of matter prepared in the laboratory - and possibly in the moment of inflationary creation of our observable universe - that are pumped off "sub-quantal thermodynamic equilibrium", which results in direct back-reaction of the classical material dynamics (hidden variables) on their "quantum potential" field (Bohm). In other words, quantum theory, as commonly understood, treats the hidden variables as test particles - clearly an approximation.



Measurements do not necessarily disturb the quantum system, e.g. eigenoperator measurements do not. Aharonov introduced new kinds of "weak" and also "protective" measurements.

Are causality and nonlocality sufficient axioms for quantum theory with "passion at a distance" signal locality? Entanglement is used, but not directly as a stand-alone faster-than-light and/or paranormal precognitive retro-causal communication system. This corresponds to Valentini's "sub-quantal thermal equilibrium" or when there is no direct back-reaction of the classical dynamical field hidden variables on their "super" quantum potential (infinite-dimensional configuration space for classical fields). In other words, Wheeler's BIT acts on IT, but IT does not react back directly on its BIT - like a docile horse.

In seeming contradiction to Bohr's principle of complementarity, the Aharonov "Tel Aviv" School claims that in a post-selected "weak measurement ... of an expectation value on a PPS ensemble ... it shows through which slit each particle goes and it shows an interference pattern (p. 265)." Would Feynman be shocked? Would Bohr come back from the grave like Hamlet's Ghost? Sir Fred Hoyle in his 1984 "Intelligent Universe" clearly has the key idea, implicit in Bohr's turgid prose and refined by Wheeler in his delayed-choice gedankenexperiment, that it is the Omega Point observer at the Conformal End Time that must be taken into account, including all other previous measurements in a self-consistent web.


See eq.18.1 p. 266 for the math.

http://stardrive.org (Tamara Davis, 2004 PhD University New South Wales)

"On the one hand, the previous thought experiments confirmed the intuition that 'it is impossible to design an apparatus to determine which hole the electron passes through, that will not at the same time disturb the electrons enough to destroy the interference pattern'. On the other hand, ... a weak measurement on a Pre-Post Selected ensemble tells us about both the pre- and the postselected states - here, about the interference pattern and about the final direction of each electron ..."

Aharonov invokes a Rube Goldbergian excess axiom of "stable collapse", without which the Born probability rule is violated, opening Pandora's Box to controllable spooky paranormal actions at a distance across spacelike-separated senders and receivers and, worse, even precognitive remote viewing as reported by Russell Targ at the June 2006 AAAS "Retrocausality Workshop" at USD in San Diego that I attended. The game here is to see what kinds of shackles must bind Dame Nature to avoid the unavoidable occurrence of Aleister Crowley's "Magick without Magic" sought by Jack Parsons and L. Ron Hubbard in the California desert, as reported in the book "Strange Angel" on the creation of NASA's JPL and DOD contractor Aerojet General. Remember that Newton's passion for alchemy did not detract from his mechanical equations for gravity.

to be continued

From: JACK SARFATTI Date: June 27, 2010 2:39:21 PM PDT

Scientists see evidence that rules of particle physics may need a rewrite
www.physorg.com
(PhysOrg.com) -- Two separate collaborations involving Indiana University scientists have reported new results suggesting unexpected differences between neutrinos and their antiparticle brethren. These results could set the stage for what one IU physicist calls a 'radical modification of our understanding...



Jack Sarfatti
The particle-antiparticle symmetry is built into the Dirac square root equation from the Klein-Gordon equation. Anti-particles are the negative energy solutions basically from the mass-shell condition

E = +-[(pc)^2 + (mc^2)^2]^1/2

same "m" a negative energy particle moving backward in time is same as a positive energy anti-particle moving forward in time. Also need the discrete CPT symmetry - it get's tricky why the "m" of the anti-particle is not the "m" of the particle - the "m" for leptons and quarks comes from the Higgs-Goldstone post-inflation vacuum superconductor (see Frank Wilzcek's "Lightness of Being")

Gareth Lee Meredith
Is this because, in the Hamiltonian, a particle described by E = Mc^2 is better described as E = +-Mc^2 in a Dirac Sea model?

Jack Sarfatti
Since P, C, T need not be conserved individually in the weak force, I suppose that is relevant to the difference in the "m"s (there are radiative corrections to the bare masses - very messy).

Jack Sarfatti
Special relativity demands E = +-Mc^2 - need both roots.

Gareth Lee Meredith
They have just as much REAL effect in the world - I know this.

Jack Sarfatti
Virtual particles are inside the vacuum. Real particles are outside the vacuum (analogous to excited states of a ground state).

Jack Sarfatti
Real particles correspond to poles of the Feynman propagator in the complex energy plane - they are on mass-shell i.e.


E^2 = (pc)^2 + (mc^2)^2

the product of their conjugate uncertainties is greater than the quantum of action (neglecting dimensionless factors like 1/2 - a matter of definition of "quantum of action"). Virtual particles are off-mass-shell, and the product of their conjugate uncertainties is less than the quantum of action.
Jack Sarfatti
PS of course virtual particles have observable effects


1) Casimir force


2) Lamb shift, anomalous magnetic moment of electron - all radiative corrections in quantum field theory

Jack Sarfatti
The vacuum is a coherent superfluid with geodesic motion analogous to superflow without viscosity

Jack Sarfatti
The geodesic equation for non-accelerating test particles is analogous to v = (h/m)Grad(coherent phase) of superflow.

Jack Sarfatti
Technically the Einstein gravity tetrad fields are functions of gradients of 8 coherent Higgs-Goldstone phases that emerge in the moment of inflation and are essentially the 8 strong force massless gluon phases, i.e. a unification of the compact internal SU3 gauge force with the non-compact T4 curved spacetime universal gravity coupling to all non-gravity fields.

 

Chapter 6 has several seemingly paranormal orthodox quantum effects that seem magical to the strictly classical Newtonian mind, such as the average member of the Committee to Investigate Claims of the Paranormal. This is even before we put in post-quantum signal nonlocality, the real OTO Crowleyan Magick without Magic that L. Ron Hubbard and Jack Parsons were looking for in the desert according to Robert Anton Wilson's "Illuminatus Trilogy" and other weird writings. See also the book "Strange Angel."

"What matters is that the ball has changed the energy of the particle in the cylinder even when the particle is far from the piston. Thus a change in the energy of the ball immediately shows that there is a particle in the cylinder even if the particle is nowhere near the piston! Now that is action at a distance." p. 79 Aharonov & Rohrlich

This is most easily understood as the action of the entangled Bohm quantum potential Q, separate from the matter hidden variables "ball" and particle. However, all of Chapter 6 assumes Abner Shimony's "passion at a distance": there is always a Catch-22, even with the "modular observables", in which the spooky action-at-a-distance is uncontrollably random outside the future light cones of the entangled subsystems, and there is no faster-than-light signaling or retro-causal precognition in any of the effects Aharonov dares to tackle. In Valentini's terms, Aharonov restricts his book to the limiting case of thermodynamic equilibrium of the classical matter dynamical degrees of freedom (aka "hidden variables"). So Aharonov does not ask the really important question, in my opinion.

I agree with his 6.4 on the analogy between quantum theory and Einstein's 1905 Special Relativity (SR), indeed, just as 1905 SR is the limiting case of Einstein's 1916 General Relativity (GR) in the limit of zero global curvature, so, too, is quantum theory the limiting case of post-quantum theory in the limit of zero direct back-action of the material hidden variables on the qubit Bohm quantum potential Q. One can use Antony Valentini's sub-quantal non-equilibrium of the hidden variables as equivalent to back-action. Therefore, pumped dissipative structures should generally allow signal-nonlocality outside the local light cones if this line of argument is correct - a testable Popper falsifiable proposition.

Note on Aharonov's 6.4: local signal quantum theory uniquely derives from two axioms

i) All physical interactions respect causality.

ii) Some physical interactions are nonlocal ("modular observables" have nonlocal dynamics)

analogous to Einstein's 1905 SR that derives uniquely from

i') The laws of physics are the same in all inertial reference frames.

ii') The speed of light in vacuum is invariant c.

Note that 1916 GR replaces i) by

i") The laws of physics are the same in all locally coincident frames of reference whether inertial or non-inertial. (general covariance)

ii") The laws of 1905 SR work inside zero g-force local inertial frames (Einstein's Equivalence Principle)

Similarly, in post-quantum theory

iii") Some physical interactions do not respect restrictions to the local forward light cones of the detectors. "Causality" is violated.

ii''') same as ii).

 

 

Aharonov's post-selection need not be done literally in the future and does not necessarily imply the kind of real retro-causality that is in the Wheeler-Feynman idea. Thus, Aharonov's use of "destiny" for post-selection and "history" for pre-selection is misleading. All Aharonov seems to be saying is to use a protocol, set in advance, in which only conditional probabilities over a subset of the set of distinguishable final outputs are considered. The more interesting issue is real retro-causation, closely allied to real signal nonlocality, in which entanglement is used as a real communication channel outside the light cones in the sense John S. Bell explained. This would be a gross violation of quantum theory, of course, and it is my conjecture that such gross violation is necessary for living matter.

"The uncertainty principle implies a loss of quantum interference whenever we can detect through which slit a particle passes ... applied the uncertainty principle to the detector to explain the loss of interference, but applying the uncertainty principle to the particle tells us nothing about loss of interference ... " Quantum Paradoxes p. 61

It needs to be understood that Aharonov's new book with Rohrlich only deals with the paradoxical thinking of orthodox quantum theory that has Shimony's "passion at a distance", where the classical matter dynamical "hidden variables" are in "sub-quantal thermodynamic equilibrium", in which the Bohm quantum pilot wave "potential Q" acts on these hidden variables without any direct back-reaction of them on their Q to close the loop of action and reaction. This is essentially what in general relativity we recognize as the "test particle approximation", i.e. the hidden variables are test objects, not sources, of the quantum potential Q (or super-potential in classical field theory). Everything in Aharonov's book breaks down fundamentally in the post-quantum theory where the hidden variables of the classical matter fields are not only not in thermal equilibrium but are prevented from relaxing to thermal equilibrium by an external pump of some kind. All living matter is post-quantum in this sense, with direct back-reaction of the hidden variables on their Q forming a "more is different" self-organizing emergent creative loop, controllably nonlocal in both space and time. Our inner consciousness is the key emergent property of sustained thermal non-equilibrium of the hidden variables.

Jun 19

"De Broglie-Bohm theory is a 'hidden variables' formulation of quantum
 mechanics initially developed by de Broglie from 1923-1927 and clarified and
extended by Bohm beginning in 1952. In non-relativistic quantum theory it
differs from the orthodox viewpoint in that the notion of 'probability' refers to the
probability that a particle is at some position, rather than to its probability of
being found there in a suitable measurement. From this seemingly subtle
difference it is easy to show that - contrary to popular belief - QM can be
interpreted as a dynamical theory of particle trajectories rather than as a
statistical theory of observation. In such a formalism the standard paradoxes
related to measurement, observation and wave function collapse (Schroedinger's
cat, and so on) largely evaporate. The classical limit does not have to be
presupposed and emerges from the theory in a relatively clear way. All the 'talk'
is replaced by sharply-defined mathematics, it becomes possible to 'visualize' the
reality of most quantum events, and - most importantly - the theory is completely
consistent with the full range of QM predictive-observational data. While some
believe the study of interpretational questions to be mere semantics or 'just
philosophy', it is often forgotten that the location of the boundary between
philosophy and physics is unknown, and that one's philosophical perspective can
guide mathematical developments. For many people it is clear that de Broglie-
Bohm theory should be studied, not only because it is beginning to make
apparently testable predictions, but also because it has the potential to suggest
possible directions towards the next generation of ideas in theoretical physics.

...

Quantum non-equilibrium and 'signal non-locality'.

Dynamical relaxation to quantum equilibrium.

Potential instabilities in the Bohm dynamics.

Possible deeper interpretations of de Broglie-Bohm theory (such as Basil Hiley's new quantum algebra work).

Pilot-wave field theories and relativistic generalizations

De Broglie-Bohm quantum cosmology

'Deconstructing' the wave function. Can the theory be reduced to 3-space waves? Norsen's 'theory of exclusively local beables'.

Proposed experimental tests (Valentini, Riggs, etc..)

The ontological status of the theory. First or Second order?

Energy.

Empty waves.

The arguments for and against psi-epistemic hidden-variables theories.

Alternative formulations of deterministic hidden-variables theories.

Non-Markovian trajectory theories.


Comparison with the consistent histories formulation


Use of trajectories for efficient numerical simulations in quantum chemistry.

Spin, antisymmetry, the exclusion principle and the 'quantum force.'

Responses to common objections (it's not possible for particles to exist; particles going round corners ought to radiate etc.)."

-- Michael Towler, Cambridge University, web page