
Yes, here is the free version http://arxiv.org/pdf/1004.2507

I will comment in detail in due course. The obvious question arises: what happens if we use a negative-refracting metamaterial super-lens plus Yakir Aharonov's weak-measurement super-oscillation technique to resolve details smaller than the smallest wavelength in the probe pulse?

Even more revolutionary would be getting signal nonlocality that violates the no-cloning theorem, punching a hole in the 't Hooft-Susskind solution to the black hole information paradox. Perhaps not. These are all interesting questions.

If we could violate Heisenberg's uncertainty principle, would quantum mechanics show signal nonlocality? Perhaps not. I don't know offhand at this moment.

If quantum mechanics were more non-local it would violate the uncertainty principle

Jonathan Oppenheim (1) and Stephanie Wehner (2)

(1) DAMTP, University of Cambridge, CB3 0WA, Cambridge, UK
(2) Institute for Quantum Information, California Institute of Technology, Pasadena, CA 91125, USA
(Dated: April 16, 2010)

"The two defining elements of quantum mechanics are Heisenberg’s uncertainty principle, and a subtle form of non-locality which Einstein famously called “spooky action at a distance”. The first principle states that there are measurements whose results cannot be simultaneously predicted with certainty. The second that when performing measurements on two or more separated systems the outcomes can be correlated in a way that defies the classical world. These two fundamental features have thus far been separate and distinct concepts. Here we show that they are inextricably and quantitatively linked. Quantum mechanics cannot be more non-local without violating the uncertainty principle. In fact, the link between uncertainty and non-locality holds for all physical theories. More specifically, the degree of non-locality of any theory is solely determined by two factors – one being the strength of the uncertainty principle, and the second one being the strength of a property which Schrödinger called “steering”. The latter determines which states can be prepared at one location given a measurement at another, and in most theories of nature this is determined by causality alone."

On Nov 20, 2010, at 4:22 PM, Nicole C. Tedesco wrote:

Have you read this paper from Oppenheim and Wehner? (I don’t have a Science subscription.)

http://www.sciencemag.org/content/330/6007/1072.abstract

Thoughts? Suggestions? Gripes?

The only thing I can say at this point is a non sequitur: I find it interesting to watch computer science and physics increasingly converging on each other, leaving physics looking more and more like a problem of information processing and leaving computing and information theory looking more and more like physics! I know this is old news, especially since 't Hooft, but interesting to watch nevertheless.

Nicole C. Tedesco

http://www.Facebook.com/Nicole.Tedesco

"Here we show that not only do quantum correlations not undermine the uncertainty principle, they are determined by it."

This, so far, is nothing new. The original 1935 EPR paper showed that without a real physical nonlocal action at a distance the Heisenberg principle would be violated. That is:

Locality implies violation of Heisenberg uncertainty at one end for one subsystem of the entangled pair of subsystems.

Therefore: Heisenberg uncertainty implies nonlocality.

The key issue, however, is control of the nonlocality. In orthodox quantum theory, the theorem that a quantum in an arbitrary state cannot be cloned presupposes uncontrollable randomness of the physical nonlocal action, what Bohm and Hiley called the "fragility of the quantum potential" in their book "The Undivided Universe."

"Here, we take a very different approach and relate the strength of non-local correlations to two inherent properties of any physical theory.

We first describe what are widely considered the central phenomena of quantum mechanics – uncertainty and non-locality, but in a general setting, so that they can be quantified for all theories, not just quantum mechanics. We also define a generalised notion of steering, an aspect of quantum mechanics which was considered central by Schrödinger [9, 10], but which has received less attention. We then show that they are all linked by a single equation. In particular, we find that two basic principles inherent to any theory, namely our ability to steer, and the existence of uncertainty relations, completely determine the strength of non-local correlations. The steering properties of quantum mechanics itself are only restricted by causality and thus we find that the uncertainty relation and causality are the only limitations which determine quantum non-locality. ...

Quantum mechanics as well as classical mechanics obeys the no-signalling principle, meaning that information cannot travel faster than light. This means that the probability that Bob obtains outcome b when performing measurement t cannot depend on Alice’s choice of measurement setting (and vice versa)."
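For concreteness, the no-signalling condition quoted above can be written compactly. The following is my sketch of the standard condition in the usual notation, not an equation copied from the Oppenheim-Wehner paper: let p(a, b | s, t) be the probability that Alice obtains outcome a with measurement setting s while Bob obtains outcome b with setting t.

```latex
% No-signalling: Bob's marginal distribution is independent of Alice's setting s
\sum_{a} p(a, b \mid s, t) = p(b \mid t) \quad \text{for all settings } s,
% and symmetrically Alice's marginal is independent of Bob's setting t:
\sum_{b} p(a, b \mid s, t) = p(a \mid s) \quad \text{for all settings } t.
```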

Nicole, I'm not sure, but I think they assume this no-signalling as a postulate. If that is the case, the paper will not be of interest to me, because the important issue is violation of that postulate in a more general post-quantum theory, in which the quantum theory they consider is a limiting case in the same sense that global special relativity is the limit of general relativity when the curvature tensor is zero in a finite region of 4D spacetime.

The issue of course is how to prevent the hidden variables from approaching sub-quantal equilibrium by external pumping, as in a laser or in Frohlich's model of a biological membrane of electric dipoles.

Invitation of the Chemisch-Physikalische Gesellschaft to a lecture by Dr. Michael D. Towler, TCM Group, Cavendish Laboratory, Cambridge University, UK

Abstract:

"De Broglie-Bohm theory is a 'hidden variables' formulation of quantum mechanics initially developed by Louis de Broglie from 1923-1927 and clarified and extended by American physicist David Bohm beginning in 1952. Just by the simple expedient of refusing to believe that particles cease to exist if you stop looking at them, it is easy to show that - contrary to popular belief - quantum mechanics can be interpreted as a dynamical theory of particle trajectories rather than as a statistical theory of observation. In such a formalism the standard paradoxes related to measurement, observation and wave function collapse (Schroedinger's cat, and so on) largely evaporate. The classical limit does not have to be presupposed and emerges from the theory in a relatively clear way. All the 'talk' is replaced by sharply-defined mathematics, it becomes possible to 'visualize' the reality of most quantum events, and - most importantly - the theory is completely consistent with the full range of QM predictive-observational data. The theory also gives rise to the possibility of new physics - and of new mathematical and philosophical ideas - and is currently undergoing a major resurgence. In this talk I review the structure of the theory and its consequences, and present some recent non-equilibrium trajectory calculations which demonstrate the origin of the Born rule and may hold the key to possible experimental tests of the de Broglie-Bohm framework."

Event date: 23.11.2010, 17:00 to 23.11.2010, 18:00



However, if A-D-E is not Popper falsifiable then it's not physics - just a framework.

Same problem as string theory according to some.

So, Saul-Paul is there a way to falsify A-D-E?

What I don't get is how, after 40 years and so many highly paid string theorists in top universities worldwide, they can still claim that they don't know what string theory is, e.g. David Gross's video http://www.youtube.com/watch?v=Mo05DBiCrLc

No organizing principle like the equivalence principle in GR and the local gauge principle in QFT. What am I missing here?

Some say string theory fell from the 21st century, but it's already the 21st century. :-)

What is good physics?

Good physics makes a difference that makes a difference experimentally.

On Nov 20, 2010, at 8:28 PM, Saul-Paul and Mary-Minn Sirag wrote:

Jack,

String theory is falsifiable. See the final paragraph of Gordon Kane's paper, "String Theory and the Real World" (Phys. Today, Nov. 2010). You sent it to us. Did you read it?

Not yet.

Here's that last paragraph:

Some of those who talk about testing string theory, and most critics of theory, are assuming the 10D or 11D approach and want somehow to test the theory without applying it to a world where tests exist. That is analogous to asking a Lagrangian to be falsifiable without applying it to any physical system. Is 10D string theory falsifiable? That is not the relevant question. What matters is that the predictions of the 10D theory for the 4D world are demonstrably testable and falsifiable. If no compactified string theory emerges that describes the real world, physicists will lose interest in string theory. But perhaps one or more will describe and explain what is observed and relate various phenomena that previously seemed independent. Such a powerful success of science would bring us close to an ultimate theory.

It seems Kane is saying there is no test of the current version of string theory? What am I missing?

[end of quote from Kane]

What I call ADEX theory is not falsifiable, because I defined it as the study and application of all the A-D-E Coxeter graphs. The appropriate issue here is not falsifiability but usefulness.

Fine, but how is it useful for physics is the question? I agree it's good math and perhaps will be useful in the future.

I'm attaching my "ADEX dimensions" (2008) paper:

See especially page 2, where I list the many mathematical objects classified (and thus unified) by the A-D-E graphs. These graphs provide a way to transform between these objects. What is difficult (or impossible) to see (and calculate) via one object may be easy for an alternative object. Each object is a different window into a vast underlying structure, which I take to be Reality in all its complexity.

There is hardly any area of physics which does not use one or more of these mathematical objects to calculate consequences of various models and theories.

This is both pure and applied mathematics, in the same sense that Newton's calculus was both pure and applied mathematics. Newton's derivatives were derivatives with respect to time; and time is physical. Newton's theory of gravity was indeed (in the 20th century) falsified, yet it continues to be useful -- extremely so.

All for now;-)

Saul-Paul

Fine, but it seems that quantum chromodynamics for example is much more useful at present e.g. supercomputer computation of the hadronic masses.

On Nov 20, 2010, at 5:54 PM, Tony Smith wrote:

Saul-Paul,

you are correct that LHC results could sort wheat from chaff in unified theories.

1 - If LHC finds no standard supersymmetry partners, then the Kane Superstring approach is dead.

Yes. Popper falsifiable - good physics.

2 - If LHC finds no indication (direct or indirect) of Garrett Lisi mirror fermions, then the Lisi E8 approach is dead.

Yes. Popper falsifiable - good physics.

3 - If the LHC finds either standard supersymmetry partners or mirror fermions, then my approach to E8 (no mirrors and no conventional supersymmetry) is dead.

Yes. Popper falsifiable - good physics.

Also if the LHC finds WIMPS or other real dark matter particles then my theory of dark matter as positive pressure quantum vacuum is dead.

However, as you say Saul-Paul, all those approaches are connected to A-D-E stuff, so no matter what, it is likely that A-D-E will remain alive.

Tony

Begin forwarded message:

From: Tony Smith Date: November 19, 2010 7:46:46 PM PST

To: JACK SARFATTI Subject: GUT and E8

Jack, with respect to the Lubos Motl blog entry that you quoted and propagated:

Lubos said: "... The speedy proton decay was obviously a wrong prediction...[ of ]... grand unification ... based on the gauge group SU(5) ..."

That may or may not be true. The conventional interpretation of experimental results is that what Lubos said about SU(5) GUT is true, but there are reasonable alternative interpretations indicating that SU(5) GUT may not be ruled out by experimental results. See, for example, a recent (within the past 5 years) paper at http://arxiv.org/abs/hep-ph/0601023 by Pran Nath (Northeastern University, Boston) and Pavel Fileviez Perez (Lisboa, Portugal), published in Physics Reports as Phys. Rep. 441:191-317, 2007 (needless to say, Phys. Rep. is a very highly regarded journal of detailed reviews), which says at page 72:

"... a majority of non-supersymmetric extensions of the Georgi-Glashow SU(5) model yield a GUT scale which is slightly above 10^14 GeV. Hence, as far as the experimental limits on proton decay are concerned, these extensions still represent viable scenarios ... it is possible to satisfy all experimental bounds on proton decay in the context of non-supersymmetric grand unified theories. For example in a minimal non-supersymmetric GUT based on SU(5) the upper bound on the total proton decay lifetime is ... 1.4 × 10^36 years ...".

Further, you quoted Lubos as saying:

"... there are five exceptional Lie groups, G2, F4, E6, E7, E8. Only the last three are large enough to play the role of a grand unified group. But among these five groups ... E6 is the only viable grand unified group ... The other groups are inconsistent with the parity violation in Nature - e.g. with the fact that the neutrinos have to be left-handed. ... anyone who claims that he has a grand unified theory based e.g. on E8 is a hack who misunderstands exceptional Lie groups in physics ...".

It is NOT true that E8 cannot be the basis for realistic physics models that are consistent with chirality (i.e. "neutrinos have to be left-handed"). Garrett Lisi's model does that by the possibility that the neutrinos of opposite chirality (i.e., an anti-family of mirror fermions) might be suppressed by interaction with axions and/or some of the Higgs scalars in his model, as Garrett states in his recent paper at http://arxiv.org/abs/1006.4908 , which is a revision of his earlier work that was criticized by Skip Garibaldi with respect to its math. You might say that Skip Garibaldi helped Garrett Lisi to improve the math structure of his E8 model to its current state, which seems to have been approved by most of the many experts in E8 math who studied it at a Banff workshop http://www.birs.ca/events/2010/5-day-workshops/10w5039

Even I have been able to construct a physically realistic E8 model that is consistent with chirality. As to the mathematical soundness of my E8 model, Skip Garibaldi (a strong opponent of Garrett Lisi's E8 physics) told me by email on 26 April 2010:

"... mathematically what you write looks like completely standard and unremarkable examples of representations of complex semisimple Lie groups. But of course your point is about the physics interpretations of these things, and unfortunately I have no hope of understanding that side of it because of my lack of knowledge of physics. I only got to the point of writing something about Lisi's thing because of the false mathematical statements in his arxiv note ..."

In short, even Skip Garibaldi agrees that my math is sound, and does not go one way or the other on the physics. As I said above, Skip Garibaldi's criticism led Garrett Lisi to clean up the math in his E8 model. An outline of my E8 model is on the first page of http://www.valdostamuseum.org/hamsmith/E8CCTS12a.pdf , with many further details in the rest of that paper and in other papers on my web site.

It is pathetic that Lubos is unaware of work such as that of Pran Nath which is available in the respected journal Physics Reports, and that Lubos fails to understand the subtleties of how E8 models can actually be constructed consistently with physical chirality, and it is tragically sad that he seems to be unaware of the extent of his own ignorance.

Tony

On Nov 20, 2010, at 5:07 PM, Saul-Paul Sirag wrote:

Jack, Tony, et alia:

It's a good thing that the LHC is up and running. It may be that we will know in a few years which approach to TOE is the right one. I am partial to the string theory approach, as most recently described in Gordon Kane's article, "String Theory and the Real World" (Phys. Today, Nov 2010). This theory requires (among other things) supersymmetry partners, some of which might show up at the LHC. The article by Garrett Lisi and James Weatherall, "A Geometric Theory of Everything" (Sci. Am., Dec 2010), mentions (among other things) the prediction of "mirror fermions" (which would have to be more massive than their mirror partners). Will these show up at the LHC? These mirror fermions are Lisi's main reply to the Distler & Garibaldi critique:

Given this explicit embedding of gravity and the Standard Model inside E8(−24), one might wonder how to interpret the paper “There is no ‘Theory of Everything’ inside E8.”[7] In their work, Distler and Garibaldi prove that, using a direct decomposition of E8, when one embeds gravity and the Standard Model in E8, there are also mirror fermions. They then claim this prediction of mirror fermions (the existence of “non-chiral matter”) makes E8 Theory unviable. However, since there is currently no good explanation for why any fermions have the masses they do, it is overly presumptuous to proclaim the failure of E8 unification – since the detailed mechanism behind particle masses is unknown, and mirror fermions with large masses could exist in nature. Nevertheless, it was helpful of Distler and Garibaldi to emphasize the difficulty of describing the three generations of fermions, which remains an open problem.

arXiv:1006.4908v1 [gr-qc] 25 Jun 2010

Of course string theory has since 1985 been using E8 x E8 as the symmetry of one of the two heterotic string theories. In this theory spacetime is higher dimensional: 10 spacetime dimensions, with 16 extra internal dimensions in the left-moving sector (10 + 16 = 26). The compactification of the extra dimensions (in order to arrive at a 4-d spacetime) entails a symmetry breaking of E8 to E6 (which avoids mirror fermions). As you know my own approach is what I call ADEX theory (the study and application of all the A-D-E Coxeter graphs). So whatever approach to E8 is favored by nature, this will be only a piece of the A-D-E complex.

All for now;-)

Saul-Paul

----------------------------

On Nov 20, 2010, at 2:22 PM, JACK SARFATTI wrote:

Saul-Paul

What is your assessment of the situation?

I predict it won't be seen because dark matter is simply vacuum with positive zero point pressure.

On Nov 20, 2010, at 4:44 AM, Robert Park wrote:

4. WIMPS: THE UNIVERSE WE CAN’T SEE.

When they were building the Large Hadron Collider it seemed to be all about finding the Higgs boson. But there seems to be increasing interest in using the LHC to learn something about the 85% of the universe we can't see. We know it's there because it has gravity, but that's about all it has. The betting is that it's a particle, and the leading candidate is the WIMP (weakly interacting massive particle). Gianfranco Bertone in yesterday's issue of Nature predicts that if there is such a ghostly particle it will be exposed by the LHC in the next few years.


Hi Menas

Yes, this is very exciting indeed!

In addition to the super-oscillation we have the metamaterial superlens; both seem to lead in the same direction in this regard.

:-)

On Nov 20, 2010, at 4:58 AM, Kafatos, Menas wrote:

Hi Jack:

Yes, what you are saying is very exciting; these super-oscillations can give the power to go beyond the usual limits. Yakir and I have been discussing this. But at a price: exponentially weak tails (which is the usual problem; for example, you can get values of variables through weak measurements outside the orthodox expectations and with better S/N, but quite rarely). Anyway, it is worth pursuing.

Menas Kafatos

Fletcher Jones Professor of Computational Physics

Dean

Schmid College of Science, and

Vice Chancellor for Special Projects

Chapman University

Orange, CA 92866

-----Original Message-----

From: JACK SARFATTI [mailto:sarfatti@pacbell.net]

Sent: Fri 11/19/2010 11:32 AM

To: david kaiser

Subject: looking inside molecules, metamaterial superlens, weak measurement superoscillation, nonlocality (Dr. Quantum)

Memorandum for the Historical Record

http://prl.aps.org/abstract/PRL/v105/i21/e217402

btw I got a flash yesterday that using a Heisenberg microscope made with a metamaterial super-lens, combined with Aharonov's weak-measurement superoscillation, may shake the foundations of quantum theory, in view of the close connection of nonlocality with the Heisenberg gedankenexperiment. This idea is still half-baked, but basically the premise that we cannot resolve details smaller than the smallest wavelength of the probe pulse is now suspect - very suspect.

http://stardrive.org

http://tinyurl.com/23hgapk

http://tinyurl.com/22w6n6m

http://stardrive.org/index.php?option=com_content&view=category&id=43&Itemid=82


"However, physics would be fundamentally different. If we break the uncertainty principle, there is really no telling what our world would look like."

Magick without magic.

I announce the conjecture:

Nature is as weird as it can be.

The nonlocal action principle of maximal weirdness, e.g. consciousness.

Post-quantum theory has maximum weirdness (aka signal nonlocality) beyond the minimal weirdness of orthodox quantum theory.

On Nov 18, 2010, at 6:55 PM, JACK SARFATTI wrote:

v3- expanded to include part 3

"It's a surprising and perhaps ironic twist," said Oppenheim, a Royal Society University Research Fellow from the Department of Applied Mathematics & Theoretical Physics at the University of Cambridge. Einstein and his co-workers discovered non-locality while searching for a way to undermine the uncertainty principle. "Now the uncertainty principle appears to be biting back."

Non-locality determines how well two distant parties can coordinate their actions without sending each other information. Physicists believe that even in quantum mechanics, information cannot travel faster than light. Nevertheless, it turns out that quantum mechanics allows two parties to coordinate much better than would be possible under the laws of classical physics. In fact, their actions can be coordinated in a way that almost seems as if they had been able to talk. Einstein famously referred to this phenomenon as "spooky action at a distance".

"Quantum theory is pretty weird, but it isn't as weird as it could be. We really have to ask ourselves, why is quantum mechanics this limited? Why doesn't nature allow even stronger non-locality?" Oppenheim says. However, quantum non-locality could be even spookier than it actually is. It's possible to have theories which allow distant parties to coordinate their actions much better than nature allows, while still not allowing information to travel faster than light. Nature could be weirder, and yet it isn't – quantum theory appears to impose an additional limit on the weirdness.

The surprising result by Wehner and Oppenheim is that the uncertainty principle provides an answer. Two parties can only coordinate their actions better if they break the uncertainty principle, which imposes a strict bound on how strong non-locality can be.

"It would be great if we could better coordinate our actions over long distances, as it would enable us to solve many information processing tasks very efficiently," Wehner says. "However, physics would be fundamentally different. If we break the uncertainty principle, there is really no telling what our world would look like."

http://www.eurekalert.org/pub_releases/2010-11/cfqt-rus111210.php

But it appears we can beat the usual Heisenberg uncertainty limit, which assumes no resolution better than the mean wavelength of the photon probe's wave packet, i.e. a super-oscillating weak-measurement Heisenberg microscope enhanced with a negative-index-of-refraction metamaterial super-lens.
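A minimal numerical sketch of superoscillation, using Berry's standard band-limited example (this illustration and its parameter choices are mine, not from the correspondence): a function whose Fourier components all lie at wavenumbers |k| ≤ 1 can nevertheless oscillate locally a times faster near x = 0.

```python
import numpy as np

# Berry's standard superoscillating function:
#   f_N(x) = (cos(x/N) + i*a*sin(x/N))**N,  a > 1
# Expanding the N-th power shows every Fourier component has wavenumber
# |k| <= 1, yet near x = 0 the function behaves like exp(i*a*x),
# i.e. it locally oscillates a times faster than its fastest component.

N, a = 20, 4.0
x = np.linspace(-0.5, 0.5, 2001)          # grid centered on x = 0
f = (np.cos(x / N) + 1j * a * np.sin(x / N)) ** N

# Local wavenumber = spatial derivative of the unwrapped phase
phase = np.unwrap(np.angle(f))
k_local = np.gradient(phase, x)

print(round(k_local[1000], 3))  # local wavenumber at x = 0: 4.0, vs band limit 1
```

The price, as Kafatos notes above, is exponentially weak tails: |f| is of order 1 only in the small superoscillatory window and grows exponentially outside it.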

Search results (metamaterial superlenses):

- New Superlens is Made of Metamaterials - ingenious lens ten times as powerful as conventional ones (Apr 25, 2008). news.softpedia.com/.../New-Superlens-is-Made-of-Metamaterials-84359.shtml
- More on Metamaterials and Superlens over 5 times better than ... (Jun 28, 2006): PowerPoint tutorial by G. Shvets of the University of Texas at Austin on metamaterials and applying superlenses to laser plasma ... nextbigfuture.com/.../more-on-metamaterials-and-superlens.html
- Metamaterials for magnifying superlenses | IOM3 (Apr 30, 2007): advances in the field of magnifying superlenses have been reported by two separate US research teams. www.iom3.org/news/mega-magnifiers
- Photonic Meta Materials, Nano-scale Plasmonics and Super Lens (PDF): Xiang Zhang, Chancellor's Professor and Director, NSF Nano-scale Science and Engineering ... boss.solutions4theweb.com/Zhang_talk_abs_with_pictures__1.pdf
- Nano-Optics, Metamaterials, Nanolithography and Academia: 3D Metamaterials Nanolens: the best superlens realized so far! Published in Applied Physics Letters. nanooptics.blogspot.com/2010/.../3d-metamaterials-nanolens-best.html
- Magnifying Superlens based on Plasmonic Metamaterials, by Igor I. Smolyaninov, Yu-Ju Hung, and Christopher C. Davis (2008). ieeexplore.ieee.org/iel5/4422221/4422222/04422540.pdf?arnumber...
- Superlens from complementary anisotropic metamaterials (Journal of Applied Physics). link.aip.org/link/JAPIAU/v102/i11/p116101/s1
- Surface resonant states and superlensing in acoustic metamaterials (PDF), by M. Ambati (May 31, 2007): this concept of acoustic superlens opens exciting opportunities to design acoustic metamaterials for ultrasonic imaging. xlab.me.berkeley.edu/publications/pdfs/57.PRB2007_Murali.pdf
- Superlens imaging theory for anisotropic nanostructured metamaterials with broadband all-angle negative refraction, by W. T. Lu and S. Sridhar (2008). link.aps.org/doi/10.1103/PhysRevB.77.233101
- [0710.4933] Superlens imaging theory for anisotropic nanostructured metamaterials with broadband all-angle negative refraction (Oct 25, 2007). arxiv.org › cond-mat

On Nov 18, 2010, at 4:54 PM, JACK SARFATTI wrote:

The probabilistic nature of quantum events comes from integrating out all the future advanced Wheeler-Feynman retro-causal measurements. This is why past data and unitary retarded past-to-present dynamical evolution of David Bohm's quantum potential are not sufficient for unique prediction as in classical physics. Fred Hoyle knew this a long time ago. Fred Alan Wolf and I learned it from Hoyle's papers back in the late '60s at San Diego State, and also from I. J. Good's book that popularized Hoyle's idea. So did Hoyle get it from Aharonov 1964, or directly from Wheeler-Feynman 1940-47?

Note however, that in Bohm's theory knowing the pre-selected initial condition on the test particle trajectory does seem to obviate the necessity for an independent retro-causal post-selection in the limit of sub-quantal thermodynamic equilibrium with consequent signal locality, i.e. no remote viewing possible in this limit for dead matter. However, there may be a hidden retro-causal tacit assumption in Bohm's 1952 logic. Remember Feynman's action principle is nonlocal in time. One also must ultimately include back-reaction fluctuations of Bohm's quantum potential Q. The test particle approximation breaks down when the particle hidden variables are no longer in sub-quantal equilibrium caused by some external pumping of them like the excited atoms in a laser that is lasing above threshold, or like in H. Frohlich's toy model of a biological membrane of electric dipoles.
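For readers unfamiliar with Q: Bohm's standard polar decomposition, sketched here in textbook notation (my addition for reference, not Sarfatti's own formulas), shows where the quantum potential and the notion of sub-quantal equilibrium come from.

```latex
% Write the wavefunction in polar form:
\psi(\mathbf{x},t) = R(\mathbf{x},t)\, e^{i S(\mathbf{x},t)/\hbar}.
% The particle velocity (guidance equation) is
\mathbf{v} = \nabla S / m,
% and the Schroedinger equation yields a Hamilton-Jacobi equation
% with the extra quantum potential
Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2} R}{R}.
% Sub-quantal equilibrium is the Born-rule distribution
\rho = R^{2} = |\psi|^{2};
% signal locality is guaranteed only in that limit.
```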

Yakir Aharonov et al. say that by 1964 "the puzzle of indeterminism ... was safely marginalized" to the Gulag. ;-)

John Bell's locality inequality for quantum entanglement of 1964 changed all that. I had already gotten into a heated argument with Stanley Deser and Sylvan Schweber on this very issue back in 1961 at Brandeis University. I had independently seen the problem that Bell formalized a few years later, from reading David Inglis's Tau-Theta Puzzle paper in Rev. Mod. Phys. As a mere grad student I was shouted down by Deser and told to "learn how to calculate" - one of the reasons I quit my National Defense Fellowship and went to work for Tech/Ops at Mitre in Lexington, Mass., on Route 2, an Intelligence Community contractor, under Emil Wolf's student George Parrent Jr.

Optics InfoBase - Imaging of Extended Polychromatic Sources and Generalized Transfer Functions

George B. Parrent Jr., "Imaging of Extended Polychromatic Sources and Generalized Transfer Functions," J. Opt. Soc. Am. 51, 143-151 (1961).

www.opticsinfobase.org/abstract.cfm?uri=josa-51-2-143

In 1964 Aharonov and two colleagues (Peter Bergmann and Joel Lebowitz) announced that the result of a measurement at t not only influences the future, but also influences the past. Of course, Wheeler-Feynman knew that 25 years earlier. Did they precog Aharonov? ;-)

OK: we pre-select at t0, measure at t, and post-select at t1, with

t0 < t < t1

We then have a split into sub-ensembles that correspond to the procedures of scattering measurements described by the unitary S-Matrix.

The statistics of the present measurement at t are different for different jointly pre-selected (t0) and post-selected (t1) sub-ensembles, and different again from the total pre-selected ensemble obtained by summing over all the joint pre-post sub-ensembles.

Note we still have unitary S-Matrix signal locality here. It's not possible to decode a retrocausal message from t1 at t for example.

No spooky uncanny paranormal Jungian synchronicities, no Destiny Matrix is possible in this particular model.
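The pre/post-selected sub-ensemble statistics above follow the Aharonov-Bergmann-Lebowitz (ABL) rule. Here is a minimal single-qubit sketch in pure Python; the state vectors and function names are my own illustrative choices, not from any library:

```python
import math

# ABL (Aharonov-Bergmann-Lebowitz) rule for one qubit - a minimal sketch.
# States are real 2-component lists; all names here are illustrative.
s2 = 1.0 / math.sqrt(2.0)
z_plus = [1.0, 0.0]                      # pre-selected state at t0
x_plus, x_minus = [s2, s2], [s2, -s2]    # sigma_x eigenstates measured at t

def amp(bra, ket):
    """Inner product <bra|ket> for real 2-vectors."""
    return bra[0] * ket[0] + bra[1] * ket[1]

def abl_probs(pre, post, basis):
    """P(a | pre, post) proportional to |<post|a><a|pre>|^2 (ABL rule)."""
    w = [abs(amp(post, a) * amp(a, pre)) ** 2 for a in basis]
    total = sum(w)
    return [x / total for x in w]

# Pre-select |+z> at t0, measure sigma_x at t, post-select |+x> at t1:
probs = abl_probs(z_plus, x_plus, [x_plus, x_minus])
# The conditioned sub-ensemble gives +x with certainty, while the
# pre-selected ensemble alone would give +x only half the time.
```

The conditioned statistics differ from the unconditioned ones, yet nothing here lets the t1 choice be decoded at t, consistent with signal locality.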

Weak Measurements

We can partially beat Heisenberg's microscope with metamaterial tricks of negative refractive index; we can also beat it if we trade off precision against disturbance. Even relatively imprecise weak simultaneous measurements of non-commuting observables can still be sufficiently precise when N^1/2 << N for N qubits all in the same single-qubit state. "So at the cost of precision, one can limit disturbance." Indeed, one can get a weak measurement result far outside the orthodox quantum measurement eigenvalues:

S(45 degrees) ~ N/2^1/2

i.e. 2^1/2 times the largest orthodox eigenvalue, for N un-entangled qubits pre-selected along z at +1/2 and post-selected along x at +1/2, with error ~ N^1/2.

"It's all a game of errors."

"Sometimes the device's pointer ... can point, in error, to a range far outside the range of possible eigenvalues."

Larger errors than N^1/2 must occur, but with exponentially decreasing probability.

On the ultra-rare occasions when the post-selection finds Sx = N/2, the intermediate measurement reads N/2^1/2 +- N^1/2, which exceeds N/2.

This is not a random error.
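For a single qubit the anomalous weak value can be checked directly: with pre-selection |+z> and post-selection |+x>, the weak value of the spin component at 45 degrees is 1/2^1/2, i.e. 2^1/2 times the largest eigenvalue 1/2, matching the N-qubit claim above. A pure-Python sketch (state and operator names are my illustrative choices):

```python
import math

s2 = 1.0 / math.sqrt(2.0)
z_plus = [1.0, 0.0]       # pre-selected state
x_plus = [s2, s2]         # post-selected state

# Spin component at 45 degrees between z and x: S45 = (Sz + Sx)/sqrt(2),
# with Sz = diag(1/2, -1/2) and Sx off-diagonal 1/2; eigenvalues are +-1/2.
S45 = [[0.5 * s2, 0.5 * s2],
       [0.5 * s2, -0.5 * s2]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

# Weak value A_w = <post|A|pre> / <post|pre>  (Aharonov-Albert-Vaidman)
weak_value = dot(x_plus, matvec(S45, z_plus)) / dot(x_plus, z_plus)
# weak_value = 1/sqrt(2) ~ 0.707, outside the eigenvalue range [-1/2, +1/2]
```

For N such qubits the pointer reads N times this, i.e. N/2^1/2 > N/2.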

Superoscillation

The present measurement at t entangles the pointer device with the measured qubit. The future post-selection at t1 destroys that entanglement, leaving the pointer device in a superposition of its legitimate orthodox quantum eigenstates. The superoscillation coherence among the device's eigenstates boosts the pointer to the non-random error outside its orthodox eigenvalue spectrum, at N/2^1/2 > N/2.

Indeed, this beats the limits of Heisenberg's microscope http://www.aip.org/history/heisenberg/p08b.htm

Superposing waves with different wavelengths, one can construct features with details smaller than the smallest wavelength in the superposition. Example:

f(x) = [(1 + a)exp(i2pix/N)/2 + (1 - a)exp(-i2pix/N)/2]^N

a > 1 is a real number

Expand the binomial; then for |x| << N,

f(x) ~ exp(i2piax)

with an effective resolution 1/a < 1, finer than the smallest wavelength (namely 1) present in the superposition.
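A quick numerical check of this superoscillation (the parameter values a = 4 and N = 20 are my illustrative choices): f is band-limited to wavelengths >= 1, yet near x = 0 its phase advances a times faster, i.e. like a wave of wavelength 1/a.

```python
import cmath, math

def f(x, a=4.0, N=20):
    """Berry-type superoscillating function, band-limited to wavenumber 1."""
    up = 0.5 * (1 + a) * cmath.exp(2j * math.pi * x / N)
    down = 0.5 * (1 - a) * cmath.exp(-2j * math.pi * x / N)
    return (up + down) ** N

# Near x = 0 the local phase gradient is a times the fastest Fourier
# component: f(x) ~ exp(i 2 pi a x) for |x| << N.
x = 1e-3
local_frequency = cmath.phase(f(x)) / (2 * math.pi * x)   # ~ a = 4
```

The price, as always with superoscillations, is that |f| is exponentially small in the superoscillating region compared with its peaks elsewhere.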

so much for Heisenberg's uncertainty principle in a weak measurement?

to be continued in Part 2

Bear in mind that the ultimate post-selection for every measurement in our observable universe is at our total absorber future event horizon.

Magick without magic.

I announce the conjecture:

Nature is as weird as it can be.

The nonlocal action principle of maximal weirdness, e.g. consciousness.

Post-quantum theory has maximum weirdness (aka signal nonlocality) beyond the minimal weirdness of orthodox quantum theory.

On Nov 18, 2010, at 6:55 PM, JACK SARFATTI wrote:

v3- expanded to include part 3

Non-locality determines how well two distant parties can coordinate their actions without sending each other information. Physicists believe that even in quantum mechanics, information cannot travel faster than light. Nevertheless, it turns out that quantum mechanics allows two parties to coordinate much better than would be possible under the laws of classical physics. In fact, their actions can be coordinated in a way that almost seems as if they had been able to talk. Einstein famously referred to this phenomenon as "spooky action at a distance".

"Quantum theory is pretty weird, but it isn't as weird as it could be. We really have to ask ourselves, why is quantum mechanics this limited? Why doesn't nature allow even stronger non-locality?" Oppenheim says. However, quantum non-locality could be even spookier than it actually is. It's possible to have theories which allow distant parties to coordinate their actions much better than nature allows, while still not allowing information to travel faster than light. Nature could be weirder, and yet it isn't - quantum theory appears to impose an additional limit on the weirdness.

The surprising result by Wehner and Oppenheim is that the uncertainty principle provides an answer. Two parties can only coordinate their actions better if they break the uncertainty principle, which imposes a strict bound on how strong non-locality can be.

"It would be great if we could better coordinate our actions over long distances, as it would enable us to solve many information processing tasks very efficiently," Wehner says. "However, physics would be fundamentally different. If we break the uncertainty principle, there is really no telling what our world would look like."

http://www.eurekalert.org/pub_releases/2010-11/cfqt-rus111210.php

But it appears we can beat the usual Heisenberg uncertainty limit, which assumes no resolution better than the mean wavelength of the photon probe's wave packet, i.e. by using a super-oscillating weak-measurement Heisenberg microscope enhanced with a negative-refractive-index metamaterial superlens.
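The flat-slab focusing behind the superlens references below comes straight from Snell's law with a negative index; a one-function sketch (numbers illustrative):

```python
import math

def refraction_angle_deg(theta1_deg, n1=1.0, n2=-1.0):
    """Snell's law n1 sin(theta1) = n2 sin(theta2). Taking n2 < 0 models a
    negative-index metamaterial: the ray bends to the SAME side of the
    normal, so a flat slab can refocus a point source (Veselago lens)."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

# A ray incident at 30 degrees refracts to -30 degrees inside an n = -1 slab.
theta2 = refraction_angle_deg(30.0)
```

The sub-wavelength resolution of a superlens comes from the slab additionally amplifying evanescent waves, which this ray-optics sketch does not capture.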

Search Results

New Superlens is Made of Metamaterials - ingenious lens ten times as powerful as conventional ones (Apr 25, 2008).
news.softpedia.com/.../New-Superlens-is-Made-of-Metamaterials-84359.shtml

More on Metamaterials and Superlens over 5 times better than ... - PowerPoint tutorial by G. Shvets of the University of Texas at Austin on metamaterials and applying superlenses to laser plasma (Jun 28, 2006).
nextbigfuture.com/.../more-on-metamaterials-and-superlens.html

Metamaterials for magnifying superlenses | IOM3: The Global ... - advances in magnifying superlenses reported by two separate US research teams (Apr 30, 2007).
www.iom3.org/news/mega-magnifiers

[PDF] Photonic Meta Materials, Nano-scale Plasmonics and Super Lens - Xiang Zhang, Chancellor's Professor and Director, NSF Nano-scale Science and Engineering ...
boss.solutions4theweb.com/Zhang_talk_abs_with_pictures__1.pdf

Nano-Optics, Metamaterials, Nanolithography and Academia: 3D Metamaterials Nanolens - "The best superlens realized so far!" Published in Applied Physics Letters.
nanooptics.blogspot.com/2010/.../3d-metamaterials-nanolens-best.html

Magnifying Superlens based on Plasmonic Metamaterials - Igor I. Smolyaninov, Yu-Ju Hung, and Christopher C. Davis, Electrical and Computer Engineering (2008).
ieeexplore.ieee.org/iel5/4422221/4422222/04422540.pdf?arnumber...

Superlens from complementary anisotropic metamaterials - Journal of Applied Physics; metamaterials with a negative refractive index can be used to design a ...
link.aip.org/link/JAPIAU/v102/i11/p116101/s1

[PDF] Surface resonant states and superlensing in acoustic metamaterials - M. Ambati et al. (May 31, 2007); the concept of an acoustic superlens opens opportunities for ultrasonic imaging.
xlab.me.berkeley.edu/publications/pdfs/57.PRB2007_Murali.pdf

Superlens imaging theory for anisotropic nanostructured ...

by WT Lu - 2008 - Cited by 18 - Related articles

Superlens imaging theory for anisotropic nanostructured metamaterials with broadband all-angle negative refraction. WT Lu, S Sridhar ...

link.aps.org/doi/10.1103/PhysRevB.77.233101 - Similar

[0710.4933] Superlens imaging theory for anisotropic ...

by WT Lu - 2007 - Cited by 18 - Related articles

Oct 25, 2007 ... Title: Superlens imaging theory for anisotropic nanostructuredmetamaterials with broadband all-angle negative refraction ...

arxiv.org › cond-mat - Cached

On Nov 18, 2010, at 4:54 PM, JACK SARFATTI wrote:

The probabilistic nature of quantum events comes from integrating out all the future advanced Wheeler-Feynman retro-causal measurements. This is why past data and unitary retarded past-to-present dynamical evolution of David Bohm's quantum potential is not sufficient for unique prediction as in classical physics. Fred Hoyle knew this a long time ago. Fred Alan Wolf and I learned it from Hoyle's papers back in the late 60's at San Diego State and also from I. J. Good's book that popularized Hoyle's idea. So did Hoyle get it from Aharonov 1964 or directly from Wheeler-Feynman 1940 --> 47?

Note however, that in Bohm's theory knowing the pre-selected initial condition on the test particle trajectory does seem to obviate the necessity for an independent retro-causal post-selection in the limit of sub-quantal thermodynamic equilibrium with consequent signal locality, i.e. no remote viewing possible in this limit for dead matter. However, there may be a hidden retro-causal tacit assumption in Bohm's 1952 logic. Remember Feynman's action principle is nonlocal in time. One also must ultimately include back-reaction fluctuations of Bohm's quantum potential Q. The test particle approximation breaks down when the particle hidden variables are no longer in sub-quantal equilibrium caused by some external pumping of them like the excited atoms in a laser that is lasing above threshold, or like in H. Frohlich's toy model of a biological membrane of electric dipoles.

Yakir et-al says that by 1964 "the puzzle of indeterminism ... was safely marginalized" to the Gulag. ;-)

John Bell's locality inequality for quantum entanglement of 1964 changed all that. I had already gotten into a heated argument with Stanley Deser and Sylvan Schweber on this very issue back in 1961 at Brandeis University. I had independently seen the problem Bell had a few years later from reading David Inglis's Tau Theta Puzzle paper on Rev Mod Phys. As a mere grad student I was shouted down by Deser and told to "learn how to calculate" - one of the reasons I quit by National Defense Fellowship and went to work for Tech/Ops at Mitre in Lexington, Mass on Route 2 an Intelligence Community Contractor under Emil Wolfs student George Parrent Jr.

?

Optics InfoBase - Imaging of Extended Polychromatic Sources and ...

by GB PARRENT JR - 1961 - Cited by 2 - Related articles

GEORGE B. PARRENT JR., "Imaging of Extended Polychromatic Sources and Generalized Transfer Functions," J. Opt. Soc. Am. 51, 143-151 (1961) ...

www.opticsinfobase.org/abstract.cfm?uri=josa-51-2-143

In 1964 Aharonov and two colleagues (Peter Bergmann & Lebowitz) announce that the result of a measurement at t not only influences the future, but also influences the past. Of course, Wheeler-Feynman knew that 25 years earlier. Did they precog Aharonov? ;-)

OK we pre-select at t0, we measure at t and we post-select at t1

t0 < t < t1

We then have a split into sub-ensembles that correspond to the procedures of scattering measurements described by the unitary S-Matrix.

The statistics of the present measurements at t is different for different joint pre t0 and post t1 selected sub-ensembles, and different still from the total pre selected ensemble integral over all the joint pre-post sub-ensembles.

Note we still have unitary S-Matrix signal locality here. It's not possible to decode a retrocausal message from t1 at t for example.

No spooky uncanny paranormal Jungian synchronicities, no Destiny Matrix is possible in this particular model.

Weak Measurements

We can partially beat Heisenberg's microscope with metamaterial tricks of negative refractive index, we can also beat it if we tradeoff precision for disturbance. Even less precise weak simultaneous measurements of non-commuting observables can still be sufficiently precise when N^1/2 << N for N qubits all in the same single-qubit state. "So at the cost of precision, one can limit disturbance." Indeed, one can get a weak measurement far outside the orthodox quantum measurement eigenvalues, indeed

S(45 degrees) ~ N/2^1/2

i.e. 2^1/2 xlargest orthodox eigenvalue for N un-entangled qubits pre-selected for z along + 1/2 and post-selected along x at +1/2 with error ~ N^1/2.

"It's all a game of errors."

"Sometimes the device's pointer ... can point, in error, to a range far outside the range of possible eigenvalues."

Larger errors than N^1/2 must occur, but with exponentially decreasing probability.

When ultra-rarely the post selection measures Sx = N/2, the intermediate measurement is N/2^1/2 +- N^1/2 > N/2.

This is not a random error.

Superoscillation

The present measurement at t entangles the pointer device with the measured qubit. Future post-selection at t1 destroys that entanglement. The pointer device is then left in a superposition of its legitimate orthodox quantum eigenstates. The superoscillation coherence among the device's eigenstates boost it to the non-random error outside of its orthodox eigenvalue spectrum to N/2^1/2 > N/2.

Indeed, this beats the limits of Heisenberg's microscope http://www.aip.org/history/heisenberg/p08b.htm

Superposing waves with different wavelengths, one can construct features with details smaller than the smallest wavelength in the superposition. Example

f(x) = [(1 + a)exp(i2pix/N)/2 + (1 - a)exp(-i2pix/N)/2]^N

a > 1 is a real number

Expand the binomial, take the limit x ---> 0

f(x) ~ exp(i2piax)

with an effective resolution of 1/a << 1

so much for Heisenberg's uncertainty principle in a weak measurement?

to be continued in Part 2

Bear in mind that the ultimate post-selection for every measurement in our observable universe is at our total absorber future event horizon.

Of course gravity is ubiquitous, because classical-level gravity is the curvature of the four-dimensional (4D) spacetime continuum.

Gravity is not a force in the way electromagnetism is. Pure-gravity motion of test particles is along the force-free straightest paths, the "geodesics," of curved 4D spacetime. The curvature comes from very massive bodies, or even from the self-interaction of gravity itself, as in Wheeler's geons and in primordial black holes (pure vacuum solutions with matter source tensor Tuv = 0).

In contrast, the electromagnetic force pushes charged test particles off these straightest geodesic paths.

Yes, you can certainly try to think in terms of gravity vacuum current densities. That is one way of looking at Einstein's Guv tensor.

Simply replace "T" below by "G" and "mass" by "GeoMetroDynamic (GMD) field" in

http://en.wikipedia.org/wiki/Stress-energy_tensor

e.g.

G00 = local density of the GMD field.

G0i = flux of the GMD field across the xi surface, equivalent to the density of the ith component of the GMD field's linear momentum

etc.

The problem with all this, however, is that locally Guv = 0 in the vacuum.

Guv = Ruv - (1/2)Rguv

Each term on the RHS is separately zero in a classical vacuum.

However, when the quantum zero point fluctuations are retained we do get

Guv ~ Λguv =/= 0

Λ > 0 is dark energy: de Sitter (dS) anti-gravity, a repulsive stretching of spacetime above and beyond the cosmological expansion of the Hubble flow.

Λ < 0 is dark matter: anti-de Sitter (AdS) gravity, an attractive compression of spacetime, etc.

What Λ is locally is scale-dependent.

Virtual bosons make Λ > 0.

Virtual fermion-antifermion closed loops make Λ < 0.

The two compete against each other.

The Λ term is usually too small to detect.

if we can amplify it we have warp drive wormhole time travel super-technology.

Identifying the components of the tensor (Wiki)

In the following, i and k range from 1 through 3.

The time-time component T00 is the density of relativistic mass, i.e. the energy density divided by the speed of light squared.

The flux of relativistic mass across the xi surface, T0i, is equivalent to the density of the ith component of linear momentum.

The components Tik represent the flux of the ith component of momentum across the xk surface. In particular, Tii (not summed) represents normal stress, which is called pressure when it is independent of direction, whereas Tik with i =/= k represents shear stress (compare with the stress tensor).

Warning: In solid state physics and fluid mechanics, the stress tensor is defined to be the spatial components of the stress-energy tensor in the comoving frame of reference. In other words, the stress energy tensor in engineering differs from the stress energy tensor here by a momentum convective term.
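As a concrete instance of these component identifications, consider a perfect fluid in its comoving frame (units c = 1; the density and pressure values are illustrative):

```python
# Perfect-fluid stress-energy tensor T^uv = (rho + p) u^u u^v + p eta^uv
# in the comoving frame, metric signature (-+++), c = 1. Values illustrative.
rho, p = 2.5, 0.4                      # energy density and pressure
eta = [[-1.0, 0, 0, 0],
       [0, 1.0, 0, 0],
       [0, 0, 1.0, 0],
       [0, 0, 0, 1.0]]                 # Minkowski metric
u = [1.0, 0.0, 0.0, 0.0]               # 4-velocity of the comoving observer

T = [[(rho + p) * u[a] * u[b] + p * eta[a][b] for b in range(4)]
     for a in range(4)]
# T[0][0] = rho (energy density), T[i][i] = p (isotropic pressure),
# T[0][i] = 0 (no momentum flux), T[i][k] = 0 for i != k (no shear).
```

The off-diagonal space-space entries vanish because a perfect fluid by definition supports no shear stress.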

On Nov 18, 2010, at 11:32 AM, JACK SARFATTI wrote:

On Nov 18, 2010, at 11:06 AM, Fhuroy@aol.com wrote:

"Jack;

Isaac Newton thought his falling Apple was pulled to the ground; modern science speaks of attracting interactive forces between the Apple and the earth's center of gravity, a pull force."

Yes, that's historically correct, Roy, but that was more than three hundred years ago. We now know better: Einstein's theory has replaced Newton's.

Newton's equations and Einstein's are very similar under the ordinary conditions we encounter; however, the conceptual picture is very different. Paul Zielinski and Jonathan Post tried to explain this difference to you. I suggest you carefully read

http://www.amazon.com/Universe-Nutshell-Stephen-William-Hawking/dp/055380202X

and

http://www.amazon.com/Black-Holes-Time-Warps-Commonwealth/dp/0393312763

Your mind needs to make a quantum leap to Einstein's way of thinking. You are stuck in Newton's way of thinking, inside a smaller box that is very hard to break out of, even for some physicists and especially for engineers, because Newton's force picture works very well practically speaking - but it is an illusion.

At 10:29:00 P.M. Pacific Standard Time, 2010, jvospost3@gmail.com writes:

“Matter tells Spacetime how to curve, and Spacetime tells matter how to move.”

"I interpret this as reflecting the conjecture that Spacetime is not simply a mathematically convenient tool for calculating and graphing the effects of relativity; but that it also is the actual physical mechanism by which gravity operates. That is, gravity actually changes the physical geometry of local space and time." This explanation is close enough for me, Jack. If this is true, gravity is ubiquitous. If it tells matter how to move, then it is some kind of force, infinitely long, smooth and uniform. A metaphor for this would be a flowing current (a constant) curving around a small rock and pushing it along a river bottom. The same (universal equivalent) current would exert different curvature pressures (space-time equivalent) against bigger rocks. You see Jack, equations describe something. The Universe is not made of equations; they simply help us understand, profit from, and build upon what God has created.

Roy

"To conclude, we see that although the renormalization procedure has not evolved much these last thirty years, our interpretation of renormalization has drastically changed [10]: the renormalized theory was assumed to be fundamental, while it is now believed to be only an effective one; Λ was interpreted as an artificial parameter that was only useful in intermediate calculations, while we now believe that it corresponds to a fundamental scale where new physics occurs; non-renormalizable couplings were thought to be forbidden, while they are now interpreted as the remnants of interaction terms in a more fundamental theory. Renormalization group is now seen as an efficient tool to build effective low energy theories when large fluctuations occur between two very different scales that change qualitatively and quantitatively the physics.

We know now that the invisible hand that creates divergences in some theories is actually the existence in these theories of a no man's land in the energy (or length) scales for which cooperative phenomena can take place, more precisely, for which fluctuations can add up coherently [10]. In some cases, they can destabilize the physical picture we were relying on, and this manifests itself as divergences. Renormalization, and even more renormalization group, is the right way to deal with these fluctuations. ...

Let us draw our first conclusion. Infinities occur in the perturbation expansion of the theory because we have assumed that it was not regularized. Actually, these divergences have forced us to regularize the expansion and thus to introduce a new scale Λ. Once regularization has been performed, renormalization can be achieved by eliminating g0. The limit Λ → ∞ can then be taken. The process is recursive and can be performed only if the divergences possess, order by order, a very precise structure. This structure ultimately expresses that there is only one coupling constant to be renormalized. This means that imposing only one prescription at x = μ is enough to subtract the divergences for all x. In general, a theory is said to be renormalizable if all divergences can be recursively subtracted by imposing as many prescriptions as there are independent parameters in the theory. In QFT, these are masses, coupling constants, and the normalization of the fields. An important and non-trivial topic is thus to know which parameters are independent, because symmetries of the theory (like gauge symmetries) can relate different parameters (and Green functions).

Let us once again recall that renormalization is nothing but a reparametrization in terms of the physical quantity gR. The price to pay for renormalizing F is that g0 becomes infinite in the limit Λ → ∞, see Eq. (12). We again emphasize that if g0 is a non-measurable parameter, useful only in intermediate calculations, it is indeed of no consequence that this quantity is infinite in the limit Λ → ∞. That g0 was a divergent non-physical quantity has been common belief for decades in QFT. The physical results given by the renormalized quantities were thought to be calculable only in terms of unphysical quantities like g0 (called bare quantities) that the renormalization algorithm could only eliminate afterward. It was as if we had to make two mistakes that compensated each other: first introduce bare quantities in terms of which everything was infinite, and then eliminate them by adding other divergent quantities. Undoubtedly, the procedure worked, but, to say the least, the interpretation seemed rather obscure. ...
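The statement that g0 blows up as Λ → ∞ while the renormalized gR stays fixed can be made concrete with a toy log-divergent theory; the one-loop relation and all parameter values below are my illustrative assumptions, not taken from the quoted paper:

```python
import math

def bare_coupling(Lam, gR=0.3, mu=1.0, b=0.05):
    """Toy one-loop relation 1/g0 = 1/gR - b*ln(Lam/mu): the bare coupling
    g0 needed to hold the renormalized coupling gR fixed at scale mu."""
    return 1.0 / (1.0 / gR - b * math.log(Lam / mu))

# g0 grows monotonically with the cut-off and diverges at the Landau pole
# Lam = mu * exp(1/(b*gR)); the physics at scale mu stays fixed at gR = 0.3.
g_small, g_big = bare_coupling(1e3), bare_coupling(1e6)
```

Nothing measurable depends on g0 alone, which is why its divergence is harmless once Λ is interpreted as the scale of new physics.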

A very important class of field theories corresponds to the situation where g0 is dimensionless, and x, which in QFT represents coordinates or momenta, has dimensions (or more generally when g0 and x have independent dimensions). In four-dimensional space-time, quantum electrodynamics is in this class, because the fine structure constant is dimensionless; quantum chromodynamics and the Weinberg-Salam model of electro-weak interactions are also in this class. In four space dimensions the φ^4 model relevant for the Ginzburg-Landau-Wilson approach to critical phenomena is in this class too. This particular class of renormalizable theories is the cornerstone of renormalization in field theories. ...

Note that we have obtained logarithmic divergences because we have studied the renormalization of a dimensionless coupling constant. If g0 was dimensional, we would have obtained power law divergences. This is for instance what happens in QFT for the mass terms ...

there should exist an equivalence class of parametrizations of the same theory and that it should not matter in practice which element in the class is chosen. This independence of the physical quantity with respect to the choice of prescription point also means that the changes of parametrizations should be a (renormalization) group law ...

if we were performing exact calculations: we would gain no new physical information by implementing the renormalization group law. This is because this group law does not reflect a symmetry of the physics, but only of the parametrization of our solution. This situation is completely analogous to what happens for the solution of a differential equation: we can parametrize it at time t in terms of the initial conditions at time t0, for instance, or we can use the equation itself to calculate the solution at an intermediate time τ and then use this solution as a new initial condition to parametrize the solution at time t. The changes of initial conditions that preserve the final solution can be composed thanks to a group law. ...
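This composition property shows up numerically in the same kind of toy one-loop flow (assumed beta function, illustrative numbers): running from μ0 directly to μ1, or stopping at an intermediate scale and restarting from there, yields the same coupling.

```python
import math

def run(g_start, mu_start, mu_end, b=0.1):
    """Toy one-loop flow with beta(g) = b g^2, integrated exactly:
    1/g(mu_end) = 1/g_start - b ln(mu_end/mu_start)."""
    return g_start / (1.0 - b * g_start * math.log(mu_end / mu_start))

g0 = 0.5
direct = run(g0, 1.0, 100.0)        # parametrize at mu0 = 1, run to mu1 = 100
g_mid = run(g0, 1.0, 10.0)          # stop at an intermediate scale mu = 10
composed = run(g_mid, 10.0, 100.0)  # restart from the new "initial condition"
# direct == composed: changes of prescription point compose as a group law.
```

This mirrors the differential-equation analogy in the quoted text: the group law constrains parametrizations, not the physics itself.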

Our main goal in this section is to show that, independently of the underlying physical model, dimensional analysis together with the renormalizability constraint determine almost entirely the structure of the divergences. This underlying simplicity of the nature of the divergences explains why there is no combinatorial miracle of Feynman diagrams in QFT, as it might seem at first glance.

8) The cut-off Λ, first introduced as a mathematical trick to regularize integrals, actually has a deep physical meaning: it is the scale beyond which new physics occurs and below which the model we study is a good effective description of the physics. In general, it involves only the renormalizable couplings and thus cannot pretend to be an exact description of the physics at all scales. However, if Λ is very large compared with the energy scale in which we are interested, all non-renormalizable couplings are highly suppressed and the effective model, retaining only renormalizable couplings, is valid and accurate (the Wilson RG formalism is well suited to this study, see Refs. 25 and 26). In some models — the asymptotically free ones — it is possible to formally take the limit Λ → ∞ both perturbatively and non-perturbatively, and there is therefore no reason to invoke a more fundamental theory taking over at a finite (but large) Λ. ...

V. SUMMARY

(1) The long way of renormalization starts with a theory depending on only one parameter g0, which is the small parameter in which perturbation series are expanded. In particle physics, this parameter is in general a coupling constant like an electric charge involved in a Hamiltonian (more precisely the fine structure constant for electrodynamics). This parameter is also the first order contribution of a physical quantity F. In particle/statistical physics, F is a Green/correlation function. The first order of perturbation theory neglects fluctuations — quantum or statistical — and thus corresponds to the classical/mean field approximation. The parameter g0 also is to this order a measurable quantity because it is given by a Green function. Thus, it is natural to interpret it as the unique and physical coupling constant of the problem. If, as we suppose in the following, g0 is dimensionless, so is F. Moreover, if x is dimensional — it represents momenta in QFT — it is natural that F does not depend on it as is found in the classical theory, that is, at first order of the perturbation expansion.

(2) If F does depend on x, as we suppose it does at second order of perturbation theory, it must depend on another dimensional parameter, through the ratio of x and Λ. If we have not included this parameter from the beginning in the model, the x-dependent terms are either vanishing, which is what happens at first order, or infinite, as they are at second and higher orders. This is the very origin of divergences (from the technical point of view).

(3) These divergences require that we regularize F. This requirement, in turn, requires the introduction of the scale that was missing. In the context of field theory, the divergences occur in Feynman diagrams for high momenta, that is, at short distances. The cut-off suppresses the fluctuations at short distances compared with Λ^−1."

Note that with quantum gravity, tiny black holes form at short distances, giving a natural cut-off at the Planck length Lp, at least, where

delta(x) ~ h/delta(p) + Lp^2 delta(p)/h

The second term on the RHS is the correction added to the usual Heisenberg uncertainty principle.
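In Planck units (h = Lp = 1) this modified relation has a minimum position uncertainty of 2 Lp, reached at delta(p) equal to the Planck momentum; a quick numerical sketch:

```python
# Gravitationally modified uncertainty relation in Planck units (h = Lp = 1):
# delta(x) ~ 1/delta(p) + delta(p). Unlike the pure Heisenberg term, this
# cannot be made arbitrarily small: the gravity term grows at large delta(p).
def delta_x(dp):
    return 1.0 / dp + dp

# Scan a grid of momentum uncertainties; the minimum is 2 (i.e. 2 Lp) at dp = 1.
best = min(delta_x(0.01 * k) for k in range(1, 1000))
```

So the black-hole term turns the Heisenberg trade-off into an absolute shortest resolvable length of order Lp.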

"In statistical physics, this scale, although introduced for formal reasons, has a natural interpretation because the theories are always effective theories built at a given microscopic scale. It corresponds in general to the range of interaction of the constituents of the model, for example, a lattice spacing for spins, the average intermolecular distance for fluids. In particle physics, things are less simple. At least psychologically. It was indeed natural in the early days of quantum electrodynamics to think that this theory was fundamental, that is, not derived from a more fundamental theory. More precisely, it was believed that QED had to be mathematically internally consistent, even if in the real world new physics had to occur at higher energies. Thus, the regulator scale was introduced only as a trick to perform intermediate calculations. The limit Λ → ∞ was supposed to be the right way to eliminate this unwanted scale, which anyway seemed to have no interpretation. We shall see in the following that the community now interprets the renormalization process differently.

(4) Once the theory is regularized, F can be a nontrivial function of x. The price is that different values of x now correspond to different values of the coupling constant (defined as the values of F for these x). Actually, it no longer makes sense to speak of a coupling constant in itself. The only meaningful concept is the pair (μ, gR(μ)) of coupling constants at a given scale. The relevant question now is, "What are the physical reasons in particle/statistical physics that make the coupling constants depend on the scale while they are constants in the classical/mean field approximation?" As mentioned, for particle physics, the answer is the existence of new quantum fluctuations corresponding to the possibility of creating (and annihilating) particles at energies higher than mc^2.

What was scale independent in the classical theory becomes scale dependent in the quantum theory because, as the available energy increases, more and more particles can be created. The pairs of (virtual) particles surrounding an electron are polarized by its presence and thus screen its charge. As a consequence, the charge of an electron depends on the distance (or equivalently the energy) at which it is probed, at least for distances smaller than the Compton wavelength. Note that the energy scale mc^2 should not be confused with the cut-off scale Λ: mc^2 is the energy scale above which quantum fluctuations start to play a significant role, while Λ is the scale where they are cut off. Thus, although the Compton wavelength is a short-distance scale for the classical theory, it is a long-distance scale for QFT, the short one being Λ^−1.

There are thus three domains of length scales in QFT: above the Compton wavelength, where the theory behaves classically (up to small quantum corrections coming from high-energy virtual processes); between the Compton wavelength and the cut-off scale Λ^−1, where the relativistic and quantum fluctuations play a great role; and below Λ^−1, where a new, more fundamental theory has to be invoked.10

In statistical physics, the analogue of the Compton wavelength is the correlation length, which is a measure of the distance at which two microscopic constituents of the system are able to influence each other through thermal fluctuations.38 For the Ising model, for instance, the correlation length away from the critical point is of the order of the lattice spacing, and the corrections to the mean-field approximation due to fluctuations are small. Unlike particle physics, where the masses and therefore the Compton wavelengths are fixed, the correlation lengths in statistical mechanics can be tuned by varying the temperature. Near the critical temperature where the phase transition takes place, the correlation length becomes extremely large, and fluctuations on all length scales between the microscopic scale of order Λ^−1, a lattice spacing, and the correlation length add up to modify the mean-field behavior (see Refs. 21, 22 and also Ref. 23 for a bibliography on this subject). We see here a key to the relevance of renormalization: two very different scales must exist between which a non-trivial dynamics (quantum or statistical in our examples) can develop. This situation is a priori rather unnatural, as can be seen for phase transitions, where a fine tuning of temperature must be implemented to obtain correlation lengths much larger than the microscopic scale. Most of the time, physical systems have an intrinsic scale (of time, energy, length, etc.), and all the other relevant scales of the problem are of the same order. All phenomena occurring at very different scales are thus almost completely suppressed. The existence of a unique relevant scale is one of the reasons why renormalization is not necessary in most physical theories. In QFT it is mandatory because the masses of the known particles are much smaller than a hypothetical cut-off scale Λ, still to be discovered, where new physics should take place.
This is a rather unnatural situation, because, contrary to phase transitions, there is no analogue of a temperature that could be fine-tuned to create a large splitting of energy, that is, mass, scales. The question of naturalness of the models we have at present in particle physics is still largely open, although there has been much effort in this direction using supersymmetry.
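A concrete toy check of this temperature tuning (my own sketch, not from the paper): the 1D Ising chain is exactly solvable, its correlator is <s_i s_{i+r}> = tanh(J/kT)^r, so the correlation length in units of the lattice spacing is ξ = −1/ln tanh(J/kT), which grows without bound as the chain is cooled toward its T = 0 "critical point".

```python
import math

def ising_1d_xi(J_over_kT):
    """Correlation length (in lattice spacings) of the 1D Ising chain.

    The exact correlator is <s_i s_{i+r}> = tanh(J/kT)**r, hence
    xi = -1 / ln(tanh(J/kT))."""
    return -1.0 / math.log(math.tanh(J_over_kT))

# Cooling the chain (raising J/kT) stretches xi from about one lattice
# spacing to arbitrarily many: exactly the "two very different scales"
# between which non-trivial fluctuations can build up.
for J_over_kT in (0.5, 1.0, 2.0, 4.0):
    print(J_over_kT, ising_1d_xi(J_over_kT))
```

Unlike in particle physics, here the separation of scales is dialed in by hand through the temperature, which is the contrast the paragraph above draws.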

(5) The classical theory is valid down to the Compton/correlation length, but cannot be continued naively beyond this scale; otherwise, when mixed with the quantum formalism, it produces divergences. Actually, it is known in QFT that the fields should be considered as distributions and not as ordinary functions. The need for considering distributions comes from the non-trivial structure of the theory at very short length scales, where fluctuations are very important. At short distances, functions are not sufficient to describe the field state, which is not smooth but rough, and distributions are necessary. Renormalizing the theory actually consists in building, order by order, the correct “distributional continuation” of the classical theory. The fluctuations are then correctly taken into account and depend on the scale at which the theory is probed: this non-trivial scale dependence can only be taken into account theoretically through the dependence of the (analogue of the) function F on x and thus of the coupling on the scale μ.

(6) If the theory is perturbatively renormalizable, the pairs (μ, g(μ)) form an equivalence class of parametrizations of the theory. The change of parametrization from (μ, g(μ)) to (μ′, g(μ′)), called a renormalization group transformation, is then performed by a law which is self-similar, that is, such that it can be iterated several times while being form-invariant.19,20 This law is obtained by the integration of

... In particle physics, the β-function gives the evolution of the strength of the interaction as the energy at which it is probed varies, and the integration of the β-function partially resums the perturbation expansion. First, as the energy increases, the coupling constant can decrease and eventually vanish. This is what happens when α > 0 in Eqs. (65) and (66). In this case, the particles almost cease to interact at very high energies or, equivalently, when they are very close to each other. The theory is then said to be asymptotically free in the ultraviolet domain.3,5 Reciprocally, at low energies the coupling increases and perturbation theory can no longer be trusted. A possible scenario is that bound states are created at a sufficiently low energy scale, so that the perturbation approach has to be reconsidered in this domain to take into account these new elementary excitations. Non-abelian gauge theories are the only known theories in four spacetime dimensions that are ultraviolet free, and it is widely believed that quantum chromodynamics — which is such a theory — explains quark confinement. The other important behavior of the scale dependence of the coupling constant is obtained for α < 0, in which case it increases at high energies. This corresponds, for instance, to quantum electrodynamics. For this kind of theory, the dramatic increase of the coupling at high energies is supposed to be a signal that the theory ceases to be valid beyond a certain energy range and that new physics, governed by an asymptotically free theory (like the standard model of electro-weak interactions), has to take place at short distances.
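This dichotomy is easy to see numerically in a one-loop toy model (my own sketch; the form β(g) = −α g^2 is an assumed caricature of the paper's Eqs. (65) and (66), not taken from them): integrating μ dg/dμ = −α g^2 gives 1/g(μ) = 1/g(μ0) + α ln(μ/μ0), so the coupling shrinks at high energy for α > 0 and grows for α < 0.

```python
import math

def run_coupling(g0, mu0, mu, alpha):
    """One-loop toy flow mu*dg/dmu = -alpha*g**2, integrated exactly:
    1/g(mu) = 1/g(mu0) + alpha*ln(mu/mu0)."""
    inv = 1.0 / g0 + alpha * math.log(mu / mu0)
    if inv <= 0.0:
        raise ValueError("coupling diverges before this scale (Landau-pole-like growth)")
    return 1.0 / inv

# alpha > 0: asymptotically free -- the interaction dies away in the UV.
g_uv = run_coupling(g0=1.0, mu0=1.0, mu=1e6, alpha=0.1)
# alpha < 0: QED-like -- the coupling grows with energy.
g_qed_like = run_coupling(g0=0.1, mu0=1.0, mu=1e6, alpha=-0.005)
```

Running downward instead (mu < mu0) reverses both behaviors, matching the growth of the coupling at low energies described above for asymptotically free theories.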

(7) Renormalizability, or its non-perturbative equivalent, self-similarity, ensures that although the theory is initially formulated at the scale μ, this scale together with g0 can be entirely eliminated for another scale better adapted to the physics we study. If the theory were solved exactly, it would make no difference which parametrization we used. However, in perturbation theory, this renormalization lets us avoid calculating small numbers as differences of very large ones. It would indeed be very unpleasant, and actually meaningless, to calculate energies of order 100 GeV, for instance — the scale μ of our analysis — in terms of energies of order of the Planck scale ∼ 10^19 GeV, the analogue of the scale Λ. In a renormalizable theory, the possibility to perturbatively eliminate the large scale has a very deep meaning: it is the signature that the physics is short-distance insensitive, or equivalently that there is a decoupling of the physics at different scales. The only memory of the short-distance scale lies in the initial conditions of the renormalization group flow, not in the flow itself: the β-function does not depend on Λ. We again emphasize that, usually, the decoupling of the physics at very different scales is trivially related to the existence of a typical scale such that the influence of all phenomena occurring at different scales is almost completely suppressed. Here, the decoupling is much more subtle because there is no typical length in the whole domain of length scales that are very small compared with the Compton wavelength and very large compared with Λ^−1. Because interactions among particles correspond to non-linearities in the theories, we could naively believe that all scales interact with each other — which is true — so that calculating, for instance, the low-energy behavior of the theory would require the detailed calculation of all interactions occurring at higher energies. Needless to say, in a field theory involving infinitely many degrees of freedom — the value of the field at each point — such a calculation would be hopeless, apart from exactly solvable models.
Fortunately, such a calculation is not necessary for physical quantities that can be calculated from renormalizable couplings only. Starting at very high energies, typically Λ, where all coupling constants are naturally of order 1, the renormalization group flow drives almost all of them to zero, leaving only, at low energies, the renormalizable couplings. This is the interpretation of non-renormalizable couplings. They are not terrible monsters that should be forgotten, as was believed in the early days of QFT. They are simply couplings that the RG flow eliminates at low energies. If we are lucky, the renormalizable couplings become rather small after their RG evolution between Λ and the scale μ at which we work, and perturbation theory is valid at this scale. We see here the phenomenon of universality: among the infinitely many coupling constants that are a priori necessary to encode the dynamics of the infinitely many degrees of freedom of the theory, only a few are finally relevant.25 All the others are washed out at large distances. This is the reason why, perturbatively, it is not possible to keep these couplings finite at large distance, and it is necessary to set them to zero.39 The simplest non-trivial example of universality is given by the law of large numbers (the central limit theorem), which is crucial in statistical mechanics.21 In systems where it can be applied, all the details of the underlying probability distribution of the constituents of the system are irrelevant for the cooperative phenomena, which are governed by a Gaussian probability distribution.24 This drastic reduction of complexity is precisely what is necessary for physics because it lets us build effective theories in which only a few couplings are kept.10 Renormalizability in statistical field theory is one of the non-trivial generalizations of the central limit theorem.
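The central-limit-theorem analogy can be verified in a few lines (an illustrative sketch of universality, not code from the paper): sums of many independent microscopic variables forget everything about the underlying distribution except its mean and variance, the "relevant couplings" of this toy problem.

```python
import random
import statistics

def standardized_sums(sampler, n=400, trials=4000, seed=1):
    """Standardized sums of n iid draws from an arbitrary distribution."""
    rng = random.Random(seed)
    sums = [sum(sampler(rng) for _ in range(n)) for _ in range(trials)]
    mu = statistics.mean(sums)
    sd = statistics.stdev(sums)
    return [(s - mu) / sd for s in sums]

# Two very different microscopic distributions...
z_uniform = standardized_sums(lambda r: r.random())                      # uniform on [0,1)
z_coin = standardized_sums(lambda r: 1.0 if r.random() < 0.5 else -1.0)  # +/-1 coin flips

# ...flow to the same macroscopic behavior: for a standard normal,
# P(|z| < 1) is about 0.683, and both samples reproduce it.
frac_uniform = sum(abs(z) < 1 for z in z_uniform) / len(z_uniform)
frac_coin = sum(abs(z) < 1 for z in z_coin) / len(z_coin)
```

All the microscopic detail is "washed out", leaving a Gaussian fixed point — the statistical-mechanics face of universality described above.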

(8) The cut-off Λ, first introduced as a mathematical trick to regularize integrals, has actually a deep physical meaning: it is the scale beyond which new physics occurs and below which the model we study is a good effective description of the physics. In general, it involves only the renormalizable couplings and thus cannot pretend to be an exact description of the physics at all scales. However, if Λ is very large compared with the energy scale in which we are interested, all non-renormalizable couplings are highly suppressed and the effective model, retaining only renormalizable couplings, is valid and accurate (the Wilson RG formalism is well suited to this study, see Refs. 25 and 26). In some models — the asymptotically free ones — it is possible to formally take the limit Λ → ∞ both perturbatively and non-perturbatively, and there is therefore no reason to invoke a more fundamental theory taking over at a finite (but large) Λ. Let us emphasize here several interesting points.

(i) For a theory corresponding to the pair (μ, gR(μ)), the limit Λ → ∞ must be taken within the equivalence class of parametrizations to which (μ, gR(μ)) belongs.40 A divergent non-regularized perturbation expansion consists in taking Λ = ∞ while keeping g0 finite. From this viewpoint, the origin of the divergences is that the pair (Λ = ∞, g0) does not belong to any equivalence class of a sensible theory. Perturbative renormalization consists in computing g0 as a formal power series in gR (at finite Λ), so that (μ0, g0) corresponds to a mathematically consistent theory; we then take the limit Λ → ∞.

(ii) Because of universality, it is physically impossible to know from low-energy data whether Λ is very large or truly infinite.

(iii) Although mathematically consistent, it seems unnatural to reverse the RG process while keeping only the renormalizable couplings, and thus to imagine that even at asymptotically high energies, Nature has used only the couplings that we are able to detect at low energies. It seems more natural that a fundamental theory does not suffer from renormalization problems. String theory is a possible candidate.27

To conclude, we see that although the renormalization procedure has not evolved much these last thirty years, our interpretation of renormalization has drastically changed10: the renormalized theory was assumed to be fundamental, while it is now believed to be only an effective one; Λ was interpreted as an artificial parameter that was only useful in intermediate calculations, while we now believe that it corresponds to a fundamental scale where new physics occurs; non-renormalizable couplings were thought to be forbidden, while they are now interpreted as the remnants of interaction terms in a more fundamental theory. The renormalization group is now seen as an efficient tool to build effective low-energy theories when large fluctuations occur between two very different scales that change qualitatively and quantitatively the physics."

complete paper http://arxiv.org/pdf/hep-th/0212049v3

e.g., the spin-1 tetrad vector e^I theory of gravity is to Einstein's spin-2 metric tensor g_uv theory of gravity as the SU(2) gauge theory of the weak force is to Fermi's four-fermion contact model, which was not renormalizable.

We know now that the invisible hand that creates divergences in some theories is actually the existence in these theories of a no man’s land in the energy (or length) scales for which cooperative phenomena can take place, more precisely, for which fluctuations can add up coherently.10 In some cases, they can destabilize the physical picture we were relying on and this manifests itself as divergences. Renormalization, and even more renormalization group, is the right way to deal with these fluctuations. ...

Let us draw our first conclusion. Infinities occur in the perturbation expansion of the theory because we have assumed that it was not regularized. Actually, these divergences have forced us to regularize the expansion and thus to introduce a new scale Λ. Once regularization has been performed, renormalization can be achieved by eliminating g0. The limit Λ → ∞ can then be taken. The process is recursive and can be performed only if the divergences possess, order by order, a very precise structure. This structure ultimately expresses that there is only one coupling constant to be renormalized. This means that imposing only one prescription at x = μ is enough to subtract the divergences for all x. In general, a theory is said to be renormalizable if all divergences can be recursively subtracted by imposing as many prescriptions as there are independent parameters in the theory. In QFT, these are masses, coupling constants, and the normalization of the fields. An important and non-trivial topic is thus to know which parameters are independent, because symmetries of the theory (like gauge symmetries) can relate different parameters (and Green functions).
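The "one prescription fixes all x" mechanism can be made explicit in a toy model (my own schematic amplitude, assumed purely for illustration, with the resummed geometric-series form F(x) = g0/(1 − g0 ln(Λ/x))): imposing the single condition F(μ) = gR and eliminating g0 gives 1/F(x) = 1/gR + ln(x/μ), from which Λ has dropped out entirely, for every x at once.

```python
import math

def F_bare(x, g0, cutoff):
    """Toy regularized amplitude; divergent as cutoff -> infinity at fixed g0."""
    return g0 / (1.0 - g0 * math.log(cutoff / x))

def g0_from_prescription(gR, mu, cutoff):
    """Solve the single renormalization prescription F_bare(mu) = gR for g0."""
    return 1.0 / (1.0 / gR + math.log(cutoff / mu))

gR, mu, x = 0.1, 1.0, 0.25
# Same physical input (gR at scale mu), two wildly different cutoffs:
F_small_cutoff = F_bare(x, g0_from_prescription(gR, mu, 1e4), 1e4)
F_large_cutoff = F_bare(x, g0_from_prescription(gR, mu, 1e12), 1e12)
# Both equal 1/(1/gR + ln(x/mu)): the cutoff has been traded for (mu, gR).
```

The bare g0 goes to zero as the cutoff grows, yet the physical prediction at every x stays put, which is the reparametrization picture described in the quoted passage.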

Let us once again recall that renormalization is nothing but a reparametrization in terms of the physical quantity gR. The price to pay for renormalizing F is that g0 becomes infinite in the limit Λ → ∞, see Eq. (12). We again emphasize that if g0 is a non-measurable parameter, useful only in intermediate calculations, it is indeed of no consequence that this quantity is infinite in the limit Λ → ∞. That g0 was a divergent non-physical quantity has been common belief for decades in QFT. The physical results given by the renormalized quantities were thought to be calculable only in terms of unphysical quantities like g0 (called bare quantities) that the renormalization algorithm could only eliminate afterward. It was as if we had to make two mistakes that compensated each other: first introduce bare quantities in terms of which everything was infinite, and then eliminate them by adding other divergent quantities. Undoubtedly, the procedure worked, but, to say the least, the interpretation seemed rather obscure. ...

A very important class of field theories corresponds to the situation where g0 is dimensionless, and x, which in QFT represents coordinates or momenta, has dimensions (or more generally when g0 and x have independent dimensions). In four-dimensional space-time, quantum electrodynamics is in this class, because the fine structure constant is dimensionless; quantum chromodynamics and the Weinberg-Salam model of electro-weak interactions are also in this class. In four space dimensions the φ^4 model relevant for the Ginzburg-Landau-Wilson approach to critical phenomena is in this class too. This particular class of renormalizable theories is the cornerstone of renormalization in field theories. ...

Note that we have obtained logarithmic divergences because we have studied the renormalization of a dimensionless coupling constant. If g0 was dimensional, we would have obtained power law divergences. This is for instance what happens in QFT for the mass terms ...
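The two behaviors can be displayed side by side on toy one-loop integrals (a sketch of my own, not specific diagrams from the paper): an integrand like k/(k^2 + m^2) diverges only logarithmically with the cut-off, while k^3/(k^2 + m^2), the kind of term a dimensionful mass coupling produces, diverges like Λ^2.

```python
import math

def log_type(cutoff, m=1.0):
    """Closed form of the integral of k/(k**2+m**2) from 0 to cutoff:
    0.5*ln(1 + cutoff**2/m**2) -- logarithmic divergence."""
    return 0.5 * math.log(1.0 + cutoff**2 / m**2)

def power_type(cutoff, m=1.0):
    """Closed form of the integral of k**3/(k**2+m**2) from 0 to cutoff:
    cutoff**2/2 - (m**2/2)*ln(1 + cutoff**2/m**2) -- quadratic divergence."""
    return 0.5 * cutoff**2 - 0.5 * m**2 * math.log(1.0 + cutoff**2 / m**2)

# Doubling the cutoff adds only ~ln 2 to the logarithmic integral but
# roughly quadruples the power-law one.
print(log_type(2e3) - log_type(1e3))      # close to ln 2 ≈ 0.693
print(power_type(2e3) / power_type(1e3))  # close to 4
```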

there should exist an equivalence class of parametrizations of the same theory and that it should not matter in practice which element in the class is chosen. This independence of the physical quantity with respect to the choice of prescription point also means that the changes of parametrizations should be a (renormalization) group law ...

if we were performing exact calculations: we would gain no new physical information by implementing the renormalization group law. This is because this group law does not reflect a symmetry of the physics, but only of the parametrization of our solution. This situation is completely analogous to what happens for the solution of a differential equation: we can parametrize it at time t in terms of the initial conditions at time t0, for instance, or we can use the equation itself to calculate the solution at an intermediate time τ and then use this solution as a new initial condition to parametrize the solution at time t. The changes of initial conditions that preserve the final solution can be composed thanks to a group law. ...
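The analogy can be checked directly (an illustrative sketch with dy/dt = −y², chosen only because it integrates in closed form): evolving from t0 to t in one step, or stopping at an intermediate time τ and restarting from there, parametrize the same solution.

```python
def flow(y0, t0, t):
    """Exact solution of dy/dt = -y**2 with initial condition y(t0) = y0."""
    return y0 / (1.0 + y0 * (t - t0))

y0, t0, tau, t = 2.0, 0.0, 1.5, 5.0
direct = flow(y0, t0, t)                    # parametrize by data at t0
via_tau = flow(flow(y0, t0, tau), tau, t)   # re-parametrize at tau first
# direct == via_tau: changing the reference time is pure bookkeeping,
# just as moving the prescription point mu is in the renormalization group.
```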

Our main goal in this section is to show that, independently of the underlying physical model, dimensional analysis together with the renormalizability constraint determines almost entirely the structure of the divergences. This underlying simplicity of the nature of the divergences explains why there is no combinatorial miracle in the Feynman diagrams of QFT, as there might seem to be at first glance.


V. SUMMARY

(1) The long way of renormalization starts with a theory depending on only one parameter g0, which is the small parameter in which perturbation series are expanded. In particle physics, this parameter is in general a coupling constant like an electric charge involved in a Hamiltonian (more precisely, the fine structure constant for electrodynamics). This parameter is also the first-order contribution of a physical quantity F. In particle/statistical physics, F is a Green/correlation function. The first order of perturbation theory neglects fluctuations — quantum or statistical — and thus corresponds to the classical/mean field approximation. The parameter g0 is also, to this order, a measurable quantity because it is given by a Green function. Thus, it is natural to interpret it as the unique and physical coupling constant of the problem. If, as we suppose in the following, g0 is dimensionless, so is F. Moreover, if x is dimensional — it represents momenta in QFT — it is natural that F does not depend on it, as is found in the classical theory, that is, at first order of the perturbation expansion.

(2) If F does depend on x, as we suppose it does at second order of perturbation theory, it must depend on another dimensional parameter, through the ratio of x and Λ. If we have not included this parameter from the beginning in the model, the x-dependent terms are either vanishing, which is what happens at first order, or infinite, as they are at second and higher orders. This is the very origin of divergences (from the technical point of view).

(3) These divergences require that we regularize F. This requirement, in turn, requires the introduction of the scale that was missing. In the context of field theory, the divergences occur in Feynman diagrams for high momenta, that is, at short distances. The cut-off suppresses the fluctuations at short distances compared with Λ^−1."

Note that with quantum gravity, tiny black holes form at short distances, giving a natural cut-off at the Planck length Lp (at least), where

Δx ~ ħ/Δp + Lp^2 Δp/ħ

The second term on the RHS is the correction added to the Heisenberg uncertainty principle.
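A quick check of what this modified relation implies (my own elementary minimization of the formula above): in units where ħ = Lp = 1, Δx(Δp) = 1/Δp + Δp has its minimum Δx = 2 at Δp = 1, i.e. Δx_min = 2 Lp at Δp = ħ/Lp, so no probe resolves distances below the Planck scale.

```python
# Units with hbar = Lp = 1, so the modified relation reads delta_x(p) = 1/p + p.
def delta_x(p):
    """Position uncertainty as a function of momentum uncertainty p."""
    return 1.0 / p + p

# Scan momentum uncertainties over six decades: the gravity term stops
# delta_x from ever dropping below 2 (i.e. 2*Lp in physical units).
best = min(delta_x(1e-3 * 1.01**k) for k in range(1400))
# best is ~2.0, attained near p = 1 (delta_p = hbar/Lp)
```

Squeezing Δp further past ħ/Lp makes Δx grow again: exactly the "natural cut-off" behavior invoked above.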

"In statistical physics, this scale, although introduced for formal reasons, has a natural interpretation because the theories are always effective theories built at a given microscopic scale. It corresponds in general to the range of interaction of the constituents of the model, for example, a lattice spacing for spins, the average intermolecular distance for fluids. In particle physics, things are less simple. At least psychologically. It was indeed natural in the early days of quantum electrodynamics to think that this theory was fundamental, that is, not derived from a more fundamental theory. More precisely, it was believed that QED had to be mathematically internally consistent, even if in the real world new physics had to occur at higher energies. Thus, the regulator scale was introduced only as a trick to perform intermediate calculations. The limit /\ → ∞ was supposed to be the right way to eliminate this unwanted scale, which anyway seemed to have no interpretation. We shall see in the following that the community now interprets the renormalization process differently (4) Once the theory is regularized, F can be a nontrivial function of x. The price is that different values of x now correspond to different values of the coupling constant (defined as the values of F for these x). Actually, it does no longer make sense to speak of a coupling constant in itself. The only meaningful concept is the pair (μ, gR(μ)) of coupling constants at a given scale. The relevant question now is, “What are the physical reasons in particle/statistical physics that make the coupling constants depend on the scale while they are constants in the classical/mean field approximation?” As mentioned, for particle physics, the answer is the existence of new quantum fluctuations corresponding to the possibility of creating (and annihilating) particles at energies higher than mc^2. 
What was scale independent in the classical theory becomes scale dependent in the quantum theory because, as the available energy increases, more and more particles can be created. The pairs of (virtual) particles surrounding an electron are polarized by its presence and thus screen its charge. As a consequence, the charge of an electron depends on the distance (or equivalently the energy) at which it is probed, at least for distances smaller than the Compton wavelength. Note that the energy scale mc^2 should not be confused with the cut-off scale . mc^2 is the energy scale above which quantum fluctuations start to play a significant role while /\ is the scale where they are cut-off. Thus, although the Compton wave length is a short distance scale for the classical theory, it is a long distance scale for QFT, the short one being /\^−1.

There are thus three domains of length scales in QFT: above the Compton wave length where the theory behaves classically (up to small quantum corrections coming from high energy virtual processes), between the Compton wave length and the cut-off scale /\^−1 where the relativistic and quantum fluctuations play a great role, and below /\^−1 where a new, more fundamental theory has to be invoked.10

In statistical physics, the analogue of the Compton wave length is the correlation length which is a measure of the distance at which two microscopic constituents of the system are able to influence each other through thermal fluctuations.38 For the Ising model for instance, the correlation length away from the critical point is the order of the lattice spacing and the corrections to the mean-field approximation due to fluctuations are small. Unlike particle physics where the masses and therefore the Compton wavelengths are fixed, the correlation lengths in statistical mechanics can be tuned by varying the temperature. Near the critical temperature where the phase transition takes place, the correlation length becomes extremely large and fluctuations on all length scales between the microscopic scale of order /\^−1, a lattice spacing, and the correlation length add up to modify the mean-field behavior (see Refs. 21, 22 and also Ref. 23 for a bibliography in this subject). We see here a key to the relevance of renormalization: two very different scales must exist between which a non-trivial dynamics (quantum or statistical in our examples) can develop. This situation is a priori rather unnatural as can be seen for phase transitions, where a fine tuning of temperature must be implemented to obtain correlation lengths much larger than the microscopic scale. Most of the time, physical systems have an intrinsic scale (of time, energy, length, etc) and all the other relevant scales of the problem are of the same order. All phenomena occurring at very different scales are thus almost completely suppressed. The existence of a unique relevant scale is one of the reasons why renormalization is not necessary in most physical theories. In QFT it is mandatory because the masses of the known particles are much smaller than a hypothetical cut-off scale , still to be discovered, where new physics should take place. 
This is a rather unnatural situation, because, contrary to phase transitions, there is no analogue of a temperature that could be fine-tuned to create a large splitting of energy, that is, mass, scales. The question of naturalness of the models we have at present in particle physics is still largely open, although there has been much effort in this direction using supersymmetry.

(5) The classical theory is valid down to the Compton/correlation length, but cannot be continued naively beyond this scale; otherwise, when mixed with the quantum formalism, it produces divergences. Actually, it is known in QFT that the fields should be considered as distributions and not as ordinary functions. The need for considering distributions comes from the non-trivial structure of the theory at very short length scale where fluctuations are very important. At short distances, functions are not sufficient to describe the field state, which is not smooth but rough, and distributions are necessary. Renormalizing the theory consists actually in building, order by order, the correct “distributional continuation” of the classical theory. The fluctuations are then correctly taken into account and depend on the scale at which the theory is probed: this non-trivial scale dependence can only be taken into account theoretically through the dependence of the (analogue of the) function F with x and thus of the coupling with the scale μ.

(6) If the theory is perturbatively renormalizable, the pairs (μ, g(μ)) form an equivalence class of parametrizations of the theory. The change of parametrization from (μ, g(μ)) to (μ′, g(μ′)), called a renormalization group transformation, is then performed by a law which is self-similar, that is, such that it can be iterated several times while being form-invariant.19,20 This law is obtained by the integration of

... In particle physics, the β-function gives the evolution of the strength of the interaction as the energy at which it is probed varies and the integration of the β-function resums partially the perturbation expansion. First, as the energy increases, the coupling constant can decrease and eventually vanish. This is what happens when α > 0 in Eqs. (65) and (66). In this case, the particles almost cease to interact at very high energies or equivalently when they are very close to each other. The theory is then said to be asymptotically free in the ultraviolet domain.3,5 Reciprocally, at low energies the coupling increases and perturbation theory can no longer be trusted. A possible scenario is that bound states are created at a sufficiently low energy scale so that the perturbation approach has to be reconsidered in this domain to take into account these new elementary excitations. Non-abelian gauge theories are the only known theories in four spacetime dimensions that are ultraviolet free, and it is widely believed that quantum chromodynamics — which is such a theory — explains quark confinement. The other important behavior of the scale dependence of the coupling constant is obtained for α < 0 in which case it increases at high energies. This corresponds for instance to quantum

electrodynamics. For this kind of theory, the dramatic increase of the coupling at high energies is supposed to be a signal that the theory ceases to be valid beyond a certain energy range and that new physics, governed by an asymptotically free theory (like the standard model of electro-weak interactions) has to take place at short distances.

(7) Renormalizability, or its non-perturbative equivalent, self-similarity, ensures that although the theory is initially formulated at the scale μ, this scale together

with g0 can be entirely eliminated for another scale better adapted to the physics we study. If the theory was solved exactly, it would make no difference which parametrization we used. However, in perturbation theory, this renormalization lets us avoid calculating small numbers as differences of very large ones. It would indeed be very unpleasant, and actually meaningless, to calculate energies of order 100GeV, for instance — the scale μ of our analysis — in terms of energies of order of the Planck scale ? 10^19 GeV, the analogue of the scale /\ . In a renormalizable theory, the possibility to perturbatively eliminate the large scale has a very deep meaning: it is the signature that the physics is short distance insensitive or equivalently that there is a decoupling of the physics at different scales. The only memory of the short distance scale lies in the initial conditions of the renormalization group flow, not in the flow itself: the β-function does not depend on /\. We again emphasize that, usually, the decoupling of the physics at very different scales is trivially related to the existence of a typical scale such that the influence of all phenomena occurring at different scales is almost completely suppressed. Here, the decoupling is much more subtle because there is no typical length in the whole domain of length scales that are very small compared with the Compton wave length and very large compared with /\^−1. Because interactions among particles correspond to non-linearities in the theories, we could naively believe that all scales interact with each others — which is true — so that calculating, for instance, the low energy behavior of the theory would require the detailed calculation of all interactions occurring at higher energies. Needless to say that in a field theory, involving infinitely many degrees of freedom — the value of the field at each point — such a calculation would be hopeless, apart from exactly solvable models. 
Fortunately, such a calculation is not necessary for physical quantities that can be calculated from renormalizable couplings only. Starting at very high energies, typically Λ, where all coupling constants are naturally of order 1, the renormalization group flow drives almost all of them to zero, leaving only, at low energies, the renormalizable couplings. This is the interpretation of non-renormalizable couplings. They are not terrible monsters that should be forgotten as was believed in the early days of QFT. They are simply couplings that the RG flow eliminates at low energies. If we are lucky, the renormalizable couplings become rather small after their RG evolution between Λ and the scale μ at which we work, and perturbation theory is valid at this scale. We see here the phenomenon of universality: among the infinitely many coupling constants that are a priori necessary to encode the dynamics of the infinitely many degrees of freedom of the theory, only a few are finally relevant [25]. All the others are washed out at large distances. This is the reason why, perturbatively, it is not possible to keep these couplings finite at large distance, and it is necessary to set them to zero [39]. The simplest non-trivial example of universality is given by the law of large numbers (the central limit theorem) which is crucial in statistical mechanics [21]. In systems where it can be applied, all the details of the underlying probability distribution of the constituents of the system are irrelevant for the cooperative phenomena, which are governed by a gaussian probability distribution [24]. This drastic reduction of complexity is precisely what is necessary for physics because it lets us build effective theories in which only a few couplings are kept [10]. Renormalizability in statistical field theory is one of the non-trivial generalizations of the central limit theorem.

(8) The cut-off Λ, first introduced as a mathematical trick to regularize integrals, has actually a deep physical meaning: it is the scale beyond which new physics occurs and below which the model we study is a good effective description of the physics. In general, it involves only the renormalizable couplings and thus cannot pretend to be an exact description of the physics at all scales. However, if Λ is very large compared with the energy scale in which we are interested, all non-renormalizable couplings are highly suppressed and the effective model, retaining only renormalizable couplings, is valid and accurate (the Wilson RG formalism is well suited to this study, see Refs. 25 and 26). In some models — the asymptotically free ones — it is possible to formally take the limit Λ → ∞ both perturbatively and non-perturbatively, and there is therefore no reason to invoke a more fundamental theory taking over at a finite (but large) Λ. Let us emphasize here several interesting points.

(i) For a theory corresponding to the pair (μ, gR(μ)), the limit Λ → ∞ must be taken within the equivalence class of parametrizations to which (μ, gR(μ)) belongs [40]. A divergent non-regularized perturbation expansion consists in taking Λ = ∞ while keeping g0 finite. From this viewpoint, the origin of the divergences is that the pair (Λ = ∞, g0) does not belong to any equivalence class of a sensible theory. Perturbative renormalization consists in computing g0 as a formal power series in gR (at finite Λ), so that (Λ, g0) corresponds to a mathematically consistent theory; we then take the limit Λ → ∞.

(ii) Because of universality, it is physically impossible to know from low energy data if Λ is very large or truly infinite.

(iii) Although mathematically consistent, it seems unnatural to reverse the RG process while keeping only the renormalizable couplings, and thus to imagine that even at asymptotically high energies, Nature has used only the couplings that we are able to detect at low energies. It seems more natural that a fundamental theory does not suffer from renormalization problems. String theory is a possible candidate [27].

To conclude, we see that although the renormalization procedure has not evolved much these last thirty years, our interpretation of renormalization has drastically changed [10]: the renormalized theory was assumed to be fundamental, while it is now believed to be only an effective one; Λ was interpreted as an artificial parameter that was only useful in intermediate calculations, while we now believe that it corresponds to a fundamental scale where new physics occurs; non-renormalizable couplings were thought to be forbidden, while they are now interpreted as the remnants of interaction terms in a more fundamental theory. The renormalization group is now seen as an efficient tool to build effective low energy theories when large fluctuations occur between two very different scales that change qualitatively and quantitatively the physics."

complete paper http://arxiv.org/pdf/hep-th/0212049v3
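The "washing out" of non-renormalizable couplings described in the quote is easy to see in a toy linearized RG flow. The sketch below is my own illustration, not taken from the paper: it integrates dg/d(ln μ) = y·g near the gaussian fixed point in d = 4, with y = +2 for the dimensionless version of an irrelevant φ^6 coupling (engineering dimensions only; the scales and initial couplings are made-up inputs).

```python
import math

# Toy linearized RG flow near the gaussian fixed point in d = 4 dimensions.
# For the dimensionless version of an irrelevant phi^6 coupling, engineering
# dimensions give dg/d(ln mu) = +2 g, so g shrinks as we flow from the
# cutoff Lambda down to the working scale mu. (Illustrative only; these are
# not the paper's actual beta functions.)

def flow(g, y, t_uv, t_ir, steps=200000):
    """Euler-integrate dg/dt = y*g from t = ln(Lambda) down to t = ln(mu)."""
    dt = (t_ir - t_uv) / steps      # negative: flowing toward the infrared
    for _ in range(steps):
        g += y * g * dt
    return g

Lambda, mu = 1e19, 100.0            # GeV: "Planck" cutoff, working scale
t_uv, t_ir = math.log(Lambda), math.log(mu)

g6_uv = 1.0                         # irrelevant coupling, O(1) at the cutoff
g6_ir = flow(g6_uv, 2.0, t_uv, t_ir)
g4_ir = flow(0.5, 0.0, t_uv, t_ir)  # marginal coupling: untouched at this order

print(g6_ir)   # ~ (mu/Lambda)^2 = 1e-34: washed out at low energies
print(g4_ir)   # 0.5: survives the flow
```

Running the flow from the cutoff down to μ = 100 GeV suppresses the irrelevant coupling by roughly (μ/Λ)^2, some 34 orders of magnitude, while the marginal coupling survives unchanged at this order: this is the universality mechanism the quote describes.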

e.g., the spin-1 tetrad vector e^I theory of gravity is to Einstein's spin-2 metric tensor guv theory of gravity as the SU(2) gauge theory of the weak force is to Fermi's four-fermion contact model, which was not renormalizable.

On Nov 12, 2010, at 2:49 PM, Russell Anderson wrote:

Inertia is solely related to NET CHARGE, NOT MASS, or amount or density of material.

Interesting issue.

Leptons and quarks all carry internal charges (EM, weak, strong).

We need a non-gravity charge to move massive test particles off zero g-force timelike geodesics.

A purely neutral massive test particle cannot be pushed off a timelike geodesic. Therefore, its "inertia" = "rest mass" could not be measured in principle in the test particle approximation. Of course, even a test particle is a source of curvature, but by definition, it's too small to be measured with current technology.

Q. Can we have a zero rest mass charge?

A. Yes, quarks and leptons, for example, are zero rest mass in the pre-inflation false vacuum with zero Higgs field vacuum superconductivity.

Q. What about the classical charge static energy?

A. It must be cancelled by the negative gravity static energy - maybe.

Date: Fri, 12 Nov 2010 10:47:16 -0800

Subject: Re: Amateur's confusions on the meanings of "gravity" and "inertia" (Dr. Quantum)

From: iksnileiz@gmail.com

To: sarfatti@pacbell.net

So what you try to call my "amateur's confusion"

I did not say you were the "amateur" Z. ;-)

is in fact the contemporary understanding of the GTR, as adopted

by leading specialists in the field. It was Einstein who was confused about this, not me. Einstein initially held

that according to GTR, inertia would not operate in the absence of gravitational sources. Einstein's own "amateur

confusion" on this point was cleared up by de Sitter in 1916-17.

Did he? He may have been thinking of what Wheeler later called "geons," i.e., test particles as singularities in the metric guv field. Today we call them point topological defects with quantized surrounding areas, similar to vortices with quantized circulations and vorticity fluxes (Stokes' theorem). There was also the Einstein-Infeld work deriving the geodesic test-particle equation from the field equations with a point singularity in guv.

If anyone is interested, a short article by Michel Janssen on the Einstein-de Sitter debate is available here:

http://www.lorentz.leidenuniv.nl/~vanbaal/ART/E-dS.pdf

On Fri, Nov 12, 2010 at 10:24 AM, Paul Zielinski <iksnileiz@gmail.com> wrote:

But spacetime curvature *does* determine the inertial trajectories of free test objects,

Yes, that's what I said.

which are the reference trajectories (spacetime geodesics) for the dynamics.

Yes, that's what I said.

So while what you say is correct, it

doesn't alter the fact that in the GTR, even in the complete absence of a gravitational field, the resistance

of a test object to forced deviations from the inertial trajectories determined by the spacetime geometry

*is* proportional to the inertial mass (subject to the appropriate SR corrections); inertia still operates

*even in the complete absence of matter-induced gravitation*.

Of course it doesn't alter that fact, indeed THAT was my main point! Inertia has nothing to do with gravity.

When you step on a scale, your weight is really the electrical force preventing you from free-falling off the ladder (so to speak).

Even though you are electrically neutral there are the Van der Waals forces etc from charge separations etc.

Definitions of Van der Waals forces on the Web:

In physical chemistry, the van der Waals force (or van der Waals interaction), named after Dutch scientist Johannes Diderik van der Waals, is the attractive or repulsive force between molecules (or between parts of the same molecule) other than those due to covalent bonds or to the electrostatic ...

en.wikipedia.org/wiki/Van_der_Waals_forces

This interaction is due to induced dipole-dipole interaction. Even if the particles don't have a permanent dipole, fluctuations of the electron density give rise to a temporary dipole, meaning that van der Waals forces are always present, although possibly at a much lower magnitude than others.

ocw.kfupm.edu.sa/user/CE37001/CHEMICAL%20CONCEPTS.ppt

Of course one could expand the meaning of "gravitational field" such that even the Minkowski metric

qualifies, but isn't that like redefining "vegetable" to include ketchup?

Rovelli says exactly that. "Zero" is a number and a "Zero gravity curvature field" is still a solution of Ruv = 0 and it has non-zero quantum gravity fluctuations. Also it is unstable and can quantum tunnel into a geon with curvature and still Tuv = 0.

Search results on Wheeler's geons:

J. A. Wheeler, "Geons," Phys. Rev. 97, 511 (1955), link.aps.org/doi/10.1103/PhysRev.97.511

E. A. Power and J. A. Wheeler, "Thermal Geons," Rev. Mod. Phys. 29, 480 (1957), link.aps.org/doi/10.1103/RevModPhys.29.480

J. A. Wheeler, Geons, Black Holes, and Quantum Foam: A Life in Physics (science autobiography), www.amazon.com

"Geon (physics)," Wikipedia: "Wheeler speculated that there might be a relationship between microscopic geons and elementary particles. This idea continues to attract some ..." en.wikipedia.org/wiki/Geon_(physics)


Nov 12

On Nov 11, 2010, at 11:06 AM, Paul Zielinski wrote in quotes:

As I said, gravity as curvature is independent of inertia of test particles.

All slower-than-light test particles follow the same timelike geodesic paths - their inertia = rest mass cancels out of the problem.

The curvature is geodesic deviation - key word is geodesic.

Inertia only is important in off-geodesic motions from non-gravity forces.

The origin of inertia is given in the standard model.

How, precisely in terms of the formalism?

There is no theory of "m" in GR. It is a primitive and it is irrelevant except for the source Tuv tensor for a gas of particles.

There are two problems in classical GR.

Generation of curvature via

Guv + (8πG/c^4)Tuv = 0

Geodesic motion of neutral test particles (including null geodesics, the geometrical-optics limit of light rays):

D^2x^u/ds^2 = 0

Note there is no "m" in the geodesic equation, nor in the curvature (geodesic deviation) equation.

D denotes the covariant derivative with respect to the torsionless Levi-Civita connection in the 1916 version of GR.
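The absence of "m" can be made vivid numerically. The sketch below is my own toy illustration, not anything from this thread: for purely radial timelike geodesics in Schwarzschild coordinates the exact equation of motion is d^2r/dτ^2 = -GM/r^2, and the test particle's own mass never appears, so every neutral test particle falls identically.

```python
# Radial free fall in Schwarzschild geometry (geometric units G = c = 1).
# For purely radial timelike geodesics, d^2 r/d tau^2 = -M/r^2 holds exactly
# in Schwarzschild coordinates. Note what is absent: the test particle's own
# mass m never enters the equation. (Toy numerical sketch.)

def radial_infall(r0, tau_max, M=1.0, dtau=1e-4):
    """Integrate the radial geodesic equation from rest at r = r0."""
    r, v, tau = r0, 0.0, 0.0
    while tau < tau_max and r > 2.0 * M:     # stop at the horizon r = 2M
        v += (-M / r**2) * dtau              # no test-particle mass anywhere
        r += v * dtau
        tau += dtau
    return r

# Any test particle, light or heavy, ends at the same radius:
print(radial_infall(10.0, 5.0))
```

Since no test-particle mass parameter exists to vary, the "same trajectory for all masses" statement is built into the code's structure, which is exactly the point about geodesic motion made above.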

of the GTR as a causal explanatory hypothesis, once all gravitating matter is removed from

spacetime, according to the GTR objects in unforced motion still follow inertial trajectories,

and one must still apply non-gravitational forces to move them off those trajectories. In which

case it is hard to see how such a situation can be explained by long range physical interactions

with distant matter which is not even present."

You mean if Tuv = 0 globally so that

Ruv = 0

one can still have geon solutions - but they are not very physical.

Yes, globally flat Minkowski spacetime is an unstable solution of GR, and the same rules of inertia for off-geodesic motion apply.

Inertia is the response of the test particle to applied non-gravity forces - even when Tuv = 0 - only test particles.

As you say -- if it ain't broke, why try to fix it with redundant hypotheses of the Machian type?

My retrocausal hologram conjecture is a new form of post-selected Mach's Principle in Aharonov's sense.

A retarded photon leaving us now will infinitely redshift in a finite time at our future horizon along our idealized world line because dark energy virtual bosons accelerate the rate of expansion of space. In contrast an advanced photon will infinitely blue shift from the horizon back to us. In Cramer's transaction, these two infinite shifts exactly cancel - the confirmation wave comes back at exactly the frequency the offer wave was emitted with.

In this sense our future horizon is the Wheeler-Feynman perfect absorber allowing the net retardation and the correct Arrow of Time.

Note that virtual quanta at the horizon are converted to real quanta at the Planck temperature.

We are at r = 0, the virtual quanta act like static LNIF detectors - this is the reason for Kip Thorne's electrical membrane model of the horizon.

Our future de Sitter metric has g00 = 1 - Λr^2 → 0 at the future horizon r = Λ^-1/2.

The effective Unruh temperature of the converted virtual ---> blackbody quanta is

T = (hcΛ^1/2/kB)(1 - Λr^2)^-1/2

remember we are at r = 0 in this static LNIF representation.
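Plugging in numbers makes the scales concrete. The snippet below is my own numerical sketch of the formula as written above (it keeps the blog's normalization, which omits the usual 2π and √3 factors of the conventional Gibbons-Hawking temperature), using the observed dark-energy value Λ ≈ 1.1×10^-52 m^-2 as an assumed input and reading "h" as ħ.

```python
import math

# Evaluate T = (hbar c Lambda^(1/2)/kB) * (1 - Lambda r^2)^(-1/2) for a
# static LNIF detector, with the observed cosmological constant plugged in.
# (Sketch of the formula as written above; the conventional Gibbons-Hawking
# temperature carries extra factors of 2*pi*sqrt(3) that are omitted here.)

hbar = 1.054571817e-34    # J s
c    = 2.99792458e8       # m / s
kB   = 1.380649e-23       # J / K
Lam  = 1.1e-52            # 1 / m^2, assumed observed dark-energy value

def T_static_lnif(r):
    """Effective temperature at static radius r; diverges at the horizon."""
    return (hbar * c * math.sqrt(Lam) / kB) / math.sqrt(1.0 - Lam * r * r)

print(T_static_lnif(0.0))       # ~2e-29 K at r = 0: utterly cold today
print(1.0 / math.sqrt(Lam))     # horizon radius Lambda^(-1/2), ~1e26 m
```

The temperature only becomes large, eventually Planckian, as r approaches the horizon Λ^-1/2, which is where the virtual-to-real conversion described above is claimed to happen.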

Of course that is not to say that we can't look for deeper explanations that could account for

the success of the 1916 theory, and its Riemannian spacetime model, at the phenomenological

level.

On Thu, Nov 11, 2010 at 9:23 AM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

exactly

gravity is a field that is the very fabric of space-time

if it ain't broke don't try to fix it

it ain't broke

indeed Einstein's theory is the Jewel in The Crown of God.

In any case Roy needs to understand the hard reality that physicist's minds are closed on this and rightly so

We have a beautiful theory that works.

On Nov 11, 2010, at 8:28 AM, Jonathan Post wrote:

> Einstein was influenced by Mach, who held that gravity was NOT a

> property of individual bodies.

>

> Mach's Principle states that: "mass there influences inertia here."

>

> Isaac Newton had, of course, suggested that space was an absolute

> backdrop. Newton’s vision of space contained an engraved set of

> coordinates and he mapped all motions as movements with respect to

> those coordinates. Ernst Mach disagreed with this foundationally,

> believing rather that motion was only meaningful if measured with

> respect to another object, not to Newton’s coordinates. Mach believed

> that motion ultimately depended on the distribution of matter, or its

> mass, not on the properties of space itself.

>

> Mach did not put this, so far as I know, in terms of flow. Nor is

> this, in my humble opinion, best addressed through poetic metaphor,

> even though I am an award-winning Science Poet whose poems have

> appeared in venues such as "Science", Caltech's alumni magazine

> "Engineering & Science", and as the invited frontispiece of a NASA

> proceedings of a conference on New Physics and Spacecraft Propulsion.

>

> I am often outrageous in papers outside the consensus paradigm, even in

> widely read venues such as Scientific American and the arXiv. But it

> helps to understand the protocols of academic publication.

>

> To have refereed publication in mainstream Physics journals or

> proceedings, one must prove to the referees and editor that one is

> facile with the existing literature, and especially the Mathematical

> Physics as such, equation by equation, as Jack Sarfatti keeps

> patiently summarizing and citing.

>

> -- Prof. Jonathan Vos Post

>

> On Thu, Nov 11, 2010 at 8:07 AM, <Fhuroy@aol.com> wrote:

>>

>> Jack;

>>

>> Gravity as a flowing field is not well understood; if it were, you could generate a kind of "hydroelectric" power. It is impossible to get antigravity or electricity from any point in space as long as "people in the field" hold fast to gravity as a property of individual bodies. It is not true that gravity is well understood, and scientists should not be satisfied with what they think is well understood. My 70-page thesis on the unified field with a new beginning has all the answers you're looking for -- just read it.

>>

>> Inside the box thinkers in the field who have no interest in alternative ideas will not advance one iota thinking gravity is a property of individual bodies. I would challenge any physicist to an open debate on this subject.

>>

>> Roy

>>

>> In a message dated 11/10/2010 7:12:01 P.M. Pacific Standard Time, sarfatti@pacbell.net writes:

>>

>> Hi Roy

>> What you have is a poetic metaphor, but that is not the way real theoretical physics is done.

>> It's poetry but not physics the way professional university academics do it.

>> One needs equations with interpretations and predictions that can be tested.

>> Einstein's theory of gravity is precise, it is beautiful to the people who use it and it passes every test with amazing accuracy and precision.

>> People in the field have no interest in alternative ideas about what is already well understood.

>> :-)

>> On Nov 10, 2010, at 7:42 PM, fhuroy@aol.com wrote:

>>

>> Jack;

>>

>> Thank you for opening your web site to me, we are very much in agreement. What if space is not actually space but rather it is gravity flowing and curving around particles spinning from it like eddies in a stream? In that case, universal gravity transforms into the pressures of local gravitation that is inseparable from space-time curvature. In other words, gravitation curvature as a force could even be the driving power of photons and other particles.

>>

>> I reject the idea of gravity as being force free. Gravity flows from a singularity infinitely long, smooth and uniform creating a three-dimensional shadow realm like the winding of a ball of string -- a three-dimensional universe before matter and time began. Where there is no mass there can be no time. The vast distances between celestial objects is really the dimensions of gravity that we call space.

>>

>> Roy

>>

>>

>>

>> In a message dated 11/10/2010 3:59:08 P.M. Pacific Standard Time, sarfatti@pacbell.net writes:

>>

>> Sarfatti Lectures on Physics 2012

>>

>> Einstein’s theory of gravity (General Relativity)[1]

>>

>> Einstein eliminated Newton’s gravity force and replaced it by force-free motion in curved space-time. This is the meaning of the Einstein equivalence principle and it explains why physicists are confused and stumped when they naively try to unify gravity with electromagnetism and the weak-strong forces.

>>

>> The theory of symmetry called group theory in mathematics shows that gravity is universal. It is the compensating local gauge field of the symmetries of spacetime. In contrast, electromagnetism and the weak-strong forces are the compensating local gauge fields of the symmetries of the extra dimensions of string-brane theory.

>>

>> The Meaning of Einstein’s 1905 Special Theory of Relativity (SR)

>>

>> The basic idea of relativity is to compute local invariant observable numbers that are the same for different measurements of the same actual events.

>>

>> SR only works for the above measurements of the same actual events by detectors in unaccelerated motions in a globally flat 4D spacetime. Each detector will not feel g-forces. Of course, the test particles measured may be accelerating.

>>

>> The Meaning of Einstein’s 1916 General Theory of Relativity (GR)

>>

>> GR works now for all detectors that are now allowed to accelerate feeling g-forces. However, the local invariant observable numbers are limited to locally coincident detectors. “Coincident” here means that the spacetime separations of a pair of detectors measuring the same actual events must be small compared to the scale of radii of curvature.

>>

>> The tetrad fields describe the relationships between an unaccelerated “local inertial frame” (LIF) and a coincident accelerating “local non-inertial frame” (LNIF).

>>

>> Einstein’s “general coordinate transformations” called “diffeomorphisms”[2] by the mathematicians are the relationships between two coincident accelerating LNIFs.

>>


>> ________________________________

>> [1] “Relativity DeMystified” David McMahon (McGraw Hill) - basic reference for background

>> [2] The mathematician's idea here has excess formal baggage not needed by experimental physicists.
