
Nov
21

Klee wrote on Nov 21, 2011

*Hi, Jack. We met briefly via email a couple months ago. I'm a friend of Tony Smith and Simon, as you might recall. I also know Laura and Ark, so it's a small world. Your comment, "If I am right about entanglement signals it means direct contact with future alien intelligence. Indeed a portal is opening." is interesting. Certainly the future exists now, concurrent with the present and past. And certainly there is much evidence of non-local information exchange in studies on psychic phenomena (e.g., the SRI remote viewing study and various entanglement-type experiments). So communication with the future is plausible. I met Simon in Palo Alto at Singularity University, where Ray Kurzweil and a bunch of other geeks like me, Larry Page, the director of the Ames Research Center, and many others believe that the human race is approaching an evolutionary crescendo in terms of our collective knowledge, specifically our technology. But I sense that there's something else at play, more similar to what mystics talk about in terms of galactic alignments or a harmonic convergence of subtle energies or something along those lines.*

I don't think galactic alignment or harmonic convergence have anything to do with it. That's New Age not-even-wrong pseudo-physics in my opinion.

Childhood's End - Arthur C. Clarke.

Right. See Daryl Bem's "Feeling the Future"; unfortunately, we seldom act on our precognitions.

Agreed.

A long story. See my 2002 autobiography Destiny Matrix (on Amazon et al.).

From: JACK SARFATTI [mailto:adastra1@me.com]

Sent: Sunday, November 06, 2011 6:12 PM

To: caryn anscomb

Subject: Re: 11.11.11...New film on this topic, due for release on.....I'm sure you can guess.

Curious I am giving my paper to SLAC American Physical Society that day. If I am right about entanglement signals it means direct contact with future alien intelligence. Indeed a portal is opening.

On Nov 06, 2011, at 02:16 PM, caryn anscomb <nyracum@yahoo.com> wrote:

Session C1: AMO & HEP Theory

4:00 PM - 5:24 PM, Friday, November 11, 2011

Bldg 48 - ROB Room: Redwood A/B

Chair: Virginia Trimble, University of California, Irvine

Abstract: C1.00002 : Is entanglement signaling really impossible?

4:12 PM - 4:24 PM

Author:

Jack Sarfatti

(ISEP)

Quantum information theory is based on the premise that entanglement cannot be used as a stand-alone communication channel without a classical signal key decoder. The proof depends on linearity of observables, orthogonal base states, and unitary time evolution between measurements under the Schrödinger equation in configuration space. Spontaneous symmetry breakdown giving a Higgs-Goldstone condensate macro-quantum coherent Glauber ground state has a nonlinear, non-unitary Landau-Ginzburg equation in ordinary physical space. The Glauber coherent states are non-orthogonal. The conditions for no-entanglement signaling are not satisfied in this case, and it may mean the need for a generalized quantum theory that is to orthodox quantum theory as general relativity is to special relativity.
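As a numerical aside on the non-orthogonality claim in the abstract: the overlap of two Glauber coherent states obeys |⟨α|β⟩|² = exp(−|α−β|²), which never vanishes for finite α, β. A minimal sketch in Python (the amplitudes α = 1 and β = 2 and the Fock-space truncation are illustrative choices, not from the talk):

```python
import numpy as np
from math import lgamma

def coherent_state(alpha, dim=60):
    """Fock-basis amplitudes of |alpha>, truncated to `dim` levels:
    c_n = exp(-|alpha|^2 / 2) * alpha^n / sqrt(n!)."""
    n = np.arange(dim)
    log_fact = np.array([lgamma(k + 1) for k in n])  # log(n!) avoids overflow
    return np.exp(-abs(alpha)**2 / 2) * alpha**n * np.exp(-0.5 * log_fact)

a = coherent_state(1.0)
b = coherent_state(2.0)
overlap = abs(np.vdot(a, b))**2
print(overlap, np.exp(-abs(1.0 - 2.0)**2))  # both ~0.368: distinct states, never orthogonal
```

The nonzero overlap is what distinguishes coherent states from the orthogonal base states assumed in the standard no-signaling proofs.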

Videos of this talk are on my Facebook page (public access).

Nov
20

I highlighted some key statements in Glashow's paper.

A few remarks:

1) The paper is short; detailed calculations are not shown, but we can trust that Glashow knows how to calculate correctly, as he got a Nobel Prize for this sort of calculation.

2) It's obvious that mainstream quantum field theory of the electroweak force will not allow a superluminal neutrino as claimed by OPERA, in the light of several other observations.

Radical conservatism dictates that we cannot overthrow a huge body of theory battle-tested by many experiments and observations. It's obvious there is an error in the OPERA experiment, but exactly what that error is we don't yet know; probably the GPS timing.

Glashow's arguments below seem conclusive to me at this time.

Nov
18

Saul-Paul Sirag wrote on 11-18-11

Hi Jack,

In case you missed it, Cohen and Glashow deny the validity of the FTL OPERA neutrinos.

They base their analysis on the absence of a kind of Cerenkov effect.

I'll attach a description of Cohen & Glashow's paper here:

*"What Cohen and Glashow did last week was to generalize this idea to point out a new physical phenomenon (new at least to me) and use it to argue that OPERA’s result is self-inconsistent. They argue that the very effect of faster-than-light travel that OPERA claims to observe would have caused distortions in its neutrino beam that clearly were not observed. Moreover, Cohen and Glashow also pointed out that at least two other experiments studying higher energy neutrinos put even stronger constraints on the possibility of anything similar to what OPERA observed. ...*

*You might wonder whether the supernova neutrinos are of one type, and might have one speed, while those at OPERA might be a different neutrino-type and travel at a different speed. This possibility, as Coleman and Glashow pointed out in their papers from the 1990s, is excluded by the various experiments on neutrino oscillations, whose internal self-consistency would be badly broken if different types of neutrinos traveled at speeds as different as would be required to reconcile the supernova neutrinos with OPERA’s neutrinos. In fact, the constraints on the differences between the speeds of different neutrino types is at the level of ten parts per billion trillion! We can forget about that possibility being important. From these two paragraphs we conclude that if OPERA is right, the speed of all three types of neutrinos must increase with energy, differing from light speed by a few parts per billion or less for neutrinos with 0.01-0.04 GeV, and differing at a few parts per hundred thousand for OPERA’s neutrinos, with 10-40 GeV of energy. ...*

*The basic idea is that a neutrino that travels fast enough can potentially lose energy by emitting a particle and a corresponding antiparticle through effects of the weak nuclear force. Specifically, if a neutrino could travel faster than electrons could travel, then with sufficient energy the neutrino could emit an electron and a positron. Well, are the OPERA neutrinos traveling faster than electrons can travel? Yes, because of what we know about electrons from Cerenkov radiation. We know that electrons at energies below a TeV cannot travel faster than the speed of light in vacuum by more than one part in a thousand trillion, because if they could, they would Cerenkov radiate even in vacuum. In this case we would never observe the high-energy electrons that we do in fact see in experiments. And therefore, if neutrinos travel faster than light by a few parts per hundred thousand, then they likewise travel faster than the maximum speed of electrons by about the same amount. It follows that for sufficiently high energy, neutrinos can spit off an electron-positron pair."*
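The energy scale behind this argument can be checked on the back of an envelope. The Cohen-Glashow threshold for a superluminal neutrino to emit an electron-positron pair is roughly E_th = 2 m_e / sqrt(delta), with delta = v² − 1 in units where c = 1. A sketch assuming OPERA's reported fractional speed excess of about 2.5×10⁻⁵ (so delta ≈ 5×10⁻⁵):

```python
from math import sqrt

# Threshold for nu -> nu + e+ + e- (Cohen-Glashow): E_th = 2*m_e / sqrt(delta),
# where delta = v_nu^2 - 1 in units with c = 1.
m_e = 0.511e-3               # electron mass in GeV
delta = 5e-5                 # ~2*(v - 1) for OPERA's reported (v - 1) ~ 2.5e-5
E_th = 2 * m_e / sqrt(delta)
print(f"pair-emission threshold ~ {E_th:.3f} GeV")  # ~0.145 GeV, far below the 10-40 GeV beam
```

Since the OPERA beam energies of 10-40 GeV sit far above this threshold, the neutrinos should have rapidly radiated e⁺e⁻ pairs in flight and arrived depleted and distorted, which is the self-inconsistency Cohen and Glashow point to.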

*Of Particular Significance: Conversations About Science with Theoretical Physicist Matt Strassler*

There are theories in which the speed of light changes with energy, from vacuum dispersion caused by the quantization of space-time and the non-commutativity of spacetime (A. Connes), as I recall. These are extensions of c-number relativity, not fundamental violations of it.

The quantum state cannot be interpreted statistically

Matthew F. Pusey, Jonathan Barrett, Terry Rudolph

(Submitted on 14 Nov 2011)

"Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state represents. There are at least two opposing schools of thought, each almost as old as quantum theory itself. One is that a pure state is a physical property of system, much like position and momentum in classical mechanics. Another is that even a pure state has only a statistical significance, akin to a probability distribution in statistical mechanics. Here we show that, given only very mild assumptions, the statistical interpretation of the quantum state is inconsistent with the predictions of quantum theory. This result holds even in the presence of small amounts of experimental noise, and is therefore amenable to experimental test using present or near-future technology. If the predictions of quantum theory are confirmed, such a test would show that distinct quantum states must correspond to physically distinct states of reality."

"But the new paper, by a trio of physicists led by Matthew Pusey at Imperial College London, presents a theorem showing that if a quantum wavefunction were purely a statistical tool, then even quantum states that are unconnected across space and time would be able to communicate with each other. As that seems very unlikely to be true, the researchers conclude that the wavefunction must be physically real after all."

Something is screwy about the above Nature quote. It seems to say that orthodox quantum theory requires entanglement signals. This contradicts all sorts of no-signal theorems based on linearity of observables, unitarity etc., e.g. the papers of Adrian Kent et al. In Bohm's theory, the quantum potential is real, but its fragility, i.e. no direct back-reaction of the hidden variable on it, gives no entanglement signaling. This is no longer the case using distinguishable non-orthogonal macro-quantum coherent Glauber states, seen in lasers and also in the Higgs-Goldstone spontaneous broken symmetry of virtual quanta in vacua and of real quanta in condensed matter systems with ODLRO in the Feynman propagators.
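For reference, the orthodox no-signaling behavior invoked here is easy to exhibit directly: for a Bell pair, Bob's reduced density matrix, averaged over Alice's outcomes, is independent of which basis Alice measures in. A minimal numerical sketch (the measurement angles are arbitrary illustrative values):

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) as a vector in the basis |00>, |01>, |10>, |11>
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

def bob_state_after_alice_measures(theta):
    """Bob's unconditional reduced state when Alice measures in the basis
    {cos(t)|0> + sin(t)|1>, -sin(t)|0> + cos(t)|1>} and the outcome is discarded."""
    c, s = np.cos(theta), np.sin(theta)
    rho_bob = np.zeros((2, 2))
    for v in (np.array([c, s]), np.array([-s, c])):
        P = np.kron(np.outer(v, v), np.eye(2))   # projector on Alice's side only
        post = P @ rho @ P                        # unnormalized post-measurement state
        # partial trace over Alice's qubit (axes 0 and 2 of the reshaped tensor)
        rho_bob += post.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    return rho_bob

for theta in (0.0, 0.7, 1.3):
    print(np.round(bob_state_after_alice_measures(theta), 6))
```

Every choice of theta returns the same maximally mixed state I/2, which is why Alice's measurement setting alone carries no information to Bob in standard quantum theory.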

Nov
18

http://www.symmetrymagazine.org/breaking/2011/11/17/faster-than-light-neutrino-measurement-withstands-new-test/

November 17, 2011 | 7:29 pm

"Scientists modified the beam of neutrinos traveling through the Earth from CERN to INFN. Image: Jean-Luc Caron

The OPERA experiment’s surprising superluminal neutrino result is holding fast after a new measurement designed to eliminate a possible source of systematic error from their previous tests.

OPERA scientists reported the new results in a press release and a paper released on the arXiv today.

“The positive outcome of the test makes us more confident in the result,” said Fernando Ferroni, president of the Italian Institute for Nuclear Physics. But “a final word can only be said by analogous measurements performed elsewhere in the world.”

On Nov 17, 2011, at 5:33 PM, JACK SARFATTI wrote:

http://www.nature.com/news/quantum-theorem-shakes-foundations-1.9392

“I don't like to sound hyperbolic, but I think the word 'seismic' is likely to apply to this paper,” says Antony Valentini, a theoretical physicist specializing in quantum foundations at Clemson University in South Carolina.

On Nov 17, 2011, at 11:48 AM, JACK SARFATTI wrote:

On Nov 17, 2011, at 11:18 AM, art wagner wrote:

http://www.science20.com/alpha_meme/future_influence_quantum_physics_precognition_or_pseudoscience-84265

"From time to time and against strong resistance from the scientific establishment, inspired scientists come out of the closet and dare to publicly consider whether the future can influence the present. Is it in principle possible that we may be able to partially perceive the future say via evolved emotional responses? Future influence has been proposed by Roger Penrose in order to explain how certain crystals grow. So called quasi-periodic crystals avoid additions of atoms into places and orientations that are perfectly allowed while growing, but whose occupation would lead to future mismatches in the resulting, larger crystal (this is due to quasi-periodic crystals not having periodic order over large distances like usual crystals). The idea involves backward causation, also called retrocausality: The consistency of the future state guides the present growth via quantum interference."

Nov
12

http://www.youtube.com/watch?v=gy7Jn9-SBNg

Part 1 of 2

Refuting Adrian Kent et al. on no entanglement signaling

Part 2 of 2

http://www.youtube.com/user/JSarfatti#p/a/u/0/5tRCo1g4uMU

Nov
10

On Wed, Nov 9, 2011 at 5:18 PM, JACK SARFATTI <sarfatti@pacbell.net> wrote:

Thanks, Art, this is good. However, it's our retro-causal future horizon at its intersection with our future light cone that is the most important. This is the modern form of Mach's principle combined with Wheeler-Feynman.

Jim Woodward has a paper making this connection as I recall. It's also explicit in my Journal of Cosmology paper Vol 14, April 2011. The question is WHEN is the boundary?

Quantum entanglement from the holographic principle

Jae-Weon Lee

(Submitted on 16 Sep 2011)

It is suggested that quantum entanglement emerges from the holographic principle stating that all of the information of a region (bulk bits) can be described by the bits on its boundary surface. There are redundancy and information loss in the bulk bits that lead to the nonlocal correlation among the bulk bits. Quantum field theory overestimates the independent degrees of freedom in the bulk. The maximum entanglement in the universe increases as the size of the cosmic horizon and this could be related with the arrow of time and dark energy.

On Nov 9, 2011, at 4:10 PM, art wagner wrote:

http://arxiv.org/abs/1109.3542

Max Tegmark (MIT)

(Submitted on 15 Aug 2011 (v1), last revised 22 Sep 2011 (this version, v2))

We analyze cosmology assuming unitary quantum mechanics, using a tripartite partition into system, observer and environment degrees of freedom. This generalizes the second law of thermodynamics to "The system's entropy can't decrease unless it interacts with the observer, and it can't increase unless it interacts with the environment." We show that because of the long-range entanglement created by cosmological inflation, the cosmic entropy decreases exponentially rather than linearly with the number of bits of information observed, so that a given observer can reduce entropy by much more than the amount of information her brain can store. Indeed, we argue that as long as inflation has occurred in a non-negligible fraction of the volume, almost all sentient observers will find themselves in a post-inflationary low-entropy Hubble volume, and we humans have no reason to be surprised that we do as well, which solves the so-called inflationary entropy problem. An arguably worse problem for unitary cosmology involves gamma-ray-burst constraints on the "Big Snap", a fourth cosmic doomsday scenario alongside the "Big Crunch", "Big Chill" and "Big Rip", where an increasingly granular nature of expanding space modifies our life-supporting laws of physics.

Our tripartite framework also clarifies when it is valid to make the popular quantum gravity approximation that the Einstein tensor equals the quantum expectation value of the stress-energy tensor, and how problems with recent attempts to explain dark energy as gravitational backreaction from super-horizon scale fluctuations can be understood as a failure of this approximation.

Sarfatti comment: What may be wrong with Max Tegmark's whole theory, and also those of 't Hooft and Susskind, is the assumption of unitarity on all scales. Unitarity conflicts with P. W. Anderson's "More is different." The retro-causal hologram theory explains why the initial entropy/future horizon area of the universe is relatively low without unitarity.

Nov
09

Paul, you are quibbling over small points as far as a pop program like NOVA is concerned. It's not a graduate seminar in the history of physics, where your distinctions would perhaps be appropriate.

On the substantial point. Modern picture (e.g. Rovelli's Quantum Gravity):

You can picture curved spacetime as a geometrodynamical set of four Cartan tetrad 1-forms e^I that live on a fictitious Minkowski spacetime in which e^I is a 4-vector.

Mach's Principle is then a kind of Green's theorem in which the interior BULK e^I fields in 3D+1 are determined from anyonic fields that live on the past and future fractal horizons that are pixelated hologram screen non-bounding surrounding surfaces.

So you get Mach's Principle as the Hologram Principle.

On Nov 8, 2011, at 2:48 PM, Paul Zielinski wrote:

I like your review of Greene's book, which I found at

http://forum.nasaspaceflight.com/index.php?action=dlattach;topic=13020.0;attach=257690

but there are a couple of points that don't seem to add up here.

From the first Nova program:

"Space. It separates you from me, one galaxy from the next, and atoms from one another. It is everywhere in the universe. But to most of us, space is nothing, an empty void. Well, it turns out space is not what it seems. From the passenger seat of a New York cab driving near the speed of light, to a pool hall where billiard tables do fantastical things, Brian Greene reveals space as a dynamic fabric that can stretch, twist, warp, and ripple under the influence of gravity. Stranger still is a newly discovered ingredient of space that actually makes up 70 percent of the universe. Physicists call it dark energy, because while they know it's out there, driving space to expand ever more quickly, they have no idea what it is."

So, so-called "empty space" is not what it seems, it is not physically empty; it is an objective physical system, exactly as Einstein said in 1920.

I can't see how Greene could argue that this fits in with the "relationalist" paradigm that you allude to below. So I can only conclude that far from *adopting* the original relationalist version of Mach's principle, he is merely *considering* Mach's original relationalist version of what Einstein later called "Mach's principle", and is rejecting it as inconsistent with GR. Is that what you meant?

Of course I understand that you are proposing a *physical* version of "Mach's principle" which doesn't imply a relationalist view of spacetime, and which you argue is fully consistent with GR.

Interestingly, in his book Greene attributes an abstract conception of spacetime to Einstein, whereas in the 1920 Leyden address he says the exact opposite. Will the real Einstein please stand up?

Also I note that in the first Nova program the discovery of the objective physical unity of space and time (in "spacetime") is erroneously attributed to Einstein, instead of Minkowski. This is a very common misrepresentation. In fact there was no invariant spacetime metric in Einstein's 1905 theory of relativity, and Einstein even rejected the concept as physically redundant when it was first proposed by Minkowski.

On 11/8/2011 12:30 AM, jfwoodward@juno.com wrote:

Gentlefolk,

I've decided to collect a little data on a new device of slightly different design before sending anything out with more results. In a few days.

Wes Kelly has noted that Brian Greene is doing a new series on Nova. It is titled "Fabric of the Cosmos", the title of a book by him published back in 2004, a choice I find a bit odd as his latest book, "The Hidden Reality", was published this past year. The Fabric of the Cosmos is also a bit odd, because one of its central themes is Mach's principle, something one finds almost nowhere in the popular scientific literature. [So odd, I wrote a review of the book back in 2004: Found. Phys. vol. 34, pp. 1267-1273 (2004). The only other popular book I can think of that deals extensively with Mach's principle is John Gribbin's Schrödinger's Kittens and the Search for Reality, published in the mid-'90s.] If you find the Mach's principle stuff unfamiliar, Greene's book does an excellent job on it, though he opts finally for the "relationalist" version of the principle (which is physically uninteresting).

The first episode aired last week. It was promising. So, if you've got nothing better to do when Nova airs in your neck of the woods. . . .

Best,

Jim

Nov
07

This one is important and relatively clear. Think of its implications if I turn out to be right that post-quantum entanglement signals exist when distinguishable non-orthogonal Glauber coherent sender states, entangled with ordinary receiver qubits, are used.

No Signalling and Quantum Key Distribution

Jonathan Barrett,1,2 Lucien Hardy,3 and Adrian Kent4

1 Physique Théorique, Université Libre de Bruxelles, CP 225, Boulevard du Triomphe, 1050 Bruxelles, Belgium

2 Centre for Quantum Information and Communication, CP 165/59, Université Libre de Bruxelles, Avenue F. D. Roosevelt 50, 1050 Bruxelles, Belgium

3 Perimeter Institute, 35 King Street North, Waterloo ON, N2J 2W9, Canada

4 Centre for Quantum Computation, DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, U.K.

(Dated: March 2005 (revised))

Standard quantum key distribution protocols are provably secure against eavesdropping attacks, if quantum theory is correct. It is theoretically interesting to know if we need to assume the validity of quantum theory to prove the security of quantum key distribution, or whether its security can be based on other physical principles. The question would also be of practical interest if quantum mechanics were ever to fail in some regime, because a scientifically and technologically advanced eavesdropper could perhaps use post-quantum physics to extract information from quantum communications without necessarily causing the quantum state disturbances on which existing security proofs rely. Here we describe a key distribution scheme provably secure against general attacks by a post-quantum eavesdropper who is limited only by the impossibility of superluminal signalling. The security of the scheme stems from violation of a Bell inequality.
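As background on the last sentence: the CHSH form of the Bell inequality bounds any local-hidden-variable model by S ≤ 2, while quantum mechanics reaches 2√2 at the standard optimal settings; it is this gap that the Barrett-Hardy-Kent scheme leverages for security. A minimal sketch using the conventional textbook angles (not taken from the paper):

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli Z
X = np.array([[0.0, 1.0], [1.0, 0.0]])    # Pauli X
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # Bell state (|00> + |11>)/sqrt(2)

def E(a, b):
    """Correlator for spin measurements at angles a, b in the x-z plane."""
    A = np.cos(a) * Z + np.sin(a) * X
    B = np.cos(b) * Z + np.sin(b) * X
    return psi @ np.kron(A, B) @ psi

# Standard optimal CHSH settings
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(S)   # ~2.828 = 2*sqrt(2), above the local-hidden-variable bound of 2
```

Observing S > 2 certifies that no local (even post-quantum) model can reproduce the correlations, which is the resource the key distribution protocol relies on.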