Saturday 8 September 2012

Jason Rush sent me a link to a new insane anti-quantum article spread by the BBC:
Quantum test pricks uncertainty

Subtitle: Pioneering experiments have cast doubt on a founding idea of the branch of physics called quantum mechanics.
The subtitle is particularly "cool". In recent months, we "learned" from some would-be scientific journals and the mainstream media that the wave function is "surely" not interpreted statistically. Now we're told that the Heisenberg uncertainty principle has been wrong from the beginning, too. In fact, the subtitle finally makes the "paradigm shift" that all the anti-quantum zealots have been expecting at least since 1925 as simple and clear as possible: quantum mechanics itself has been put in doubt.



What happens when a few assholes acquire access to cool lasers?

Fine. What concepts are these hardcore cranks relying upon?




The crackpot paper that managed to slip through the anti-pseudoscience filters directly into the prestigious Physical Review Letters makes the answer very clear.

They "prove" that one may violate the uncertainty principle by violating a completely different and physically uninteresting non-principle that has a "weak measurement" instead of a "measurement". But despite the name, a "weak measurement" isn't a measurement at all. It's a bizarre construct expressed by a formula that generalizes the measurement in a certain way but the "mutation" is serious enough so that you know that it's not a measurement at all. If you explained the definition of this "weak measurement" concept to Werner Heisenberg, he would surely need just minutes to show that this "weak measurement" fails to have many properties that a proper measurement possesses, and he would surely add "WTF?", too.
Related: See also an article on non-demolition measurements
Weak measurements have been mentioned on this blog in 2004 and 2011.

When I tell you what a weak measurement is, you should be able to see immediately what the "trick" is. Indeed, the main reason why "weak measurements" became a popular term in the anti-quantum literature was the suggestion that "we can measure something without disturbing it after all". Except that we obviously can't, a revolutionary insight for which Werner Heisenberg rightfully received his 1932 physics Nobel prize (without sharing it with anyone).

The term "weak measurement" was coined in the 1988 article in Physical Review Letters
How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100
by Yakir Aharonov, David Z. Albert, and Lev Vaidman, who slightly extended some speculative 1932 work by John von Neumann. Everyone who knows this basic historical fact must clearly see that this "new kind of a measurement" isn't a measurement at all. If \(WM(j_z)\) for a spin-1/2 particle may deviate from the standard set \(j_z=\pm 1/2\) and take values such as \(WM(j_z)=100\), it's very clear that \(WM(j_z)\) isn't \(j_z\) in any sense. It can't even be an average (or expectation) value. If it were \(j_z\), it just couldn't be equal to \(100\).

This criticism – pointing out that there is really nothing new or interesting in the AAV 1988 paper, nothing that would challenge conventional quantum mechanics – was already made in a comparably well-known 1989 paper.

Let me mention that AAV 1988 didn't just introduce a controversial new phrase. They also designed a method to calculate the expectation value of an observable out of many measurements, none of which disturbs the measured object too much. The actual expectation value may be obtained from an average of such "weak values". However, the real problem is whether the individual terms in the average may be interpreted as properties of the physical system – as generalized values – at all.
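If you want to check this averaging claim yourself, here is a minimal numerical sketch; the spin-1/2 state and the post-selection onto the trivial orthonormal basis are my own illustrative choices, not anything taken from the AAV paper. The post-selection-probability-weighted average of the real parts of the weak values reproduces the ordinary expectation value of \(j_z\):

```python
# Sketch: the probability-weighted average of (real parts of) weak values
# reproduces the ordinary expectation value. Illustrative state and basis.
import numpy as np

jz = np.diag([0.5, -0.5])                     # j_z for spin-1/2 (hbar = 1)
psi = np.array([0.8, 0.6], dtype=complex)     # arbitrary normalized state

# Post-select onto the complete orthonormal basis {|up>, |down>}.
basis = [np.array([1, 0], complex), np.array([0, 1], complex)]

avg = 0.0
for phi in basis:
    amp = np.vdot(phi, psi)                   # <phi|psi>
    prob = abs(amp) ** 2                      # probability of this outcome
    if prob > 0:
        weak = np.vdot(phi, jz @ psi) / amp   # weak value <phi|jz|psi>/<phi|psi>
        avg += prob * weak.real

print(avg)                                    # ~0.14
print(np.vdot(psi, jz @ psi).real)            # <psi|jz|psi>, the same ~0.14
```

The identity is trivial linear algebra – the denominators cancel against the probabilities – which is exactly why the averaging works even though the individual "weak values" carry no invariant meaning.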

To answer this question, I may enthusiastically recommend an almost unknown but extremely sensible 2009 paper by Stephen Parrott (arXiv), who retired back in 2002 (UMass Boston: younger people may be increasingly insane about QM) and who explains that the "weak measurement" yields a value that doesn't really say anything about the system itself; it always depends on all the details of the measurement. It's like a survey whose numbers may be internally distorted by various hidden choices but which still claims to measure the same thing. That's why one can derive nonsensical conclusions such as values outside the interval of allowed eigenvalues.

Fine, what is a weak measurement? The term has been used in a very vague and sloppy way in the literature. It's meant to be some kind of measurement that tries "not to affect the system too much". But whenever people stick to the proper definitions, a weak measurement is any process that quantifies the following ratio, known as the "weak value":\[

WM(j_z) = \frac{ \bra{\phi_\text{before}}j_z\ket{\phi_\text{after}} }{ \braket{\phi_\text{before}}{\phi_\text{after}} }.

\] The numerator and the denominator differ by the insertion of the \(j_z\) only. Note that while the ket vectors are the "state vectors after", as they should be, the bra vectors are "state vectors before". That's of course too bad because a genuine measurement must be given by this formula:\[

E(j_z) = \frac{ \bra{\phi_\text{after}}j_z\ket{\phi_\text{after}} }{ \braket{\phi_\text{after}}{\phi_\text{after}} }.

\] All the bra and ket vectors are the "states after" because the very point of a measurement is to acquire some information about what will be happening with the system after the measurement. That's how it must be; we can only measure by affecting the object, so the measured value is linked to the "state after". In other words, one can never really measure properties of objects "before the measurement" in quantum mechanics (the "states before" are interfering and have all the other wonderful non-classical properties), and not even the "mixed inner products". Note that I didn't call the ratio simply \(j_z\) because it's just the expectation value \(E(j_z)\). You probably won't get this value during any measurement. You will get random values and \(E(j_z)\) is their average.
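To see numerically how the "weak value" escapes the set of eigenvalues while a genuine expectation value can't, here is a minimal sketch; the pre- and post-selected states are my own choice, tuned by hand to be nearly orthogonal so that precisely the value \(100\) from the AAV title pops out:

```python
# Sketch: the "weak value" of j_z for a spin-1/2 can equal 100, far outside
# the eigenvalue set {+1/2, -1/2}, once the pre- and post-selected states
# are nearly orthogonal. States tuned for illustration (hbar = 1).
import numpy as np

jz = np.diag([0.5, -0.5])

a = np.arccos(0.005) / 2                      # tuned so <before|after> = 0.005
before = np.array([np.cos(a),  np.sin(a)])    # <phi_before|
after  = np.array([np.cos(a), -np.sin(a)])    # |phi_after>

weak   = (before @ jz @ after) / (before @ after)
expect = (after @ jz @ after) / (after @ after)

print(weak)    # ~100.0 -- not a possible value of j_z in any sense
print(expect)  # ~0.0025 -- a genuine expectation value, inside [-1/2, +1/2]
```

The denominator \(\braket{\phi_\text{before}}{\phi_\text{after}}\) may be made arbitrarily small while the numerator stays finite, so the "weak value" may be made arbitrarily large – one more way to see that it isn't a value of \(j_z\) at all.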

So even if those people admitted that the "weak measurement" isn't an actual measurement, there's still another dishonesty hiding in the terminology. The "weak value" doesn't actually generalize the "measured value". Instead, it generalizes the "expectation value".

One may invent mumbo-jumbo explanations why it could be natural to consider both the initial state and the final state in similar ratios (an example is to model the weak measurement as a process that takes some time and requires a time-dependent Hamiltonian). But all this mumbo-jumbo is pure demagogy. The reason is that the information about a physical system in quantum mechanics (information only about the system, not some mixed information relating the system to lots of extra conventions) may only be found out by "actual measurements", not by their "weak generalizations". The actual measurements may have some extra sources of inaccuracy but as long as they find out something about the system, they always disturb the system at least as much as Heisenberg – and quantum mechanics in general – states. In particular, if you're measuring the position of a particle with precision \(\Delta x\), you inevitably disturb the momentum at least by \(\Delta p = \hbar/(2\Delta x)\). Regardless of the claim in PRL, I can rigorously prove this version of Heisenberg's principle involving "disturbances", too.
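A minimal numerical check of the saturated version of the inequality – just the position and momentum spreads of a wave packet, before anyone talks about disturbances – may look like this; the Gaussian width and the grid are arbitrary illustrative choices:

```python
# Sketch: a Gaussian wave packet of position spread dx has momentum spread
# hbar/(2*dx); the product saturates Heisenberg's bound. Units: hbar = 1.
import numpy as np

hbar, sigma = 1.0, 0.3
x = np.linspace(-20, 20, 4096)
h = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian, sigma_x = 0.3
psi /= np.sqrt(np.sum(abs(psi)**2) * h)       # normalize

dx = np.sqrt(np.sum(x**2 * abs(psi)**2) * h)  # position spread (<x> = 0)

p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=h)   # momentum grid
phi = np.fft.fft(psi)                         # momentum-space amplitude
prob_p = abs(phi)**2 / np.sum(abs(phi)**2)
dp = np.sqrt(np.sum(p**2 * prob_p))           # momentum spread

print(dx * dp, hbar / 2)                      # ~0.5 and 0.5
```

For any non-Gaussian packet the product only grows; and after a position measurement with resolution \(\Delta x\), the collapsed state is localized within \(\Delta x\), so its momentum spread – the disturbance – obeys the same bound.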

That's what the new crackpot paper hyped by the BBC wants to deny – and it denies it just by changing the rules of the game and pretending that something that isn't a measurement is a measurement.

If you think about the formula for the weak value for a while, you will notice that it's really inconsistent because it suggests that one may know the final state just by performing a procedure that yields a value \(WM(j_z)\) which, as I explained, is a generalized expectation value, not an actual value. This is of course an inconsistent mixture of quantities that may be measured during one particular repetition of the experiment and quantities that can only be extracted from a statistical treatment of many repetitions.

In proper quantum mechanics, you may only learn something about the "after state" if you actually know a quantity you have measured. This measured value gets imprinted into the nearly classical devices and brains as "effectively classical information", and one may show that the new state of the subsystem is the corresponding eigenstate – a fact one may exactly reproduce by a quantum analysis of the whole apparatus-plus-small-system composite.
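A toy sketch of one repetition of this textbook rule (the state and the random seed are my illustrative choices):

```python
# Sketch of a textbook projective measurement of j_z: the outcome is a
# definite eigenvalue and the "after state" is the matching eigenstate.
import numpy as np

rng = np.random.default_rng(0)
psi = np.array([0.8, 0.6], dtype=complex)     # illustrative spin-1/2 state

eigvals = np.array([0.5, -0.5])               # eigenvalues of j_z (hbar = 1)
probs = abs(psi)**2                           # Born rule probabilities

outcome = rng.choice(2, p=probs)              # one particular repetition
after = np.zeros(2, complex)
after[outcome] = 1.0                          # collapse onto the eigenstate

print("measured j_z =", eigvals[outcome])     # always +1/2 or -1/2, never 100
print("state after  =", after)
```

Only because a definite eigenvalue was recorded do we know the "after state"; a "weak value" of \(100\) would identify no eigenstate whatsoever.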

If you only know some generalized "expectation value", and the "weak value" is an example of that, you don't actually know anything about a particular repetition of the experiment, so you can't determine the "after state".

The claim "one may circumvent the uncertainty principle after all" is of course the most obvious application of the concept of the weak measurement by the pseudoscientists and we were only waiting for someone to make himself "famous" by this obvious piece of crackpot interpretation. However, the "weak measurement" has been misused for more modest pseudoscientific goals in the past, too. For example, the "weak measurement" has been claimed to "solve" Hardy's paradox.

What did that mean?

Well, obviously, there's no Hardy's paradox in quantum mechanics. Quantum mechanics predicts probabilities for combined results of any measurements, regardless of the types of measurements that the experimenters choose to perform and regardless of their timing. The predictions are unique and the probabilities obey all the logical rules they have to obey (belonging to the interval \([0,1]\), being added when mutually exclusive outcomes are connected by "OR", causality, locality, and so on). So there can't possibly be any paradox! You either know the right prediction or you don't.

Hardy's paradox, much like all other pieces of "quantum recreational mathematics", is only paradoxical if you're trying to think about the world classically. So the claim that the "weak measurement has solved Hardy's paradox" was nothing else than another classical mumbo-jumbo justification for the indefensible, namely that one can construct a "classical model" that gives the right predictions for situations such as Hardy's experiment. But this is only "possible" if one throws all standards out of the window and uses concepts such as "weak measurement" in such a deliberately vague way that one overlooks the fact that it isn't a measurement at all.

Let me mention that there are various legitimate ways to measure the position "approximately" so that the momentum doesn't get disturbed infinitely strongly. For example, one may ask for the probability that the system finds itself in a state associated with a particular "fuzzy bump" in the phase space – which is given by just another vector in the Hilbert space. One may measure many such probabilities simultaneously (try to cover the phase space with similar bumps or their modifications) and obtain some approximate information both for \(x\) and \(p\) (phase cells have nonzero width and height) while both of them are only disturbed by a finite amount. In this "Ansatz" for a "not too pushy measurement", Heisenberg's inequality may again be proven rigorously; it's the same proof involving the error of position and the error of momentum, just applied to the "benchmark states" onto which we project instead of the actual state.

Well, you must interpret the uncertainty of the position and the momentum in the "benchmark state" as a part of the disturbance even though you may be lucky and have an actual state equal to a benchmark state (which means that the state won't be disturbed at all). But if you average over lucky and unlucky cases, you will again be able to prove that the averaged disturbances obey the inequality even if the disturbance only counts the change of the actual state vector.
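If you want to see such a covering by "fuzzy bumps" in action, here is a sketch in which I assume – my choice, nothing forced by the argument – that the benchmark bumps are ordinary harmonic-oscillator coherent states; the Fock-space truncation and the phase-space grid are arbitrary:

```python
# Sketch: cover phase space with coherent-state "bumps" |alpha> and infer
# x and p from the overlap probabilities. Bumps = harmonic-oscillator
# coherent states with hbar = m = omega = 1 (an illustrative choice);
# the truncation N and the grid are arbitrary.
import numpy as np
from math import factorial

N = 40                                        # Fock-space truncation
n = np.arange(N)
fact = np.array([float(factorial(k)) for k in n])

def coherent(alpha):
    """Coherent state |alpha> in the truncated Fock basis."""
    return np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(fact)

psi = np.zeros(N, complex)
psi[0] = 1.0                                  # vacuum: Var(x) = Var(p) = 1/2

# Bump centred at (x, p) is |alpha> with alpha = (x + i p)/sqrt(2).
g = np.linspace(-4.0, 4.0, 81)
h = g[1] - g[0]
X, P = np.meshgrid(g, g)
Q = np.array([[abs(np.vdot(coherent((xx + 1j * pp) / np.sqrt(2)), psi))**2
               for xx in g] for pp in g]) / (2 * np.pi)

norm = Q.sum() * h * h                        # bumps cover the state: ~1
var_x = (X**2 * Q).sum() * h * h / norm       # inferred Var(x): ~1, not 1/2
var_p = (P**2 * Q).sum() * h * h / norm       # inferred Var(p): ~1, not 1/2

print(norm, var_x, var_p, np.sqrt(var_x * var_p))   # spread product ~hbar = 1
```

The inferred variances are the true ones plus the bump's own width (one half each, in \(\hbar=1\) units), so the product of the inferred spreads comes out near \(\hbar\), safely above \(\hbar/2\) – exactly the behavior that distinguishes a legitimate fuzzy measurement from the "weak" pseudo-measurement.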

There are other, effectively equivalent yet legitimate ways to limit the "disturbance" of the system for the price of learning the values of \(x\) and \(p\) only approximately, but the "weak measurement" isn't one of them because it's an arbitrary "statistic" that always depends on the precise protocol defining which weak measurements we perform and how. A way to distinguish legitimate realizations of "inaccurate measurements of the position" from the illegitimate ones is to analyze whether they obey Heisenberg's inequality for the "precision of position" and "disturbance of momentum". The legitimate techniques to measure obey this inequality; the illegitimate ones often violate it.

I never know whether people making the claims – or encouraging the journalists to make claims – that "we have proved that Heisenberg was wrong all along" are extraordinarily dumb or extraordinarily dishonest but the most frequent answer I tend to pick in similar situations is that it is something in between. Those people must subconsciously know that what they're saying is rubbish so they're not "intrinsically stupid"; but they just decide that the most convincing method to deceive other people is to deceive themselves, too. So they actually work hard on themselves so that at least superficially, they believe their own garbage.

At any rate, it's clear that at a deeper level, they must know that this research is pure garbage. It isn't a paper about a shocking experiment in which some physicists did something that happened to lead to surprising and important results. Instead, this new "BBC" paper and all similar papers always start with the conclusions. The authors decide what they want to "achieve", e.g. to prove that Heisenberg has always been wrong, and they just construct a required combination of loaded and distorted terminology and redundant experimental setups that make it look like that there is some science behind the preconceived and wrong conclusion.

Of course these people "intrinsically" know some basics of quantum mechanics and they used these basics to construct the experiment. Of course they knew in advance what the result would be, and it's the result predicted by quantum mechanics. Of course there isn't an infinitesimal piece of evidence for the claim that quantum mechanics is in doubt. And in fact, the PRL paper doesn't claim that they cast doubt on quantum mechanics itself (only the BBC does); it "only" (still incorrectly) claims that they disprove Heisenberg's "generalized" uncertainty principle for the "error in position" and "disturbance of momentum by the measurement" by dishonestly redefining what the measured position is (a "generalized measurement" that can give you \(100\) for the spin isn't a legitimate measurement of the spin). But by a combination of vague and distorted redefinitions of concepts, loaded language, and "legitimate simplifications" expected from journalists, it is always possible to write down a paper that will lead to 100.0000% incorrect subtitles such as "pioneering experiments cast doubts on quantum mechanics" and I guess that this has been the goal from the beginning.

It's a nasty junk science and famehunting and all the scientists and journalists around this scandal are assholes.

And that's the memo.
