Friday 30 November 2012

Does the Bigfoot exist?

You must have noticed that two days ago, news sources were hyping a claim of Melba Ketchum, a Texan veterinarian, that she has sequenced the DNA taken from Mr Bigfoot Sasquatch Jr.

After five years of research, her DNA team in Nacogdoches has allegedly determined that he exists and that he is the son of an American woman (because the mitochondrial DNA inherited from the mother matches Homo sapiens sapiens) and her hairy male primate partner whose ancestors split from "us" (apologies to TRF readers who are Sasquatch Americans whom I don't count right now) about 15,000 years ago (because they are said to have discovered novel nuDNA related to humans and other primates).

Oh, I see. The Register says that they claim that the sex between the human female and the exotic creature took place 15,000 years ago.




I would like readers with a good enough background in genetics to tell me whether the claims would be possible if the hypothetical hairy 8-foot gentleman exists and, more generally, what the most likely explanation of all these claims is. Thanks...



Before I am told something more sensible: I find the story very unlikely and the source doesn't look excessively credible, but the story is not impossible. It doesn't seem to be banned by the laws of Nature that there could be a primate that's closer to us than the known ones, with a very small population – and with the remaining members dying in car accidents every year. However, it's even more suspicious that this is supposed to be the offspring of humans and a previously unknown primate.

Firewalls vs analytic continuation

Two new interesting anti-firewall papers and one pro-firewall paper

The black hole firewall argument by AMPS is probably invalid but it has already led to a significant new wave of research and discussions about the black hole information puzzle which is a good thing.



There are some new papers since my most recent comments on the firewall controversy. But what happened today is kind of remarkable: a whopping three papers on the black hole firewalls appeared on a single day!




The papers are the following ones:
An Infalling Observer in AdS/CFT by Kyriakos Papadodimas, Suvrat Raju

Black Hole Entanglement and Quantum Error Correction by Erik Verlinde, Herman Verlinde

A Note on (No) Firewalls: The Entropy Argument by Yasunori Nomura, Jaime Varela
The middle paper by Verlinde Squared is the only one that tries to confirm the firewall argument – well, it seems that the firewall is taken as an assumption and they are adding some formalism that is mathematically analogous to the quantum error correction codes (from quantum computation). The remaining two papers conclude that the black hole firewall argument is invalid although the first paper only claims to show this point "indirectly".

I know all the authors of the first two papers – Kyriakos and Suvrat were very smart Harvard students and Suvrat was also my excellent T.A.

The authors of the last paper (which is a continuation of July 2012 and October 2012 papers they wrote with Sean Weinberg) are not known to me, as far as I remember, and their short paper refuting AMPS isn't quite 100% comprehensible to me so far, but it's rather likely that they're saying some of the same things that I do. When I read lots of sentences about the degrees of freedom being "the same" and about the bases being dynamically determined by the relevant Hamiltonian etc. (bottom of page 1, top of page 2), it almost sounds like they copied it from this blog. Indeed, for an observer who may fall inside the black hole, the state evolves into a superposition of macroscopically different states (they say "classical worlds"), and different terms in this superposition lead to different natural operators (local operators inside vs. operators defining properties of the Hawking radiation) and therefore to different "preferred bases" of states onto which measurements may "project".

I totally agree with that – well, see my description of the same point – and I also think it's an essential point about quantum mechanics that AMPS misunderstand, although I don't yet fully follow Nomura's and Varela's explanation of why AMPS are wrong in its entirety.
TBBT: Last night (Parking Spot Escalation), Sheldon (plus Amy) fought Howard (plus Bernadette) because of Sheldon's parking spot. Sheldon was also computing some six-loop amplitudes on that spot using twistors. The exchanges between the men were more physical but the women's conversations were nastier.
The Verlinde paper may be kind of relevant but I don't believe it really applies to genuine black hole physics because some of the implications/assumptions aren't right: they seem to uncritically adopt all the AMPS claims as their starting point. It seems kind of "expected" to me that one may write down some toy models that agree with the firewall picture. But such models disagree with the equivalence principle near the event horizon, so they are not constrained by this particular strong condition, which makes it easier to find "examples". To summarize, I think that they're constructing a toy model for something that isn't equivalent to black holes in quantum gravity. They're really talking about some kind of a star, not a black hole, that is claimed to behave as quantum error correction codes. (Their not-so-correcting codes lead to errors with a probability approaching 100%, anyway, but I don't want to discuss the details of their paper here. I also think that the attempts to find links with quantum computation or other fields of science are kind of artificial and driven by the desire to be "cool". And their comments about the "black hole final state" sound meaningless to me because the "black hole final state" may only be discussed if the black hole interior is empty and obeys GR almost up to the singularity, which they deny by adopting the firewall assumption.)



It's really the Kyriakos-Suvrat paper that I am most intrigued by. I would even say that it's the best "general enough" paper that's ever been written about the black hole information issues – although the authors are, of course, standing on the shoulders of some giants or at least large people. ;-) They use the AdS/CFT correspondence and the thermofield double formalism of Takahashi and Umezawa (dating back to 1975 and reprinted in 1996) to show quite explicitly that the black hole complementarity works. Analytic continuation, especially shifts of time by \(i\beta/2\), plays a key role and I feel nearly certain that this is a necessary and right tool to look at the black hole interior properly and to explain why the black holes look nearly thermal but don't have to be thermal.

In the past, various people have tried to use the AdS/CFT to look inside black holes. In particular, I can't forget those highly thought-provoking yet confusing attempts by Fidkowski, Hubeny, Kleban, and Shenker in 2003. They looked for some saddle points of the path integral corresponding to trajectories visiting the interior, or something like that. But the continuation used by Kyriakos and Suvrat seems much more solid and natural to me. What is it about?

One starts by rewriting thermal expectation values of operators as expectation values of "corresponding" operators acting on a larger (doubled) Hilbert space. The pure state in the doubled Hilbert space that corresponds to the thermal state on the non-doubled Hilbert space is simply\[

\ket{\Psi_{\rm tfd}} = \frac{1}{\sqrt{Z_\beta}} \sum_E e^{-\beta E/2} \ket E \otimes \ket E

\] The expectation values of operators \({\mathcal O}\) in the thermal state (and non-doubled Hilbert space) may be rewritten as expectation values of similar operators \({\mathcal O}\) in the state above, one that belongs to the doubled Hilbert space, as long as these "similar operators" only act on the first copy (first tensor product factor) of the Hilbert space above. Look at equations (5.1) etc. of their paper if you don't know what I mean but it's kind of trivial.
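
To see explicitly why this rewriting works – this is just unpacking the trivial statement above (and their equations (5.1)), assuming a discrete, non-degenerate spectrum for simplicity – take an operator acting on the first factor only and sandwich it in the thermofield doubled state:\[

\langle \Psi_{\rm tfd}| \,({\mathcal O}\otimes{\bf 1})\, |\Psi_{\rm tfd}\rangle = \frac{1}{Z_\beta}\sum_{E,E'} e^{-\beta(E+E')/2}\,\langle E|{\mathcal O}|E'\rangle\,\langle E|E'\rangle = \frac{1}{Z_\beta}\sum_E e^{-\beta E}\,\langle E|{\mathcal O}|E\rangle = \frac{{\rm Tr}\,[e^{-\beta H}\,{\mathcal O}]}{Z_\beta}.

\] The second copy of the Hilbert space is only touched through the overlap \(\langle E|E'\rangle=\delta_{E,E'}\) which collapses the two independent sums into a single thermal trace.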

An annoying detail is that the individual phases of the energy eigenstates \(\ket E \otimes \ket E\) could be changed arbitrarily without changing the expectation values and I am not sure what these phantom cousins of the pure state mean and which of them are allowed.

At any rate, once you have doubled the Hilbert space, you have the freedom to consider additional operators \(\tilde{\mathcal O}\) that only act on the second tensor factor of the doubled Hilbert space. Consequently, the doubled Hilbert space may accommodate an entirely new region of the spacetime and a new observer. If these operators were totally unconstrained, you would indeed double the number of degrees of freedom. But Kyriakos and Suvrat instruct you to believe something else: the operators \({\mathcal O}\) and \(\tilde{\mathcal O}\) aren't quite independent of each other. Instead, in the non-doubled Hilbert space, you're only allowed to consider expectation values of products of operators in which you replace or identify\[

\tilde{\mathcal O}(t,\vec x) \to {\mathcal O}(t+i\beta/2,\vec x).

When I was writing these lines for the first time, I didn't quite understand whether they really meant that the spatial coordinates \(\vec x\) are kept fixed and what it would mean (the signature of the coordinates gets kind of reversed behind the event horizon etc.), but it is clear what they do with the time coordinate \(t\).
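
For orientation, the appearance of imaginary time shifts isn't mysterious by itself: it is the same analytic structure that appears in the standard KMS property of thermal correlators (a textbook fact, not anything specific to their paper). Using the cyclicity of the trace and \(e^{-\beta H}{\mathcal O}(t)\,e^{\beta H}={\mathcal O}(t+i\beta)\), one gets\[

\langle {\mathcal O}(t)\,{\mathcal O}(0)\rangle_\beta = \frac{1}{Z_\beta}\,{\rm Tr}\left[e^{-\beta H}\,{\mathcal O}(t)\,{\mathcal O}(0)\right] = \frac{1}{Z_\beta}\,{\rm Tr}\left[e^{-\beta H}\,{\mathcal O}(0)\,{\mathcal O}(t+i\beta)\right] = \langle {\mathcal O}(0)\,{\mathcal O}(t+i\beta)\rangle_\beta.

\] The half-shift by \(i\beta/2\) in the rule above may be thought of as splitting this full shift by \(i\beta\) symmetrically between the two copies of the doubled Hilbert space, in the same way as the factor \(e^{-\beta E/2}\) (rather than \(e^{-\beta E}\)) appears in \(\ket{\Psi_{\rm tfd}}\).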

This identification gives you a very specific realization of the black hole complementarity. You literally write down the local operators perceived by the infalling observer as some analytic continuation – evolution via complex time – of the operators that are outside. While the tilded and untilded operators commute with each other on the doubled Hilbert space, their representatives on the actual Hilbert space (where we deal with the thermal state) don't commute exactly. However, the entanglement between them that is imposed on you by shrinking the doubled Hilbert space to the normal one is so complicated that it doesn't affect any reasonable-precision correlators of a reasonable number of operators.

(Question for you: does the analytic continuation preserve the reality/Hermiticity conditions for the fields?)

Their construction, if correct, explicitly confirms the equivalence principle and the black hole complementarity in the case of the "large AdS black hole" which is larger than the AdS radius or so and, consequently, doesn't Hawking radiate (unless we count the Poincaré recurrences as a type of the Hawking radiation). But it seems that the general formalism demotivates, to put it mildly, the AMPS claims about the small, and therefore evaporating, black holes as well.

Kyriakos and Suvrat also claim to refute something about fuzzballs. I think they're almost certainly right but I didn't know that Mathur et al. would disagree. The question is whether the infalling observers may see the hypothetical fuzzball structure of the black hole. Kyriakos and Suvrat say No – because the observer would need to measure things with an exponential accuracy or measure high-\(n\) \(n\)-point functions. I am confident that they're right but I thought that Mathur et al. would agree that the fuzzball structure of a microstate could only be measured by some very accurate measurements done by observers outside the black hole who send some probes into the black hole very cleverly, or it may be practically measured by no one at all.

While I think that the analytic continuation is the most natural approach to the black hole interior, it doesn't seem strictly incompatible with the existence of the fuzzball solutions. If the solutions exist, they exist. They may provide you with another description which may be equivalent.

Of course, I have always thought that the analytic continuation would offer new perspectives to understand the black hole information and quantum gravity issues in general, and we may already be seeing the transformation of these sketches into some tangible mechanisms. In this particular paper, I would say that an explicit construction was shown that preserves the equivalence principle accurately up to exponentially tiny errors but that is still compatible with the proposition that the degrees of freedom that the interior operators act upon are just scrambled versions of some degrees of freedom outside the black hole.

The black hole complementarity works, stupid, and the firewalls don't have to be believed to exist.

Meanwhile, Kyriakos and Suvrat write lots of very crisp and probably true general things about what thermalization means, how it requires a division of the degrees of freedom into the accessible coarse-grained ones and the inaccessible fine-grained ones (the environment), how the density matrix in the former (traced over the latter) quickly becomes thermal, why the division may be done in various ways, and so on. It is simply a wonderful paper showing a lot about how things actually work.

P.S.: I've spent a few hours with the Kyriakos-Suvrat paper and I am sure that many folks who have read it will find some of the sentences above misleading. Incidentally, the Verlinde Squared paper, while classified as pro-firewall, could be kind of compatible with some of the other pictures, including Kyriakos-Suvrat picture. The error correction code may be a different way of describing the construction of the tilded operators. But their discussion is much less accurate when it comes to the GR-like description of the degrees of freedom so they simply can't answer many questions that Kyriakos and Suvrat may. Quite generally, many papers' claims that some principles have to be sacrificed (probably at the leading order or in accessible experiments) are due to sloppy guesses of the kind "it's hard to imagine that this could work, too". But Kyriakos and Suvrat do show that all these things do simultaneously work at an amazing accuracy and there's no contradiction.

Thursday 29 November 2012

Dark matter discovery: around the corner?

Clara Moskowitz did a pretty good job

Two days ago, I read a popular article about dark matter:
Dark Matter Mystery May Soon Be Solved
It was written by Clara Moskowitz, an important editor of Space.com, and I thought that the article was unusually good.

The article sketched what dark matter is, what it is composed of according to the most convincing theories (a WIMP or an axion, with the speculations about hidden dimensions properly identified as a cherry on top), the arguments for and against superpartners as dark matter particle candidates, and the ongoing experiments – as well as those that are getting started, such as LUX – that are trying or will be trying to directly detect the dark matter particle.




In my opinion, the content of the article was attractive yet balanced and the author collected the voices of some highly relevant scientists.

Matt Strassler wrote an unexpectedly critical text about Clara Moskowitz's article, however. His criticism focused on the main proposition of the article from the title, namely that the discovery of the composition of dark matter is probably imminent according to some scientists. He complained:
I have to admit that this kind of phraseology, which one often sees in the press in reports about science, drives me a bit nuts.  Which scientists? How many of them?  You can’t tell from this line whether this is something that a group of three or four mavericks are claiming, or whether it is conventional wisdom shared by most of the community.  
It is understandable that he wants the assertions to be accurate enough. However, he suggests what articles about science should emphasize – some kind of "consensus" – and I completely disagree with that. There is clearly no consensus about questions that aren't quite settled yet and, more importantly, science isn't the search for consensus. Whether the dark matter has been detected and, if it has not, whether it will be detected soon is a question that no one really knows for sure and Clara Moskowitz didn't try to claim otherwise. It's not important how many scientists would answer Yes or No in a poll. Science is simply not done in this way. A person aligned with majorities may be close to an "average scientist" but good science is primarily done by "good scientists" which is something else.

But she wrote that there's a good case to believe that we will be probing the "last places" of the parameter space where the dark matter particles could be hiding. I think it's a fair appraisal and what I expect from a fair science-oriented article isn't some counting of heads who don't really know what to say and most of whom are affected by their environment but rather a balanced presentation of the arguments in favor or against the propositions that are being made.

So when it is being said that the dark matter is likely to be discovered in years or earlier, the only "catch" could be if someone has good enough arguments that this proposition and the known arguments supporting it are wrong. If such arguments aren't available on the market, the author of the popular article isn't obliged to report any "votes" to the contrary or struggle to count the "supporters" of a particular proposition. In science, where there are no arguments, the counting of heads is simply irrelevant. It's unfortunate if Matt disagrees with this proposition. And I am kind of scared by his recommendation to write about "mavericks". It's absolutely not a science writers' business to label scientists "mainstream people" and "mavericks" or "seers" and "craftsmen" or to attach any other emotionally loaded, propagandist adjectives to scientists and theories, for that matter. Their task is to impartially inform about the known theories and conceivable proposed theories, about the discoveries and exclusions.

I would quantify my subjectively felt probability that the dark matter particle will be discovered within 5 years as 70 percent. It's more likely than not and I think that if you start with this number, it's already a remarkable situation and it justifies the title at Space.com, too.

Matt is right that the crucial property of the dark matter particle that decides its looming (non)discovery is whether it has nonzero non-gravitational, i.e. detectably strong, interactions. If it only interacts gravitationally, we will probably not detect the particle directly here on Earth. I would say the odds may be 20% that the dark matter is composed of purely gravitationally interacting particles, and 10% that it is not composed of isolated particles at all (including the possibility that there's no dark matter and the right explanation of the galactic rotation curves etc. is totally different).

This still leaves those 70% for the "detectable" scenarios, and if you cover the parameter space with some expected probability distributions, you will see that it's extremely likely that the experiments will see the particle over there. And I think it's legitimate for popular science writers to write about these expectations because they're exciting and science fans in the public should simply be interested in them – and, in fact, many of them surely are.

Matt Strassler usually writes about events "half a year after the funeral" so it's unlikely that the people who want to know "what is shaking" are spending too much time with his blog. He still didn't acknowledge that the OPERA neutrino claims had been proven invalid even 3 months after everyone knew they were invalid, and he had to wait until July 2012 even though, because of the (combined) 4+ sigma bumps at the LHC and other arguments, all insiders had known about the existence of the 125+ GeV Higgs boson since December 2011.

There's one important point, completely ignored both by Moskowitz and Strassler, namely that several experiments are claiming that they have already detected a dark matter particle. The "Is Dark Matter Seen" war has brought us lots of battles and salvos that were covered on this blog. I think that the probability may be something like 40% that the positive side's claims are right and the great discovery has already been made. Of course, if these claims are right, it's very likely that they will get settled by "significantly stronger" experiments that will supersede the existing ones in the coming years.

Let me dedicate a few words to the title. Recall that Moskowitz's title was
"Dark Matter Mystery May Soon Be Solved."
Strassler proposes the following alternative with a supportive slogan:
"Searchers for Dark Matter Optimistic About Near Term."  That at least would have been undeniable.
I actually think that Moskowitz's title – and even more so her original title, "Dark Matter Running Out of Places to Hide" – is more informative, fair, and impartial than Matt's proposed replacement. Her title informs us that the following years are expected to have an unusually high probability per year to find the dark matter particle, which is true. She isn't claiming it's certain or anything of the sort.

The reason why I find Matt's alternative title problematic is that the word "optimistic" introduces emotional biases. Clearly, he relates "optimism" to the "discovery". But that's not how science should proceed. As Moskowitz's title described more correctly, the scientists' task is to answer questions and solve mysteries and have no a priori "preferences" about what the answers should be. So even though the discovery of a new particle brings more fame to the experimenters than a null result (and meaningful experimental projects should have high enough probabilities of finding a result that isn't null), it's just not a job for the "searchers" to be biased and prefer one result over another. If they find out that the dark matter can't exist in the range of parameters that their experiment may probe, i.e. if they can rule out some precise types of the dark matter particle, it's an insight and a contribution to science, too. So as long as their experiment will work, they should be optimistic, whatever the results of the experiment will be!

If you understand the scientific optimism in the impartial way, Matt's proposed title is vacuous. Moskowitz's title has beef and the beef reproduces the scientific arguments on the market.

Palestinian statehood and the U.N.

Update, vote: 138 Yes, 9 No (including Czechia), 41 Abstain
The State of Judenfrei Palestine was painfully admitted to UNESCO one year ago. Today, another vote of this kind is expected. It's almost certain that the vote will determine that the Israeli Arabs will win the same status in the U.N. that the Vatican enjoys – the status of a non-member state recognized by the U.N.

It doesn't seem fortunate to me that just weeks after the Israeli Arabs launched a missile campaign against Israel, the world's "peace organization" is going to declare by a vote that they're a suppressed group of victims who may be living on an occupied territory. If the opposite of occupation is that they are free to throw rockets at any neighbor they find, then occupation is a necessary condition for peace.




Most U.N. members will support the wannabe "state". Also, almost 1/2 of the EU member states will support it and almost everyone in the rest will abstain.

Israel, the U.S., and Canada have said they will oppose the bid. The list of other supporters sounds like a joke: Micronesia and Guatemala. I hope it's still likely that the Czech Republic (along with one or two Baltic states) will belong to this group, too. The leading politicians as well as most of the population oppose the Israeli Arabs' political projects.

But I can imagine there is some pressure from the West. Until very recently, Germany was thought to vote "No" as well but it may have changed its mind and it will probably abstain. I am sure that there are pressures – including pressures from Czechs for whom the mindless obedience to the EU is the greatest political value - that are trying to align the Czech attitude with the prevailing group think in the EU.



A red heart of Europe is still pulsating at the heart of Europe.

And much like in the 1930s, anti-Jewish sentiments have become fashionable in the EU again. The European Union wants to ban Jewish settlers in Europe because they're "violent" or "aggressive" and their products have to be labeled by a yellow star (or another "politically correct" equivalent; symbols evolve but the basic logic is still exactly the same, namely the opinion that the Jews are a special nation that doesn't have the right to settle anywhere). When I heard about these things, I thought they were jokes but they're apparently and unfortunately not.

I hope that the pressures attempting to make Czechia neutral in this vote will lose. We're surely a less potent ally, but I would say that Czechia is an even more determined and reliable ally of Israel than the U.S. and Canada.

Update: Around 5 pm our time, foreign minister Karel Schwarzenberg leaked that we will vote against the statehood bid (assuming Mr Schlafenberg won't fall asleep) because our policy, including the opposition to all unilateral steps, should be consistent. It seems that the official Iranian agency's claim that Czechia would abstain was based on no real data whatsoever (typical shameful propaganda), just someone's vague "guess" that we should always mimic Germany. But despite our proximity and dense relations, we haven't had to do that since 1945, you know. ;-)

At any rate, I don't expect this vote to be game-changing because of many precedents I remember. Sometime in the late 1980s, during the last months of communism in Czechoslovakia, there was a similar vote in the U.N.

My classmate V.K. – who later became the president of Patria Finance – was immensely excited that we were just living on a special day when a new state, and it was Palestine, was born. Needless to say, the situation remained exactly as muddy for the following 20+ years as it had been previously.

Wednesday 28 November 2012

David Ian Olive: 1937-2012

Clifford Johnson mentioned some sad news.
Off-topic but good news: Mathematica 9 was released today and it's big, offering the first useful "predictive interface" with suggestions in the software world, analysis of social networks, interactive gauges, system-wide support for 4,500 units, systematic addition of legends, support for Markov chains and random processes, integration of the R language (so beloved by Steve McIntyre), and more
British field theorist and string theorist David Olive, Commander of the Most Excellent Order of the British Empire, Fellow of the Learned Society of Wales, and Fellow of the Royal Society died on November 7th, at the age of 75.

If I remember correctly, I've never met him. But I've experienced lots of his key and beautiful insights.




INSPIRE shows that he co-authored 7 articles above 500 citations and another one above 100 citations. Let's look at them; I apologize that the remaining 81 papers that Olive has written won't be discussed below at all.

Montonen-Olive duality

In 1977, he and Claus Montonen of Finland published the Montonen-Olive duality (Wikipedia). It was a fascinating and utterly modern – according to the 2012 criteria – discovery combining (at that time) rather fresh solutions describing the magnetic monopoles (by 't Hooft and Polyakov) and supersymmetry.

In the spirit we know from some very recent "visionary conjectures", they hypothesized that in the \(\NNN=4\) supersymmetric gauge theory, the magnetic monopole states – that are as heavy as \(C/g^2\) and that we know as quantum states localized around nontrivial classical solutions, regulated versions of the Dirac magnetic monopole – come in degenerate families that have the same structure as the electrically charged states. If it's so, their interactions are likely to mimic the electric interactions as well. In fact, the whole gauge theory is invariant under a strong-weak duality \(g\leftrightarrow 1/g\) that, when combined with the shift of the \(\theta\)-angle, extends to the currently well-known \(SL(2,\ZZ)\) S-duality symmetry.
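
For readers who want to see the formula (this is the standard modern bookkeeping rather than the notation of the original 1977 paper): the coupling and the \(\theta\)-angle combine into a single complex parameter\[

\tau = \frac{\theta}{2\pi} + \frac{4\pi i}{g^2}

\] and the \(SL(2,\ZZ)\) S-duality acts on it via \(\tau\to(a\tau+b)/(c\tau+d)\) with integer \(a,b,c,d\) obeying \(ad-bc=1\). The Montonen-Olive strong-weak inversion of the coupling (at \(\theta=0\)) is the special element \(\tau\to-1/\tau\) while \(\tau\to\tau+1\) is the shift of the \(\theta\)-angle by \(2\pi\).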

Note that Maxwell's equations \[

\begin{array}{|c|c|}
\hline
\nabla\cdot \vec E =\frac{\rho_E}{\epsilon_0} & \nabla\cdot \vec B = [\rho_M]\\
\hline
\nabla\times \vec B =\mu_0 \vec j_E+\mu_0\epsilon_0\pfrac{\vec E}{t}&
\nabla\times \vec E = [-\frac{\vec j_M}{c^2}] -\pfrac{\vec B}{t}
\\
\hline
\end{array}

\] have always had a symmetry between the electric fields and the magnetic fields (in particular, the electromagnetic induction is mutual and works in "both directions", which is important for the existence of oscillating electromagnetic waves), and if Sheldon Cooper had found genuine magnetic monopoles at the North Pole, the symmetry would extend to the electric and magnetic charges, too. In fact, in the real world, magnetic monopoles indicated by the density \(\rho_M\) and the current \(\vec j_M\) probably exist, too, but the lightest ones are very heavy particles, unlike electrically charged particles such as the electron which are light, so there's no symmetry between the masses of the electric and magnetic charge carriers. Those particles are so inaccessible that we haven't seen them yet and \(\rho_M,\vec j_M\) are omitted in Maxwell's equations.
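
In formulas, the symmetry of the sourceless equations is the rotation (I am only writing its discrete, 90-degree version)\[

\vec E \to c\vec B, \qquad c\vec B \to -\vec E,

\] which maps the two curl equations onto each other; once magnetic sources are added with a suitable normalization, \((c\rho_E,\rho_M)\) and \((c\vec j_E,\vec j_M)\) have to be rotated in the same way for the full set of equations to return to itself.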

However, in some other quantum field theories and/or vacua of string theory, the behavior of the sources, i.e. the electric and magnetic charges, may become symmetric, too. The maximally supersymmetric Yang-Mills theory is currently our simplest example of that. (The complete symmetry between the electric and magnetic sources is violated unless \(g=1\).) The high degree of supersymmetry doesn't make the theory contrived, as anti-supersymmetric crackpots of the Woit type love to say; instead, it makes the theory constrained, more fundamental, and simple to calculate because the strong constraints from supersymmetry guarantee that many easily calculable approximations are actually exact. When we perform these calculations, we may verify numerous consistency checks.

(The Montonen-Olive results were kind of extended to lower supersymmetry, \(\NNN=2\), by Seiberg and Witten, and to \(\NNN=1\) theories by Seiberg. Lots of new physics and difficulties arise as the less supersymmetric gauge theories become less constrained and more general.)

In string theory, the Montonen-Olive duality may be interpreted as a symmetry inherited from some stringy backgrounds, e.g. the type IIB string theory, and it may also be geometrically proved using the (2,0) superconformal field theory compactified on a two-torus. If I see enough readers who want some new fresh perspective on the Montonen-Olive duality and who have a realistic chance to "get it", I will write about it later.

Olive wrote two more famous papers on magnetic monopoles in 1977 and 1978, with Goddard and Nuyts and with Goddard, respectively. And in 1978, he also co-wrote a paper with Edward Witten showing that supersymmetry algebras may include topological (e.g. "magnetic") charges on the right hand side.

In 1985 and 1986, during the (middle and later) first superstring revolution, Olive co-authored three famous papers on Kac-Moody and Virasoro algebras and their representations (with Goddard and, in two cases, also Kent). But let me return to one more key paper he co-authored in his miraculous year 1977.

GSO projection and spacetime SUSY in string theory

Olive's most cited paper (full text), approaching 1,000 citations, was written together with Ferdinando Gliozzi and Joel Scherk (who tragically died in 1980).

I think that with 50 pages, the paper is incredibly verbose, redundant, and in some aspects confusing (some of the R-symmetry groups seem to be presented as smaller than they actually are, and so on). To be fair, most of the results were "announced" in an earlier 15-page paper in PLB. And the terminology is surely outdated. If you open the paper, you have to return to the era when string theory was studied by a dozen heroes in the world, when it was called "dual models", and when superstring theory (with world sheet fermions) was referred to as the "dual spinor model".

But forget about all the doubts: it was a damn important paper.

It was the first paper that showed that realistic spacetime supersymmetry actually emerges from string theory. For a few years, it had been known from the work by Neveu and Schwarz and the work by Ramond (especially the latter was the first emergence of unbroken supersymmetry in the West – on the two-dimensional world sheet of string theory) that string theories with world sheet fermions might exist. But things had been puzzling.

Neveu and Schwarz talked about closed strings whose fermionic degrees of freedom were antiperiodic:\[

\psi^\mu(\sigma+\pi) = - \psi^\mu(\sigma).

\] But Ramond talked about closed strings with periodic fermions, i.e. the equation above without the minus sign. It wasn't clear which of them was right, whether only one of them was right at all, how to combine them if both of them were right, and what fields in the spacetime actually survive.

GSO clarified those issues. One has to use both the NS antiperiodic boundary conditions and the R periodic boundary conditions. But because we have "doubled" the number of basis vectors, we have to reduce it to one-half again – in both sectors. That's what the GSO projections are good for. Only states that obey\[

(-1)^{F_L}\ket\psi = (-1)^{F_R}\ket\psi = +\ket\psi

\] are included in the physical spectrum; the remaining 3/4 of states whose eigenvalues of \((-1)^{F_L}\) or \((-1)^{F_R}\) are equal to \(-1\) are labeled as unphysical and filtered out from the spectrum. (Note that the doubling and halving of the basis vectors was done separately both for the left-moving excitations and the right-moving excitations of the closed string).

This is a consistent constraint on the physical spectrum because \((-1)^{F_L}=\pm 1\) which counts whether the excited string state is bosonic or fermionic (even or odd number of fermionic excitations) is a symmetry, and similarly for \((-1)^{F_R}\).

This consistent constraint is actually a natural "flip side" of the fact that we include both the NS and R sectors. To allow the fermions to be both periodic and antiperiodic on the closed string, we must declare the operator \((-1)^{F_L}\) to be a part of the gauge symmetry group. Note that this operator is what changes the sign of fermions if it acts on them (on these fellow operators) via conjugation (and that's the right way for operators to act on other operators)\[

(-1)^{F_L}\psi_L^\mu (-1)^{-F_L} = -\psi_L^\mu

\] which is true because on the left hand side, reading from the right, we count the fermionic excitations, then we add one more via \(\psi_L^\mu\), and we count them again – so the two counting factors differ by a sign while the excitation itself survives. So the string with the opposite boundary condition is obtained by "twisting" using this operator \((-1)^{F_L}\): the operators on the string are periodic up to a conjugation by this operator.

But we may only say that "it is allowed to twist and conjugate in this way" if this operator \((-1)^{F_L}\) is considered to be "de facto the identity operator", more precisely, if it is treated as an element of the gauge group. But if it is so, the physical states must be invariant under this operator. The latter constraint is the GSO projection.

This GSO projection has lots of desirable consequences. The invariance under the "diagonal" GSO projection \((-1)^{F_L+F_R}\) which follows from the two chiral invariances is nothing else than the condition imposing the correct spin-statistics relationship. It's because you should have been worried that we have world sheet fermions \(\psi_\mu\) that are spacetime Lorentz vectors – vector-like fermions are "bad". And indeed, the "diagonal" GSO projection only allows you to add these spin-statistics-violating excitations in pairs so that the spin-statistics relationship is preserved for the physical states.

However, the GSO projection constrains the left-moving and right-moving excitations separately, as we have said, so it's stronger. (The "superstring" theories with the diagonal GSO projection only are modular-invariant – they obey a certain related consistency condition – but they still predict tachyons, so their degree of consistency is pretty much on par with the \(D=26\) bosonic string theory. They're known as type 0A/0B string theories.) The chiral projection is capable of eliminating the tachyon from the open-string spectrum and the tachyon from the closed-string spectrum, too. It reduces the "Dirac spinor" of states in the Ramond sector to a "Majorana-Weyl spinor", which is also good. And when you look at the remaining bosonic and fermionic states of the superstring, you will find out that their numbers match at each excited or unexcited level due to a nontrivial identity – "aequatio identica satis abstrusa" (a rather obscure formula) – that early string theorist Carl Jacobi discovered in 1829.
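
The abstruse identity itself, written in terms of the Jacobi theta functions in which the one-loop partition sums of the NS and R sectors are naturally expressed (I am quoting the standard modern form, not Jacobi's 1829 notation), reads\[

\vartheta_3(0|\tau)^4 - \vartheta_4(0|\tau)^4 - \vartheta_2(0|\tau)^4 = 0,

\] and when the three terms are identified with the GSO-projected NS sector (the first two) and the R sector (the last one), it says precisely that the numbers of bosonic and fermionic physical states agree level by level.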

In fact, the equal number of states is a symptom of a symmetry – the spacetime supersymmetry – and this supersymmetry holds not only at the level of the free spectrum: it holds nonperturbatively, too. When the "somewhat dull, trivial, and confusing" world sheet supersymmetry was suddenly able to produce this remarkable, realistic, nontrivial, interacting supersymmetry in the spacetime, the physicists had to have a special feeling, indeed.

Note that a few paragraphs above, I mentioned that we declared the fermion-counting parity operators \((-1)^{F_L}\) and \((-1)^{F_R}\) to be elements of the gauge symmetry group so we had to assign to these operators all the "tasks" that elements of gauge groups always have: physical states have to be invariant and we must allow states whose periodicity holds "up to transformations by the gauge group element". There exists an alternative, Feynman-path-integral-based, "more controllable" way to describe why all these tasks have to be fulfilled together. It's called "modular invariance". The torus partition function has to be invariant under the 90-degree rotation of the torus (i.e. under a change of which of the two basis cycles of its first homology is called which). Equivalent mathematics applies to compactifications and orbifolds. In all these cases, we're adding "new sectors" (twisted strings, wound strings...) but we have to project the spectrum so that only the invariant states (invariant under the orbifold group, with quantized momentum in the compact dimensions...) are preserved.
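
In terms of the modular parameter \(\tau\) of the torus, the exchange of the two basis cycles mentioned above is the transformation \(S\) and the full modular group is generated by (again, a standard fact about the torus rather than anything specific to the GSO story)\[

S:\ \tau\to -\frac{1}{\tau},\qquad T:\ \tau\to\tau+1,

\] and the one-loop partition function has to be invariant under both generators, which is exactly the condition that ties the allowed sectors to the required projections.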

Those things look totally crystal clear today but it must have been very interesting for the first discoverers to find these insights in the shadows of ignorance, when the shining light of the truth suddenly emerged. I know this feeling, it's a special one.

RIP, Dr Olive.

Five greatest physicists' sex scandals

This is an extremely, extremely light topic. Popsci.com wrote a new article
5 Of Physics's Greatest Sex Scandals
A TRF guest blogger finds himself in a pretty good company.




Their list is the following:
  • Paul Frampton and his sweetheart in Argentina, a cocaine-equipped fake of the Czech-born model Denise Milani (a recent news video at YouTube)
  • Albert Einstein had a relationship with his cousin Elsa while he was still married to his hard-working first wife Mileva Marić
  • Marie Curie, who fell in love with her recently deceased husband's ex-student, Paul Langevin. The French media called her a homewrecker and a Jew although she was neither and although the latter couldn't have possibly been insulting, anyway
  • Erwin Schrödinger lived with both his wife and his mistress, and he had several examples of the latter over the years
  • Stephen Hawking frequents sex clubs; any problem with that? And I think that Hawking must also be considered an extraordinary experimenter because he's been capable of having children despite his slightly constrained physical powers
Congratulations to the winners.

Tuesday 27 November 2012

It's wrong to worry about the "fiscal cliff"

Starting in January 2013, America has a chance to restore some balance to its federal budget, which has been getting worse since Clinton's surplus years around 2000 and which switched to seemingly permanent astronomical deficits after the 2008 recession.

If no bills were changed or added in the following 6 weeks, spending cuts would come into force and some temporary (mainly Bush-era) tax cuts would expire. The deficits would instantly start to improve, see the brown curve:



However, some people invented the term "fiscal cliff" for the relatively sudden improvement of the U.S. budget deficit in order to suggest that this good thing is actually a bad thing. It's a fiscal cliff but the sign is such that the U.S. may start to climb out of the deep Greek $hit it's been sinking into. But a "hike from the mud in the Mariana Trench towards the summit at Mount Everest on a sunny day" doesn't sound as catchy as the "fiscal cliff".




The advocates of this "evil fiscal cliff paradigm" are mixing some of the usual crackpot left-wing economic misconceptions. They say that too large a part of the reduction of the deficit would be paid by the middle class; and they say it's bad to improve the finances discontinuously.

You may see that the first complaint is just populist demagogy; and the latter is just a wrong model of economics. It's good that at least some writers, e.g. Michael Sivy in the Time Magazine and WSJ's Jonelle Marte, agree that it's wrong to worry about the "fiscal cliff".

Concerning the first objection, about 2/3 of the deficit reduction would be paid by the middle class. One is expected to be upset that it's a high percentage. I am amazed by such reasoning. The middle class is the class that mostly "owns" the political system of America – the class whose a$s must be licked by pretty much every realistic politician in the country – so it's obvious that it must pay the majority of the money, too. The middle class is estimated to represent 45% of the U.S. population but it depends on the definitions.

However, the "poor class" doesn't pay almost anything and consumes a lot of the state expenses. So you only have the middle class and the "rich class", or whatever is the name. The "rich class" clearly can't pay all (and not even most of) the public expenses even though it's clearly a favorite dream of socialist populist demagogues who have forgotten that socialism only works until you run out of other people's money.

So in a working economy, most of the burden simply has to be carried by the middle class, by the "largest productive class". It's almost a tautology.

But it's obvious that there will be differences between left-wing leaders and right-wing leaders. Right-wing leaders will try to introduce some kind of a fair distribution while the left-wing leaders prefer a criminal system that steals the money from the successful and throws it to the unsuccessful, jealous, lazy 47%, so to say. It doesn't work, anyway, because when a highly paid employee is needed in a company, the company – and, therefore, indirectly the poorer people in the company – must pay him whatever is needed so that he has a sufficient amount of money after taxes. Let me not talk about the distribution of the tax burden because it's a matter of political and moral preferences. Quite generally, the progressive tax system is silly. The real competitors should be the flat tax and various types of regressive systems but it's almost politically incorrect to even pronounce this trivial fact these days.

Instead, let's focus on the other objection, namely the opinion that it's unhealthy for the economy to reduce the budget deficit abruptly. It's just silly. The budget deficit is something that should be just a small perturbation of the country's GDP – some random number close to zero or 3 percent of GDP or whatever deficit you find tolerable in the long run. Because and as long as it is a nearly infinitesimal random number, it can't hurt when it's changed abruptly. That's what random numbers do (if they're white noise).

Needless to say, this description doesn't hold for the U.S. budget because it's not a small random number. Instead, it has become predictably huge. And it's a major problem. When the budget deficit is immediately brought close to zero, it's simply a cure for a bad condition so it can't possibly be bad. When your stomach's pH differs from the recommended value (about 2-3), it doesn't hurt when you bring the pH to the right value instantly, i.e. discontinuously. Note that when expenses and revenues jump discontinuously, people's "worth" is still continuous. One needs to go to the second derivative of the "worth" to see a delta-function and it's normal for the second derivative of the "worth" to have a similarly singular behavior. Trying to make even higher derivatives smooth would be even more preposterous.
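
In formulas – this is just a schematic restatement of the previous sentences, not a model of any actual budget – if the flows jump at the moment \(t_0\) of the policy change, we have\[

\frac{dW}{dt} \sim \Theta(t-t_0) \quad\Rightarrow\quad \frac{d^2W}{dt^2} \sim \delta(t-t_0),

\] so the "worth" \(W(t)\) itself stays perfectly continuous and only its second derivative is singular.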

Now, when the government abruptly reduces its large budget deficit, it surely leads to a drop of the GDP. The GDP is the most inclusive measure of the economic activity which is why it's used as the measure of the wealth of countries by those who respect economics as a quantitative science. And so do I. However, the health of the economy is only an increasing function of the GDP assuming that certain imbalances are zero. If they're not zero, it's a subtle question how the most natural definition of the "health of the economy" should treat these imbalances.

Let me propose the following measure to replace the naive GDP:
LMGDP = GDP – Budget_deficit
If there's a surplus, LMGDP will be greater than GDP but I kind of doubt that the U.S. will need this extra comment anytime soon. ;-)

What is the logic behind LMGDP? It's simple. When there's some budget deficit, it means that the government pays for certain services or products that would be unsalable if the deficit were what it is supposed to be, namely zero. Imagine that the most natural way to eliminate the deficit is to reduce the cheques that the government is directly paying to various groups of citizens who almost instantly spend the cheques. If you reduced these cheques to balance the budget, these people would buy less and the production would have to drop by the same amount (let's assume they're buying purely domestic services and products, for the sake of simplicity, although I realize it's not the case – but I don't want to discuss the trade deficit now).

So if the distortion of the market conditions caused by the budget deficit were eliminated, the GDP would drop approximately by the budget deficit itself. That's why LMGDP is a nicer measure of the "undistorted economic activity" in a country. One simply wants to subtract the "fake activity" that was really paid for by subsidies and no one would want it if the conditions weren't distorted.

So the right question isn't what GDP will do when the budget deficit is abruptly reduced; a much better question is what LMGDP does. And I am actually not sure whether the policies that may come into force since January 2013 will increase or decrease the LMGDP. If they will increase LMGDP, i.e. reduce the GDP by less than the amount they subtract from the deficit, then the changes that may apply in January 2013 are clearly a good thing in my understanding of the situation.

Incidentally, it's fun to think about the value and evolution of the Greek LMGDP instead of GDP as a function of time. When the budget deficits become significant, LMGDP gives you a completely new viewpoint from which you may evaluate the situation. In particular, it's useful to quantify the countries' debt as a percentage of the annual LMGDP. For countries like Greece, you will get a much larger – and much more realistic – percentage. The Greek "actual" or "undistorted" GDP is actually much smaller than the conventionally used GDP, and LMGDP is one way to quantify it (although I think that an even lower figure than LMGDP encodes the "corrected equivalent GDP" of Greece). That's why the percentage we often use – debt at 160% of the GDP, and so on – understates the severity of the actual problem, a greater-than-visual-field problem that will only be seen in its entirety once Greece actually starts to cure the imbalances, if it ever does. Whatever currency Greece uses, if it doesn't go bankrupt, it will ultimately have to see that the actual "undistorted GDP" – e.g. LMGDP, which isn't artificially doped by the budget deficit steroids – is much smaller, and the debt is therefore a significantly higher multiple of the undistorted GDP.
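
A purely illustrative example with made-up round numbers (not actual Greek statistics): if the debt stands at 160% of the GDP and the deficit at 10% of the GDP, then\[

\frac{\rm debt}{\rm LMGDP} = \frac{1.60}{1-0.10} \approx 1.78,

\] i.e. about 178% instead of the advertised 160%, and the gap between the two figures grows with the size of the deficit.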

If the U.S. turns out to be unable to bring its deficit to tolerable levels closer to zero – whether or not its economists fabricate new propaganda "explaining" why such a deficit reduction is a bad thing – it will only mean one thing: that the country's problem is really a long-term one and serious investors should trust the country much less in the future than they have trusted it so far. If America isn't capable of swallowing the "fiscal cliff pill" sufficiently quickly, then it's a country that is addicted to debt and heading towards the Greek-like "fiscal chasm" – and be sure that the sign of this chasm is opposite to the sign of the good old fiscal cliff.

And that's the memo.

Martin Rees' center studies 4 worst threats for mankind

Climate change is on par with robot uprising

The cataclysm on December 21st, 2012 is less than a month away and I am regularly asked by people in real life as well as those on the Internet whether a particular doomsday scenario they read about will happen. They are just polite when they ask; of course, even if I explain to them that they don't have to worry, they keep on $hitting into their pants, anyway. ;-)



Nude Socialist, Fox News, BBC, AP, and the rest of the pack told us about CSER.ORG, a center founded by the Lord [Martin] Rees of Ludlow, among others (including a co-founder of Skype), that will study the huge one-time risks that could make us extinct and that everyone underestimates. What are they? Well, they are:
  1. robot uprising
  2. Hiroshimas all over the world
  3. artificial germs making all of us sick and die
  4. global warming
As you can see, the global warming hysteria finds itself in a good company of comparably (un)realistic worries.




I am always amazed how disproportionate impact various crazy people decorated by the queen may have on the society.

A senile woman who has lived a materially wealthy life – greetings, Elizabeth – attaches a medal to a Martin. He goes to the pub, gets really high, talks to his friends about four greatest threats for the humanity, and when he collects the answers from his hopelessly drunk buddies, they establish a center that instantly attracts at least millions of dollars to study these four phrases pronounced in the pub.

Note that the identity of the four most dangerous one-time threats for the mankind is "inserted" as a defining description of the new center. So if you found out that there exists a much more serious or much more likely threat that could exterminate the life on Earth, you wouldn't be welcome. Sorry but this is not the scientific approach. It's a corrupt scheme to use money and influence to promote and strengthen predetermined memes, fears, and prejudices.

In hundreds of articles, this website – and many others – has demonstrated that the idea of a threatening "climate change" is a preposterous delusion believed by the uneducated ones and promoted by the ideologically and financially motivated people who don't really believe what they're saying. What about the other three threats?

Nuclear war

Concerning the nuclear holocaust, I think that there's a very limited number of countries that possess a nuclear arsenal capable of a "truly global" destruction. And to activate it in this global way, active and deliberate collaboration of many people would be needed. It can't quite be excluded that weapons could be activated so that almost all of Russia is flattened. But with apologies to our Eastern Slavic readers, this would still be far from a threat of human extinction. I believe that there are no real plans that would detonate the weapons "everywhere", which is what would be needed for mankind to go away, and it wouldn't be easy for a group of outsiders to launch such a process. And even if you could explode nuclear warheads in every square mile of the Earth's surface, many people and nations would probably still have and apply tricks to save their skins.

We may see some local usage of nuclear weapons in the foreseeable future but if so, we will be reminded how extremely far a single nuclear warhead (or three of them) is from human extinction. It's powerful but it's just a somewhat stronger weapon, not a button able to destroy the planet.

Artificial germs

There are various germs and new ones will be produced both by Mother Nature's evolutionary processes and by biologists. I am actually not sure which of them represents a more genuine threat for us at this moment although I know which of the two threats is growing more quickly. Again, it's hard to imagine how new viruses or bacteria could bring us global extinction. By locality, they're not able to be everywhere. If the new germs act too strongly or quickly, people and nations will immediately introduce harsh measures to protect themselves against the infection (and the infected ones).

Nature is making progress in improving the resiliency of the evil germs but this progress has arguably not sped up too much. Our ability to artificially engineer viruses or bacteria has improved dramatically and will improve even more quickly in the future, I guess. But the ability of biologists to do "good things" and detect and kill the germs and diseases is improving equally quickly. So even though the threats may have gotten more sophisticated, our ability to resist has improved by a more important increment. The total result is, I believe, that mankind has gotten and is still getting more resilient towards infectious diseases, including the (hypothetical) man-made ones.

I am much more worried about the "gradual" negative developments when it comes to our physical, intellectual, and moral qualities.

Cybernetic revolt

Machines may do lots of things and they're already more intelligent than us according to many somewhat useful measures of intelligence (but clearly not all of them). However, we're still in the regime in which the machines are our slaves.

We must realize that unlike animals, the machines haven't evolved to egotistically protect their interests. They have "evolved" (in the engineering labs) to increasingly efficiently help the interests of some humans – those who built them or those who paid for the construction.

Despite the immense technological progress I am expecting, I don't see a reason why anything should change about the previous paragraphs. Machines are, by definition, man-made objects and the reason why people build them is that these machines should bring something good to the people, at least some of them. It's waterproof logic.

So even though the power of humans is already immensely amplified by technology – and this amplification will get even stronger in the future – the people are still ultimately in charge of things because that's why and how the technology was built and improved.

Of course, it's plausible that there is already a lab that is building robots "who" are trained to protect their own interests – rather than the interests of [some] humans – and prepare some kind of a "robot uprising". That's great but these robots are still tools belonging to the crazy engineer who is building such a thing. So this person and his assets may be considered the "true enemy".

The intent ultimately comes from a human or humans. I can't imagine how it could not be the case. As long as we are not worried about the human rights for robots, we shouldn't be worried about anthropomorphic threats posed by the robots, either. And if we ever wake up in the future to find out that robots are (at least) our peers because their artificial intelligence resembles ours, our logic will be transformed and the feelings about our identity will be blurred, too. We won't think of robots as someone "completely alien".

In fact, I am sure that the "discrimination against robots" will be viewed as a bad thing by many people – in the same sense in which many people fight against "discrimination against other races" and similar things. When the artificial intelligence gets this advanced, the problem "how to resist a robot uprising" will be transmuted into a moral problem, "should we try to suppress the robots' free behavior", anyway. Martin Rees' center will be viewed as a controversial center defending some kind of "anti-robot racism" and will surely lose the (now undisputed) label of a "center helping every human to fight some threats". After all, all of us – Americans, Chinese, men, women, and Hyundai robots – will be neighbors and fellowmen who deserve dignity. If robots ever take over, the reason won't be our lack of knowledge about the ways to stop the revolt but our lack of will to do so.

Interdisciplinary centers produce babbling, not hard science

While the doomsday scenarios are lots of fun to think about, I have explained some of the reasons why I think that a center actively investigating these risks is an irrational enterprise. But even if the threats were genuine, I would have serious doubts that Martin Rees' center would attract the most relevant nuclear experts, microbiologists, atmospheric physicists, and artificial intelligence experts who would be the leaders in the "fight against these threats". The center looks like an insanely multidisciplinary institution and I simply don't believe that the most relevant, advanced, and reliable insights on bacteria, nuclear weapons, artificial intelligence, or atmospheric physics would be born in such an environment that is full of other distractions.

It seems much more likely that the most important discoveries relevant for the four threats – and other threats – would be made by scientists who intensely focus on their field and who are trying to find important truths and mechanisms, not necessarily constrained by the predetermined motivation to "save the mankind".

This was my last piece of evidence that Martin Rees' center is a waste of money.

Monday 26 November 2012

Many worlds vs positivism and symmetries

About a dozen TRF articles mention Hugh Everett and his "many-worlds interpretation" of quantum mechanics. Exactly three months ago, I showed that "many worlds" don't exist as long as one uses the standard rules of quantum mechanics to answer the very question about their existence.

If we use the same rules to answer the question "Do many worlds exist?" as we use for answering questions about the electrons' spins and other questions "obviously accessible to the experiments", the answer of quantum mechanics is a resounding No. There can't be any "multiple worlds". After all, the splitting of the worlds would correspond to a quantum xeroxing machine and that's prohibited by the linearity of the evolution operators in quantum mechanics. Also, the conservation laws would be violated whenever the worlds split, assuming that they were not split before the "measurement" or another critical moment. And if they were split in advance, the interpretation would violate causality because the "Everett multiverse" would have known about the measurements in advance.
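To see the xeroxing point explicitly – this is just the standard one-line no-cloning argument with normalization factors omitted, nothing new – suppose a linear operator \(U\) copied two orthogonal states, \(U\ket\alpha\ket{0}=\ket\alpha\ket\alpha\) and \(U\ket\beta\ket{0}=\ket\beta\ket\beta\). Linearity then forces
\[ U\,(\ket\alpha+\ket\beta)\,\ket{0} = \ket\alpha\ket\alpha + \ket\beta\ket\beta \;\neq\; (\ket\alpha+\ket\beta)(\ket\alpha+\ket\beta), \]
so no linear evolution can duplicate a generic superposition, i.e. produce a faithful second copy of "the world".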

Quantum mechanics unambiguously says that the linear superposition of orthogonal states, \(\ket\alpha+\ket\beta\), doesn't mean that "both the things described by \(\ket\alpha\) and \(\ket\beta\) exist at the same time". Instead, the plus sign means "OR", not "AND". The state says "\(\ket\alpha\) is possible AND \(\ket\beta\) is possible" but when we want to omit the words "possible", the only right translation is "Nature realized \(\ket\alpha\) OR \(\ket\beta\)". It's the usual probabilistic mixture. Well, there is a difference: in quantum mechanics, we first add the complex probability amplitudes, and only then do we square the absolute values of the results to get the probabilities. In classical statistical physics, we sum up the probabilities directly so the "mixed" or "interference" terms would be absent.
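A minimal worked example of that difference (the amplitudes \(a\) and \(b\) are just my shorthand): if the amplitudes to obtain some third outcome \(\gamma\) from \(\ket\alpha\) and from \(\ket\beta\) are \(a\) and \(b\), then
\[ P_{\rm QM}(\gamma) = |a+b|^2 = |a|^2 + |b|^2 + 2\,{\rm Re}(a^* b), \qquad P_{\rm classical}(\gamma) = |a|^2 + |b|^2, \]
and the cross term \(2\,{\rm Re}(a^* b)\) is the interference that distinguishes the quantum "OR" from an ordinary classical mixture.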

So the "many worlds" are obviously prohibited when the rules of quantum mechanics are being used for all physical questions, including the questions that some people could be religiously prejudiced about. (I really think it's analogous to religious beliefs because many otherwise rational people abandon all rational thinking when it comes to questions that have the potential to unseat or otherwise disturb their God. Their boundary behind which rational thinking is prohibited is as arbitrary and surprising as it is for those who love to refuse the quantum character of quantum mechanics.) In the following text, I will discuss another part of this issue and explain that if you wanted to use some non-quantum, more classical rules in which quantum mechanics would be embedded, you would be forced to defend an indefensible theoretical framework, too.




When Shannon met Brian Greene a few months ago, he mentioned that he had a "little disagreement" with your humble correspondent about the foundations of quantum mechanics. It surely sounds nice and diplomatic and I endorse the diplomatic content of the quote. However, when it comes to the actual beef, I must say: A little disagreement? Galileo and the Pope had a little disagreement. ;-)

Let me discuss several points related to the indefensibility of the types of "many-worlds interpretation" that try to construct a "more realist model" into which quantum mechanics is embedded as an approximation:
  • it opens a previously non-existent question, "Which events may be classified as a good enough measurement so that they have the right to split the world?", one that can't have an objectively meaningful answer; while the Copenhagen quantum mechanics avoids such problematic questions because the presence of a measurement (or "collapse") is purely a subjective matter, any "realist interpretation" makes the conflict included in this question sharp, and therefore requires a qualitatively different treatment of "quantum objects" and "classical objects"
  • too finely grained a tree of the "many worlds" violates the uncertainty principle
  • the very existence of the "other worlds" violates the positivist, empirical approach to science
  • the Lorentz symmetry is inevitably violated in any "explicitly enough well-defined model" of the splitting of the Universe
  • none of the problems or signs of incompleteness of the probabilistic Copenhagen-like interpretation exists.
These are the reasons that force every impartial physicist who understands these things to conclude that the "many worlds" can't exist in any form and a non-realist probabilistic interpretation of the objects in quantum mechanics – the amplitudes are just tools to find probabilistic answers to questions that observers may subjectively ask, not a reflection of a fundamentally objective reality – is the only plausible outcome of a rational evaluation of the empirical evidence, the mathematical framework, and the logical relationships between them.

Proper QM doesn't need an objective definition of what a measurement is; MWI does

Three weeks ago, I argued that quantum mechanics is a tool to probabilistically answer questions that may be "subjectively" asked by observers, not a tool to describe the objective state of reality (which doesn't exist). Despite this fundamental rejection of objective reality, quantum mechanics is fully compatible with objective science. It's because the subjective nature of knowledge adds no inconsistency to science – and because the equations of quantum mechanics guarantee correlations in the results that multiple observers obtain when all of them ask the same question.

But the key feature of the fundamentally subjective theory called quantum mechanics is that if no questions are being asked, no questions need to be answered. If the objects in our environments aren't asking any questions for us, we don't need to answer them and we don't need to imagine that the world is doing anything else than evolving the probability amplitudes according to the continuous Schrödinger's equation (or equivalent equations in other pictures). In particular, there's no "collapse" if there's no subject asking questions and learning answers! What is often called the "collapse" is the process of learning and it is a fundamentally subjective process.
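For the sake of explicitness, that continuous evolution is nothing more and nothing less than Schrödinger's equation,
\[ i\hbar\,\frac{\partial}{\partial t}\,\ket{\psi(t)} = H\,\ket{\psi(t)}, \]
and as long as no question is being asked, this equation is the whole story.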

In particular, quantum mechanics is compatible with numerous "theories" about the question "who possesses consciousness and is allowed to perceive things". As long as "consciousness" is a totally immaterial, spiritual process (i.e. as long as we don't talk about its measurable, material manifestations, which are clearly as accessible to science – especially neuroscience – as any other material processes), the difference between these different philosophical theories is unphysical. There's no experiment, even in principle, that could answer this question. So one may give any answer to the question. In particular, solipsism is fully compatible with quantum mechanics (much like totally opposite philosophies, e.g. one in which you are "one soul" with all the macroscopic objects on Earth and your unified body perceives everything; it just doesn't allow the information to be sent from one part of the body to another). You may consider the whole external world – including other people – to be a collection of "dead machines" that just obediently evolve according to Schrödinger's equation.

Let me emphasize that quantum mechanics doesn't force you to believe that other people are "qualitatively different from you". Clearly, science – especially Darwin's evolution – makes it clear that there's no such qualitative difference between two people as all people share their ancestry. Instead, it allows you to dismiss questions about other people's consciousness – in a purely "spiritual", operationally inconsequential interpretation of the word – as unphysical questions.

The alternative may look more "materialist" which is intriguing for many people but it is a source of a terrible problem. Everett's interpretation suffers from this terrible problem just like any other "realist interpretation" of quantum mechanics, any other theory that tries to present the probabilistic, subjective, non-realist character of quantum mechanics as an illusion following from a hypothetical "realist [classical] theory".

Why is it a terrible problem? Because if you assume that the world is objectively found in a state, you need to prevent it from entering (or staying in) "unfamiliar" complex linear superpositions of macroscopically distinct states. If you assume the world possesses "objective reality" at the fundamental level, the process that does this job – "physical collapse", "splitting of the Universe", or anything of this sort – is a process that simply takes place at a given moment or it doesn't. In principle, all good physicists should ultimately agree whether this process has occurred or not.

But that means a catastrophe. It means that the coherent quantum mechanics has to objectively refuse to hold behind a critical line. One needs to abandon it, modify it, and so on. But such a modification is totally unjustifiable. There doesn't exist the tiniest glimpse of evidence that quantum mechanics could fail to work for some objects larger than \(X\). In fact, it seems obvious that all similar conceivable modifications of quantum mechanics would be either incompatible with the basic observed data or internally inconsistent.

At this point, and in many others, I am flabbergasted how upside-down the explanations by Brian Greene (and others) are. In his book The Hidden Reality, Greene tried to claim that the orthodox probabilistic interpretation of quantum mechanics has to establish an artificial boundary between the world of phenomena described by quantum mechanics and those described by classical physics.

But as we have seen two and three paragraphs above, the orthodox probabilistic interpretation of quantum mechanics is, on the contrary, the only conceivable explanation that does not have to do anything of the sort. In this approach, quantum mechanics is always valid. Bohr et al. just correctly said that in addition to quantum physics, the classical picture has to be also (approximately) right for questions whose answers we want to treat as pieces of classical information. But that doesn't mean that there's new physics that invalidates quantum mechanics. It just means that there are two descriptions, quantum and classical, that start to coincide and co-exist for large enough systems.
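One standard way to see the two descriptions coinciding for large systems – Ehrenfest's theorem, which I am quoting merely as an illustration, not as Bohr's own wording – is that the quantum expectation values obey (approximately) the classical equations of motion:
\[ \frac{d}{dt}\langle x\rangle = \frac{\langle p\rangle}{m}, \qquad \frac{d}{dt}\langle p\rangle = -\left\langle \frac{\partial V}{\partial x}\right\rangle \approx -\frac{\partial V(\langle x\rangle)}{\partial x}, \]
where the last approximation is excellent whenever the wave packet is narrow relative to the scales over which the force varies.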

On the contrary, as I said, the assumption that objective reality exists means that one must believe that Nature has a very particular "critical line" behind which the coherent rules of quantum mechanics that we know from the microscopic world cease to hold. It means that the "objectivist interpretations" of any kind have to assume the existence of new phenomena that aren't observed and that aren't justified by anything whatsoever – except for philosophical prejudices.

When do the worlds split?

The "many worlds interpretation" is a typical attempt to find an objectivist reinterpretation of quantum mechanics – at least that's how e.g. Brian Greene presents it and I totally think that his presentation makes much more sense than any other tirade about Everett's picture (even though it's ultimately totally wrong, too).

It assumes that there must objectively exist "many worlds". In one of them, you measure Stalin to have won the war, in another one, you measure Hitler to have won the war, and so on. How many such universes are there? How finely do you have to divide the universes to make the MWI work? Roughly speaking, you want every small measurement or every small "macroscopic process" to produce new split worlds but what is counted as a measurement or a macroscopic process?

It's a very subtle question and it's easy to see (although some people may prefer not to look) that whatever answer you offer – except for the correct answer, namely that the world never splits – produces insurmountable contradictions. The basic problem is that if you assume that the objective "splitting of the worlds" is the reason why we perceive sharp answers instead of "fuzzy superpositions" (note that the actual reason is that the wave function describes probabilistic distributions with "OR", not a fuzzy shape of objects) and if your "splitting" occurs too rarely (more precisely, less frequently than infinite frequency, because any observable may be potentially observed by "someone"), the world will be more fuzzy than the observations indicate. If your splitting is "too frequent" (more frequent than zero frequency, to be precise), your picture will contradict the uncertainty principle. Note that theories with an "objective collapse of a wave function interpreted as a classical wave" suffer from exactly the same problem. One may say the same things about them.

In the MWI approach, the splitting of the worlds is an objectively important moment. You know, it's not easy to create many new worlds that contain everything that our world does. ;-) An MWI advocate may be tempted to link this splitting to decoherence. However, decoherence is never perfect. In a basis of states we find natural (e.g. because it's local), the density matrix never quite becomes diagonal. There's always a nonzero (although expo-exponentially decreasing) risk of "recoherence" and if you want to avoid this risk, you're never allowed to mechanically divide the worlds.
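Schematically – the decay exponent \(\Gamma(t)\) below is just my placeholder for whatever a given environment produces – the reduced density matrix of a two-state system looks like
\[ \rho(t) \sim \begin{pmatrix} |c_1|^2 & c_1 c_2^*\,e^{-\Gamma(t)} \\ c_1^* c_2\,e^{-\Gamma(t)} & |c_2|^2 \end{pmatrix}, \]
and while \(e^{-\Gamma(t)}\) quickly becomes absurdly tiny for macroscopic systems, it never becomes exactly zero, so there is no sharp moment at which a "clean split" could be declared.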

A related problem – it's actually exactly the same problem – is that when you divide the worlds into "many worlds" too finely, your description will violate the uncertainty principle. Why?

Almost three weeks ago, I discussed consistent histories. Imagine that you observe the trajectory of a particle in a cloud chamber and you decide that the measurement of the position took place every microsecond, at \(t=0.0\), \(t=1.0\times 10^{-6}\) seconds, and so on. At each moment, you also measure the momentum of the particle with the best accuracy \(\Delta p\) that is compatible with the uncertainty principle. So far so good.

But the precise moments when the universe splits don't seem to be God-given and objective. So another person may think that the splitting occurred \(10^{-16}\) seconds after each splitting according to you. Now, there are 10 billion other people who assume the same microsecond spacing but whose "special points" are shifted by multiples of \(10^{-16}\) seconds. Can we reconcile the interpretations of all these people?

You are forced to say that the splitting actually occurs every \(10^{-16}\) seconds. The worlds split every time when at least one person thinks it does. Note that with a larger number of people, I could be forced to accept an arbitrarily fine splitting. But that's a problem because that would imply that in a particular branch of the "many worlds multiverse", the particle's position is measured pretty much at every moment. But if it's so, the uncertainty principle dictates that the momentum can't possibly be determined accurately at all. But each person above actually measured \(p\) rather accurately. By increasing the density of the "split world moments", we violated the uncertainty principle by an arbitrary factor. With 10 billion people, \(\Delta x\cdot \Delta p\) was \(\hbar/10,000,000,000\). Too bad.
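The scaling may be written down explicitly (the numbers are just the ones from the example above):
\[ N = \frac{10^{-6}\,{\rm s}}{10^{-16}\,{\rm s}} = 10^{10}, \qquad \Delta x\cdot\Delta p \sim \frac{\hbar}{N} = \frac{\hbar}{10^{10}}, \]
which sits about ten orders of magnitude below the bound \(\hbar/2\) demanded by the uncertainty principle – and nothing prevents us from making \(N\) even larger.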

So we're allowed to imagine that for macroscopic bodies, the histories are "coarse-grained" and the equivalence classes are treated classically. But we simply can't afford to make the graining too fine. At the same moment, there are clearly no "preferred boundaries" that would separate the space of possible histories into "little cubes" everyone may agree upon (much like the phase space isn't objectively divided into particular "only correct" rectangles of area \(2\pi\hbar\) that everyone must agree upon). It follows that to invent a rule answering the question "When do the worlds split?", you either violate the uncertainty principle or invent some arbitrary rules that contradict the Lorentz symmetry, rotational symmetry, and that simply depend on many totally arbitrary decisions that no one believes may have an objective significance.

Proper quantum mechanics, as pioneered by Heisenberg, Dirac, Pauli, and a few other friends, and as godfathered by Bohr, cleverly avoids all these problems. It doesn't have to divide the phase space into "canonical rectangles with the only right shapes" because it says that such divisions – and the choice of bases – depend on the situation and the questions that an observer asks. They are not objective in character. The same is true for the separation of the "space of possible histories" which is just a harder, infinite-dimensional version of the task to "divide the phase space into the only right rectangular boxes". The reason why we may dismiss the hopeless task of looking for the "precise boundaries in the phase space" is that no objective boundaries exist. They only arise subjectively, in someone's head. But that means that the transition from the linear superpositions to the sharply perceived outcomes of experiments – the "collapse" – is subjective, too.
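The statement about the phase space may be made quantitative: semiclassical counting only fixes the density of states,
\[ N_{\rm states} \approx \int \frac{dx\,dp}{2\pi\hbar}, \]
i.e. an area \(2\pi\hbar\) per state, but it says nothing whatsoever about where the boundaries of any particular cell should be drawn or how the cells should be oriented – that choice is not an objective fact about Nature.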

Positivism

Philosophy is never a good science. But when it comes to philosophies, many laymen in quantum mechanics – and even people not considered laymen in quantum mechanics by the society or by themselves – often think that "realism" is the right philosophy behind modern science. This viewpoint, based on millions of years of our everyday monkey-like experience, has been strengthened by the 250 years of successes of classical physics and it was – unfortunately – energized by Marxism which repeated the untrue equation "science = materialist ideology" many times. Marx, Lenin, and related bastards surely belong among those who have encouraged people to never leave the mental framework of classical physics. But it is "positivism" which is the philosophy that is closest to the founders of the modern science, especially relativity and quantum mechanics.

Positivism says that all reliable knowledge – the truth we are allowed to become fans of – has to boil down to empirical observations and mathematical and logical treatments of such empirical data. It sounds uncontroversial among science types but many of them don't realize how dramatically it differs from the "materialist ideology". In particular, positivism assumes nothing about the "existence of objective reality".

I am not going to worship positivism because in the most general sense I have just mentioned, it represents the absence of any knowledge with beef rather than knowledge itself (this absence is important at the beginning of research – which has to begin with a tabula rasa without prejudices – but it's bad if someone's mind stays tabula rasa even a long time after that). That's also why e.g. Werner Heisenberg heavily criticized positivism as a philosophy – it seemed vacuous to him. It's kind of paradoxical because his beliefs in physics were absolutely positivist. But he rightfully criticized people who don't know or believe anything and who make a living out of not discovering anything. ;-)

Incidentally, Auguste Comte – despite his being the founder of this "pro-science" positivist philosophy – had crazy opinions about science, too. He once declared that the chemical composition of the stars would forever remain outside science because we couldn't travel to the stars. It took less than a decade after his death before spectroscopy told us everything about the composition of stars (at least their surface). It's fun to be eager to say that "questions such as XY are inevitably outside science" except that many (or most) statements of this kind (but not all of them) are wrong. Whether some question is scientifically meaningful is a subtle question and difficult science research is needed to resolve it (the question is difficult because we don't know all conceivable experiments and all of their conceivable relationships to the "interesting statements" in advance): philosophical prejudices are never enough. They're not enough if your answer is Yes and they're not enough if your answer is No!

But let's return to Everett's picture. It assumes that there objectively exist "other worlds" except that pretty much by definition, the "splitting" is an irreversible process. Because you can't return to the past, you're also unable to return to the "crossroads" from which other worlds originated. But that's the only way to get to the parallel universes, so you can't "get there" (or interact with them) at all.

Now, if this is universally the case, even in principle – and MWI seems to be crucially based on this assumption – then the question whether the other branches exist is outside science, because of the basic positivist definitions of a reliable truth. We can find new ways to study the sunlight but we can't find ways to return to the past or change the facts or events in the past which is why we're sure that the other worlds will remain empirically inaccessible.

It's an OK argument but I still find it more important that there can't be any sensible "set of rules" that would tell you "when the worlds split". Even if we can't observe other worlds, we may still accumulate very strong indirect evidence that they do exist or they don't exist. I need the previous discussions about the "too finely grained histories" and other things to collect data that are relevant for this question whether the other worlds exist. And the answer is No, they can't exist. Whether they're in principle observable or not, the question about their existence turns out to be available to scientific reasoning – one that boils down to the empirical data and their mathematical and logical treatment – and the answer is, inconveniently enough for MWI and other "realist" advocates, that these constructions can't exist.

Lorentz symmetry is doomed in MWI, too

Consider an EPR-style entanglement experiment. We measure properties of two entangled particles. The two events – two measurements – are spacelike-separated. So which of them occurs first depends on the reference frame.
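To see the frame dependence explicitly (the coordinates below are mine, chosen only to keep the formula short): if the two measurements occur at \((t_A,x_A)\) and \((t_B,x_B)\), an observer boosted by the velocity \(v\) assigns them the time difference
\[ t'_B - t'_A = \gamma\left[(t_B - t_A) - \frac{v\,(x_B - x_A)}{c^2}\right], \]
and for a spacelike separation, \(|x_B - x_A| > c\,|t_B - t_A|\), the sign of this difference may be flipped by a suitable choice of \(v\): the ordering is frame-dependent.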

However, if you literally imagine that the number of worlds is increasing at the moment of each measurement and the "tree of parallel worlds" gets ramified, it is possible to find out whether the first measurement occurred first, or the second measurement occurred first. Each measurement is a "vertex" in the branching tree of worlds and one of them has to be closer to the "root" of the tree in the past, so it occurred first. Because the branching of the worlds is an objective process, all observers should agree about which of the two spacelike-separated measurements occurred first. But relativity implies that different observers won't agree which of the two spacelike-separated events occurred first. That's a contradiction between MWI and relativity.

Again, proper orthodox quantum mechanics cleverly avoids this problem because nothing "objective" is changing during the measurement. The measurement is a change of the subjective knowledge which is why the ordering of the two changes of knowledge may depend on the observer – and on his reference frame. The subjective interpretation of the wave function and the "collapse" (learning) is a property of orthodox quantum mechanics that is necessary for it to agree with relativity in all physically meaningful, operationally answerable questions. This consistency is fully preserved because the "ordering of the collapses" is identified as a question that is only meaningful subjectively which is why different people (in different inertial systems) don't have to agree about the answer. And indeed, they don't agree.

The splitting of the worlds, much like "objective wave function collapses", inevitably leads to contradictions with relativity.

Quantum mechanics is complete and consistent

But the main reason why all the research into hidden variables, de Broglie-Bohm pilot waves, Ghirardi-Rimini-Weber collapses, and many worlds – among a few other, less widespread "realist approaches" – makes no scientific sense is that their main motivation, the hypothetical "flaws of the Copenhagen quantum mechanics", is completely invalid.

There is nothing wrong or incomplete about quantum mechanics – the new framework discovered within the Copenhagen school. The intrinsically probabilistic meaning of the wave function and related insights aren't a matter of "interpretations": they're inseparable properties of quantum mechanics, they're general postulates of quantum mechanics, they're really what makes quantum mechanics quantum. The very moment when someone starts to talk about "interpretations" as something that should be built "on top of quantum mechanics", he is already deluding himself and refusing all the insights that are actually summarized by the term "quantum mechanics". Quantum mechanics as a theory doesn't have any other interpretation than the Copenhagen interpretation, with the Born rule, or newer reformulations of pretty much the same thing.

All the other ideas that are called "interpretations of quantum mechanics" are really totally different theories – either toy models that only work as OK descriptions of some extremely limited and special situations but that can't be generalized to "full physics" or, which is more usual, just hypothetical theories whose existence is wishful thinking that many people protect as gold (even though one may easily show that no such theories compatible with the basic empirical data may exist). People who sell "interpretations of quantum mechanics" nearly as a whole new subdiscipline of physics are analogous to people who want to "interpret Darwin's evolution" for a long enough time so that the creator and the events from the Bible reappear again. They just don't like what the theory is saying – and be sure that it is saying that the wave function has to have a subjective probabilistic meaning.

As long as one is doing genuine science and not prejudiced ideology, quantum mechanics, a new framework for physics clarified by the Copenhagen school in the 1920s, isn't negotiable. It's as established as evolution, heliocentrism, or any other important pillar of science, and the people who try to relabel basic postulates of quantum mechanics as "illusions" are as deluded as creationists or geocentrists.

Redistribution of probabilities among many worlds

A few paragraphs above, I mentioned that most of the theories in the MWI business and related businesses don't really exist: their existence is wishful thinking. The dreamed-about derivation of the Born rule for the probabilities from the MWI framework is a great example of this unlimited, breathtakingly irrational wishful thinking displayed by the MWI advocates and other "realists".

After the quantum revolution, we know that all empirical evidence coming from repeated experiments may be summarized as measured probabilities of various outcomes of diverse experiments. Once again, all the empirical knowledge about the physical processes that we have may be formulated as a collection of probabilities. Probabilities are everything we may calculate from quantum mechanics (and from other parts of science, too). So they're surely not a detail.

Orthodox quantum mechanics promotes probabilities to fundamental concepts and uses the standard probability calculus – which existed a long time before quantum mechanics – to give you rules how to verify whether the probabilistic predictions of a theory are right. The basic laws of quantum mechanics are intrinsically probabilistic. The MWI framework tries to deny this general point. It has many worlds – the things we would normally call "a priori possible results" are pretended to be "real new worlds somewhere out there" – but at this moment, we haven't even started to discuss the empirical evidence yet. We only start to discuss the empirical evidence once we start to compare the measured probabilities with the theoretically predicted ones because everything we empirically know are probabilities!

But when it comes to the probabilities, MWI has absolutely nothing coherent to say. If the probabilities of "spin up" and "spin down" are 64% and 36% respectively, the MWI framework just gives you two worlds. How do you actually extract the numbers, 64% and 36%? There doesn't exist any proposed answer that makes any sense. It's clear that there can't exist any answer that makes any sense because probabilities can't be "derived" out of something that isn't probabilistic.
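Just to be explicit about where the Copenhagen numbers come from (the particular state below is mine, picked only to reproduce the percentages): for
\[ \ket\psi = 0.8\,\ket{\uparrow} + 0.6\,\ket{\downarrow}, \qquad P(\uparrow) = |0.8|^2 = 0.64, \quad P(\downarrow) = |0.6|^2 = 0.36, \]
the Born rule simply squares the amplitudes; replacing this calculation by "two worlds" throws the numbers 0.64 and 0.36 away.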

You may try to say that there are 64 worlds with "spin up" and 36 worlds with "spin down" and all of them are equally likely (or 16 and 9 worlds, or other multiples). Except that almost all probabilities predicted by quantum mechanics (and verified by experiments) are irrational numbers so you would need infinitely many "many worlds" to account for these irrational probabilities: a division into fractions just isn't good enough and all "rational approximations" would look so awkward that no one would believe they're right. Or you may just have two worlds and assign the unequal probabilities to them (and justify it, for example, by saying that the observer doesn't know "in what universe he is", and the right answer is just the one that happens to agree with the Born rule). But a new, "deeper" derivation of the quantum mechanical predictions – and all such predictions have the form of probabilities – is what you wanted to achieve. So if you mysteriously assign your fictitious worlds the probabilities by hand, you have clearly derived nothing. You have just invented a childish story compatible with your religion (one that may still be shown incorrect if you try harder).
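A generic example of the irrationality (a standard textbook formula, nothing exotic): a spin prepared "up" along the \(z\)-axis and measured along an axis tilted by the angle \(\theta\) yields
\[ P(\uparrow_{\hat n}) = \cos^2\frac{\theta}{2}, \]
which is an irrational number for a generic \(\theta\), so no finite collection of equally weighted worlds can reproduce it exactly.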

When someone tries to offer an allegedly "deeper" explanation of the probabilistic rules while ignoring the fact that he doesn't have any remotely conceivable explanation of what the probabilities could emerge from – and this is a superserious problem because all the data in science may be formatted as probabilities – it seems to me that the person is so prejudiced that it makes absolutely no sense to discuss these issues with him. What drives him is pure bigotry, a metaphysical, quasi-religious fanaticism ready to overlook all empirical evidence we have or we may ever add. Indeed, that's exactly what the anti-quantum zealots are doing all the time.

And that's the memo.