Tuesday 5 February 2013


The black hole firewall saga continues. The original paper by AMPS has collected 35 citations according to SPIRES. Most of the 10 most recent ones are papers primarily about issues different from black hole firewalls.

However, Stephen Hsu, who is also a blogger (article about the topic; don't confuse him with Steven Chu although there may be some similarities here, too), just posted a new, 3-page preprint that attacks the essential error made by AMPS:
Macroscopic superpositions and black hole unitarity
I believe his basic line of argument is equivalent to what I've been saying and it's also compatible with the Raju-Papadodimas paper and the Nomura-Varela-Weinberg papers. What's the basic fact that Hsu realizes and AMPS overlook?




The most important fact of this sort is that for a fixed pure initial state of the star, Alice (the infalling observer) always has some probability amplitudes (and some overall probability) that she falls into a black hole and some probability amplitudes (and some overall probability) that she doesn't. There are many possible microstates (possible evolutions) in the first group and many possible microstates (possible evolutions) in the second group, and the overall state vector is the sum of all the pieces from both groups. I discussed the presence of the "Yes" as well as "No" branches here (click).



I found this cartoon by a rudimentary Alice-Bob search on Google Images but its content is actually exactly what I need. Alice correctly tells Bob, who wants to marry her, that in a quantum world, both possibilities – she will marry him and she won't, she will escape him into a black hole interior or she won't – have nonzero probability amplitudes. As the article below discusses many times, Bob, much like Polchinski et al., seems to misunderstand this basic point about quantum mechanics. Well, this misunderstanding may be shared by most men as well because they often interpret the quantum mechanical women's "Maybe" as "Yes". :-)

Only the overall wave function must evolve unitarily from the initial state; only the total wave function (including pieces with Alice inside and pieces with Alice never inside any black hole) is subject to the constraint that the information should be preserved. This distinction is very important because, as Papadodimas and Raju emphasized, even exponentially tiny corrections to the operators (I mean that the matrix entries are exponentially tiny) may turn a pure state into a mixed one and vice versa.
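To see how fragile exact purity is, here is a tiny numerical sketch of my own (it is not from their paper; \(\epsilon=10^{-6}\) merely stands in for a genuinely \(e^{-S}\)-sized correction, which would underflow double precision for a macroscopic black hole):

```python
import numpy as np

# Toy sketch: a pure density matrix rho = |0><0| with a tiny admixture
# eps of an orthogonal state is no longer exactly pure: Tr(rho^2) < 1.
# eps = 1e-6 is a stand-in for an e^{-S}-sized correction, which would
# underflow double precision for a macroscopic black hole.
eps = 1e-6
rho_pure = np.diag([1.0, 0.0])                          # exactly pure state
rho = (1 - eps) * rho_pure + eps * np.diag([0.0, 1.0])  # tiny admixture

purity_pure = np.trace(rho_pure @ rho_pure)  # = 1 exactly
purity = np.trace(rho @ rho)                 # = 1 - 2*eps + 2*eps**2 < 1
print(purity_pure, purity)
```

The point of the toy model is just that purity is an exact, all-or-nothing property: corrections of any size, however tiny, can shift a state between "exactly pure" and "slightly mixed", so they must not be thrown away when checking unitarity.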

Hsu says that Alice observes no firewalls but it's important to discuss "which Alice" we mean when we say that she sees no firewall; and "which Alice(s)" are needed for the information to be preserved. The answers to the two questions differ.

The answer to the first question is that a particular Alice with a particular history, one in which she's sure that she's falling into a black hole (a single term in the overall wave function), will experience no firewalls. If we imagined that the whole quantum evolution (the evolved state) contained only this Alice (only this term), we could argue – following AMPS – that the information isn't conserved.

However, the unitary evolution of the initial state also contains, as we have emphasized, other "Alices", including "Alices" who avoid the black hole interior at all times. The word "contain" in the previous sentence is meant to represent the mathematical inclusion of a term in a sum; the correct "physical" interpretation says that the laws of quantum mechanics imply that it's possible for Alice to evolve differently and, perhaps, avoid the black hole throughout her life. The total wave function is a state similar to Schrödinger's cat, one composed of macroscopically distinct states that quickly decohere from each other. This state evolves unitarily from the initial state and will evolve unitarily to the final state of the Hawking radiation that remembers the initial state.

There's no need for the information to be preserved on the "branch" of one particular Alice. One particular Alice who fell into a black hole just made some measurements of certain observables \(A_i\) that identify her "branch" of the wave function. One of these observables \(A_0\) is the qubit that determines whether she fell into a black hole or not.

These observables \(A_i\), including \(A_0\), refuse to commute with the observables \(B_k\) that observers outside the black hole (spatially separated from the particular Alice who is inside) may measure. There's nothing wrong with this fact; the fact that these sets of operators \(\{A_i\},\{B_k\}\) don't commute with each other is the essence of black hole complementarity. As long as the probabilities of the measured values of \(A_i\) are nonzero in the eigenstates corresponding to the measured values of \(B_k\), there is no contradiction.
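As a two-dimensional cartoon of this point (my own illustration; \(\sigma_z,\sigma_x\) just stand in for \(A_0\) and some \(B_k\), they are of course not the actual black hole operators):

```python
import numpy as np

# Toy model: A0 ~ sigma_z ("did Alice fall in?"), B ~ sigma_x (an outside
# measurement). The two observables don't commute, yet every eigenstate of
# one assigns nonzero probability to every outcome of the other.
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

commutator = sz @ sx - sx @ sz
print(np.allclose(commutator, 0))   # False: the observables don't commute

# Probabilities of sigma_z outcomes in sigma_x eigenstates: all nonzero,
# so the noncommutativity produces no contradiction.
_, vx = np.linalg.eigh(sx)          # columns are sigma_x eigenvectors
_, vz = np.linalg.eigh(sz)          # columns are sigma_z eigenvectors
probs = np.abs(vz.conj().T @ vx) ** 2
print(probs)                        # every entry equals 0.5
```

The analogous statement for the genuine operators \(\{A_i\},\{B_k\}\) is exactly the "no contradiction" condition in the text: nonzero overlaps are all that complementarity needs.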

Polchinski et al. made the mistake of trying to restrict the discussion of all the observations (including those outside) to the subspace of eigenvalues \(A_i\) that a particular infalling Alice has measured. But that's just wrong: most of the states identified by other observations (e.g. those done outside the hole, for example by Bob) fail to belong to this low-dimensional space because the relevant states onto which Bob projects are eigenstates of operators such as \(B_k\) that don't commute with \(A_i\), so they can't possibly belong to a particular subspace of shared \(A_i\) eigenstates.

Steve Hsu also discusses one interesting point showing that the evolution into a superposition of macroscopically distinct states isn't just a formality that affects Alice in the same sense in which it usually affects Schrödinger's cat. The black hole itself is also evolving into a superposition of macroscopically distinct states, especially when it comes to the black hole's location.

When the black hole is sending the Hawking quanta in random directions, their total momentum more or less averages out. But it doesn't average out exactly. The momentum of a Hawking particle goes like the temperature \(T\sim 1/R\) and each particle reduces the entropy of the remaining black hole by something like \(\Delta S\sim -1\). After the Page time (half of the initial entropy has been evaporated away), the black hole has sent something like \(S/2\) Hawking quanta. The momentum of each was \(1/R\) with a random sign/direction. If we add them, we may see by the maths of random walks that the total momentum deposited to the recoiling black hole shooting the Hawking quanta is of order \(\pm\sqrt{S/2}/R\sim\pm 1/\sqrt{G}\) (the last expression only holds for \(d=4\) while the previous expressions held for a general \(d\)) so the average velocity is of order \(P/M=\pm 1/M\sqrt{G}\). Yes, that's smaller than \(c=1\) because the mass of the black hole is larger than the Planck mass \(1/\sqrt{G}\).
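The random-walk estimate may be checked numerically, a back-of-the-envelope sketch in \(d=4\) Planck units (\(G=\hbar=c=1\)) that drops all \(O(1)\) factors; the value \(M=10^6\) Planck masses is an arbitrary illustrative choice:

```python
import math

# Order-of-magnitude check of the recoil estimate in d = 4 Planck units
# (G = hbar = c = 1), dropping all O(1) factors: S ~ M^2, R ~ M, T ~ 1/R.
M = 1.0e6                 # black hole mass in Planck masses (arbitrary choice)
S = M**2                  # entropy ~ M^2
R = M                     # radius ~ G M = M

# ~S/2 quanta, each with momentum ~1/R in a random direction:
# the random walk gives a total recoil momentum ~ sqrt(S/2)/R.
P = math.sqrt(S / 2) / R  # ~ O(1) in Planck units, i.e. ~ 1/sqrt(G)
v = P / M                 # recoil velocity ~ 1/M, far below c = 1

print(P, v)
```

The printout confirms the scalings in the text: the recoil momentum is of order one Planck momentum regardless of \(M\), while the velocity is suppressed by \(1/M\) and therefore tiny for any macroscopic hole.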

When you finally multiply this velocity by the Page time \(t\sim G^2 M^3\), you get the estimate for the total distance that the black hole has traveled after the long Page time due to the recoils as it was shooting the Hawking bullets:\[

\Delta x \sim \pm \frac{1}{M\sqrt{G}} \cdot G^2 M^3\sim\pm M^2 G^{3/2}.

\] Now, all the formulae are only valid in \(d=4\). In the Planck units, the change of the black hole's location goes like \(M^2\), which is still a very large number, i.e. a long distance! It is longer than the black hole radius by a factor of \(M\sqrt{G}\). (I wrote the derivation directly into the blog and hadn't seen it previously; thank God, the result agreed with Hsu's underived claim.) So the Hawking quanta are pretty random but they easily combine into an uncertainty of the black hole's location that may become macroscopic – in fact, much longer than the black hole radius – after a sufficiently long time such as the Page time.
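A numerical sanity check of this conclusion (again a back-of-the-envelope sketch in \(d=4\) Planck units with \(G=1\), dropping \(O(1)\) factors; \(M=10^6\) Planck masses is an arbitrary illustrative choice):

```python
import math

# Planck units (G = 1), M in Planck masses. The recoil velocity from the
# random walk is ~ 1/(sqrt(2) M); the Page time is ~ G^2 M^3 = M^3.
# So Delta x ~ M^2 / sqrt(2), much longer than the radius R ~ G M = M.
M = 1.0e6
v = 1 / (math.sqrt(2) * M)    # recoil velocity from the random-walk estimate
t_page = M**3                 # Page time in Planck units
R = M                         # horizon radius

dx = v * t_page               # ~ M^2 / sqrt(2)
print(dx, dx / R)             # Delta x / R ~ M >> 1
```

The ratio \(\Delta x/R\) indeed comes out of order \(M\) (in Planck masses), i.e. huge, matching the claim that the uncertainty of the location becomes much longer than the black hole radius.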

(If you want to know, after the fraction \(f\) of the black hole mass evaporates away, \(\Delta x\) is generalized to \(f^{3/2}G^{3/2}M^2\); the simplified expression above was for the Page fraction \(f=1/\sqrt{2}\). It's not hard to derive the \(f^{3/2}\) dependence, I think: the velocity goes like \(f^{1/2}\) for the same reason why the random walk \(\Delta x\) goes like \(t^{1/2}\); the power \(f^{3/2}\) is just the integral of that, which adds \(1\) to the exponent. Also, note that the "branches" with different locations of the black hole inevitably decohere from each other if we decide to trace over the degrees of freedom in the outgoing Hawking radiation.)
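The \(f^{3/2}\) dependence may be made explicit by a one-line integral (treating the evaporation rate \(dt/df\sim G^2 M^3\) as constant, which is enough at this order-of-magnitude level of accuracy):\[

\Delta x(f)\sim\int_0^f v(f')\,\frac{dt}{df'}\,df'\sim \frac{G^2 M^3}{M\sqrt{G}}\int_0^f \sqrt{f'}\,df'\sim f^{3/2}\,G^{3/2}M^2.

\] The prefactor \(G^2M^3/(M\sqrt{G})=G^{3/2}M^2\) reproduces the \(f\)-independent part of the estimate above, and the integration of \(\sqrt{f'}\) supplies the promised extra power.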

It's very important not to overlook the other branches of the evolution, including the inevitably nonzero branches in which Alice never falls into a black hole, if we want to verify that the information is conserved. AMPS failed in this test of accuracy. They were imagining that the full exact quantum evolution contains only "one particular Alice with one particular life story", which is one of the typical errors in which people incorrectly use classical reasoning in the quantum realm. To locate the blunder even more precisely, it is the error in which people "erase" the other terms in the wave function ("collapse" the wave function) prematurely because they just feel uncomfortable about the superpositions and want to return to the classical world (where all properties are objectively and uniquely determined) as soon as possible. But in this case (much like others), the "collapse" is premature, indeed, because the preservation of the information may depend on the interference of the many branches in which Alice fell into a black hole and the branches in which she hasn't.

(In fact, AMPS – and others – are not only making the error of assuming that Alice's being inside a black hole is a classical fact. They often want to determine Alice's position relative to the horizon with quite some amazing precision. This contradicts the inevitable inaccuracy resulting from the random-walk character of the Hawking evaporation. The stunning similarity of this mistake to Einstein's mistake during his debates with Bohr – those about Einstein's box – suggests that physicists have learned almost nothing from these debates.)

So yes, Hsu's paper is another one that helps to settle my original suspicion that the error made by Polchinski et al. ultimately boils down to their misunderstanding of the foundations of quantum mechanics, something that they (much like others) try to "reshape" in a far too classical way.

And that's the memo.



P.S.: I can't resist mentioning that in 2005, when Stephen Hawking admitted that the information wasn't lost after all, he also offered his own proof that it isn't. When properly reorganized, most of the histories that contribute to the final Hawking radiation are histories in which the black hole is completely avoided, he argued. This 2005 claim due to Hawking (which wasn't uncritically accepted as "yet another full proof", just to be sure) is not quite the same thing as the claim in Hsu's paper – that it's important that Alice herself may avoid the black hole interior – but it has a similar spirit because it emphasizes the importance of histories and observers that never see the black hole interior. One may only derive the paradoxical "information is lost" conclusion if one assumes that the chances are 100% that the black hole forms and that an observer sees the interior; the probability that this ain't so, while it may be small, is always important for fixing the qualitative conclusion and seeing that the information is preserved.
