Tuesday 22 January 2013

Today, the black hole firewall saga continues with two new preprints:
Quantum Computation vs. Firewalls by Harlow and Hayden

Black Hole Complementarity and the Harlow-Hayden Conjecture by Susskind
They appear as the first two papers on today's hep-th listing, rather clearly signalling that the authors consider them important. Both papers are written as gifts for John Preskill's 60th birthday: Congratulations!

Recall that last summer, Almheiri, Marolf, Polchinski, and Sully (AMPS) used basic facts about quantum information in general, and the monogamy of entanglement in particular, to argue that black hole interiors can't be empty and that the usual lore about black hole complementarity isn't internally consistent.
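For the record, here is a schematic version of the monogamy argument (my paraphrase, with \(R\) the early Hawking radiation, \(B\) a late outgoing mode, and \(A\) its interior partner, matching the notation used below): smoothness of the horizon would require \(A\) and \(B\) to be maximally entangled, so \(S_{AB}=0\), while unitarity after the Page time would require \(S_{RB}<S_R\). But strong subadditivity says
\[
S_{RB}+S_{AB}\;\geq\; S_B+S_{RAB},
\]
and with \(S_{AB}=0\) and \(S_{RAB}=S_R\) (the pure \(AB\) pair factors out) this forces \(S_{RB}\geq S_R+S_B>S_R\). The two requirements clash, which is why AMPS conclude that one of the postulates – in their telling, the smoothness of the interior – has to go.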

Their argument is flawed because they assume that the degrees of freedom in the black hole interior are independent of the degrees of freedom describing the exterior (which is what is needed for them to claim that these sets of variables are "two distinct wives"); in other words, they are assuming from the start that black hole complementarity in its usual sense can't operate. They also incorrectly assume that the "natural" observables that may be measured are defined "in advance". In reality, the "natural" observables that decohere and that may be easily measured depend both on the Hamiltonian/dynamics and on the state of the system. In the black hole case, it means that an observer may ultimately "project" the same Hilbert space onto very different bases, depending on whether or not she falls into the black hole.
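A toy example of the basis dependence I have in mind (just a two-qubit illustration, nothing more): the same Bell state may be written as
\[
\frac{|00\rangle+|11\rangle}{\sqrt{2}} \;=\; \frac{|{+}{+}\rangle+|{-}{-}\rangle}{\sqrt{2}},
\qquad |\pm\rangle=\frac{|0\rangle\pm|1\rangle}{\sqrt{2}},
\]
so the two qubits are perfectly correlated both in the \(Z\) basis and in the \(X\) basis. Which of these correlations is the "natural" one to talk about isn't written into the Hilbert space in advance; it is decided by the dynamics and by which observables actually decohere – and, for the black hole, by whether the observer stays outside or falls in.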

The first new paper above concludes that firewalls aren't necessary. However, the reason why they aren't necessary is claimed to be subtle.




Harlow and Hayden say that in order to demonstrate their contradiction in practice, AMPS would need a quantum computer, and a very fast one at that.

However, the two authors of the new paper claim – they don't "quite" prove it, but they justify it – that even a quantum computer (which we imagine to be a very fast device) actually needs too much time to complete the calculation. The required time scales like \(\exp(CS)\) where \(S\) is the entropy of the black hole. Once this delay is acknowledged, one sees that the paradox can't be demonstrated in practice – and not really "in principle" either, because these "practical" limitations are completely universal.
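Just to get a feeling for the numbers (a back-of-envelope estimate in Planck units, with prefactors dropped): a solar-mass black hole has \(S\sim 10^{77}\), and its evaporation time scales like \(S^{3/2}\) Planck times, which works out to the famous figure of roughly \(10^{67}\) years once the prefactors are restored. The Harlow–Hayden decoding time, on the other hand, behaves (for \(C\) of order one) like
\[
t_{\rm decode}\sim e^{CS}\sim 10^{10^{76}}\ \text{Planck times},
\]
so \(t_{\rm decode}\gg t_{\rm evap}\): the black hole – and any observer who might have wanted to jump into it – is long gone before the quantum computer finishes its job.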

I don't think that such subtle "performance of quantum computation" considerations are needed to show that the existing arguments that firewalls have to exist are flawed. However, it's reassuring that at least their binary conclusion seems to be right: firewalls probably don't exist. To say the least, the authors show that there is one additional loophole that would have to be closed before the extraordinary claim about firewalls could become credible.

Leonard Susskind wrote a "review" of these matters, including the new paper above, which was submitted just minutes before his own paper (and thus, acausally, appeared only after Susskind's paper was written) – officially speaking, Susskind clearly doesn't need an exponentially long time to write papers. ;-)

At the beginning of Section 4.2 (now the bottom of page 24), Susskind recounts an argument, quoted from AMPS, that the interior and exterior bits can't be the same because that would amount to a time machine. These comments make no sense to me. The only inconsistency that causality considerations may lead to is a "closed timelike curve": if Y evolves from X, Z evolves from Y, and X evolves from Z, then this chain may force X to be something other than it was by assumption.

However, no such paradox may occur in this black hole setup. Even if one calculates a qubit in the region A from the Hawking radiation that is already out, there is nothing paradoxical about it. In fact, it's nothing else than a calculation of a prediction. In an appropriate slicing, the degrees of freedom in A (the interior) appear in the future relative to the Hawking radiation used as the input. So you have just calculated a qubit in the future out of the data in the past. That's what physics always does.

Because the world is quantum, you will only get probabilistic predictions for the qubits in A. Most black hole microstates for a fixed mass look like a nearly perfectly smooth and spherical black hole with an empty interior, so if you calculate a low-energy field-theoretical mode in the black hole interior A, you will get a probability approaching 100% that the mode appears just like it does in the vacuum state – the black hole interior is empty. There can't be any paradox here. This calculation is completely analogous to, and no more mysterious than, the calculation of the temperature (or other macroscopic properties) of the center of a nuclear bomb that exploded a moment ago and emitted some radiation. It simply can be done.

A difference between the black hole and the nuclear bomb is that the black hole interior has degrees of freedom that aren't quite independent of those outside. But this doesn't increase the chances of a paradox. Even if you calculate some "unnatural" qubit describing the black hole interior by which it differs from empty space (the difference is really tiny, so it's questionable whether an observer inside the black hole could ever observe such a thing), it wouldn't be a problem because it's nothing else than a prediction of a future observation.

The black hole interior may always be viewed as the "exterior events extrapolated to a particular direction in the future, a dead end", and because no data about the measurements may get out of the interior by any controllable information transfer, there can't ever be any causal paradox here. I feel that when they describe this "time travel" paradox, they are incorrectly thinking about the black hole interior as some set of data about the "distant past". But it is not a distant past; the black hole interior is always just an inconsequential "dead end" that evolved in the direction "to the future" from some region of the surrounding spacetime.
