Saturday 29 September 2012

Gore's investment firm: no green investments

Al Gore has repeatedly said that he was putting his money where his mouth is. However, it was just revealed that in the real world, Al Gore is stealing the money where almost every green criminal steals it, and he is putting it where almost every rich person or investor puts it. Be sure that these two places are very different from one another.



WND.com just published a very interesting text with the title:
Al Gore bails from green energy investment
Bill Gunderson, the president of Gunderson Capital Management, has looked at the portfolio of Al Gore's investment management firm, Generation Investment Management. In fact, you can look at the list yourself; it's at the SEC website:
SEC about Gore's firm
What firms will you see there?




You may check every individual company listed over there – I haven't done so but Gunderson claims that none of them is concerned with carbon reduction or alternative energy sources. In particular, none of the companies produces solar panels, wind turbines, or biofuels. Instead, what you find are mundane commercial real estate, biotech, and healthcare companies, aside from Amazon, Procter & Gamble, Colgate-Palmolive, Polypore (whose stock holdings they doubled; the company produces membranes for batteries and filters), and others.

In general, the green companies were never promising or profitable by themselves. They have always relied on subsidies and market distortions guaranteed by corrupt or just plain stupid politicians controlled by special interest groups such as the one around Al Gore. But even the subsidies are dropping and Al Gore, or at least people in his company, know it very well.

So the climate is the place where Al Gore earns his money starting from nothing – from insane speaking fees, from sponsors, from carbon indulgences he prints himself, and so on. But once he has already stolen all these millions, he can't afford to throw them into the trash bin known as the green economy because it was so hard for him to amass all these funds through decades of lying and fraud! He invests them just like everyone else.

I wonder whether during one of the future "climate reality days" that Gore organizes, he will reveal the reality of the climate's absence in his investments.

Friday 28 September 2012

Raphael Bousso is right about firewalls

Five days ago, I reviewed the discussions on black hole firewalls, started by AMPS in July 2012. Joe Polchinski wrote a guest blog for Cosmic Variance yesterday.

In the intervening days, I had enough time to sort out all these issues and I am confident that Raphael Bousso is right and Polchinski and others are wrong.




Here is Raphael's 33-minute talk from Strings 2012



and here is his July 22nd paper:
Observer Complementarity Upholds the Equivalence Principle
(The two men who now disagree about firewalls were accomplices back in 2000, when they helped ignite the anthropic coup d'état in string theory with their Bousso-Polchinski landscape (then: discretuum) paper.)

In fact, I would say that the paper is very clear, crisp, and even shows the author's understanding of some basic features of quantum mechanics – features that others unfortunately keep on misunderstanding. That's a very different verdict from my verdict about the nonsensical MWI-inspired Bousso-Susskind hypermultiverse, isn't it?

Bousso defends the complementarity principle. What this principle really means has often been misinterpreted. For example, some people said that the black hole interior contains all the degrees of freedom that one may measure outside the black hole. This is clearly nonsense. The interior contains at most a scrambled version of a part of the exterior degrees of freedom.

Raphael nicely avoids many of the confusions by introducing a refined version of the complementarity principle, the so-called observer complementarity. It's a typically Boussoian concept – and one could argue that he has greatly contributed to this sort of thinking. In the firewall discussion, this Boussoian thinking is extremely natural and arguably right. If I add some "foundations of quantum mechanics" flavor to the principle, it says:
Quantum mechanics is a set of rules that allows an observer to predict, explain, and/or verify observations (and especially their mutual relationships) that he has access to.

An observer has access to a causal diamond – the intersection of the future light cone of the initial moment of his world line and the past light cone of the final moment of his world line (the latter, the final moment before which one must be able to collect the data, is more important in this discussion).

No observer can detect inconsistencies within his own causal diamond. However, inconsistencies between "stories" as told by different observers with different causal diamonds are allowed (and mildly encouraged) in general (as long as there is no observer who could incorporate all the data needed to see an inconsistency).
While complementarity grew out of technical features of quantum gravity, you may see that this observer complementarity version of it sounds just like some of Bohr's (or Heisenberg's) pronouncements. It shouldn't be surprising because it was Bohr who introduced the term "complementarity" into physics and the general idea was really the same as it is here.

Bohr said that physics is about the right things we can say about the real world, not about objective reality, and that it has to be internally consistent. However, in the context of general relativity, internal consistency doesn't imply that there has to be a "global viewpoint" or "objective reality" that is valid for everyone. This is analogous to the statement in ordinary quantum mechanics of the 1920s that a complete physical theory doesn't have to describe the position and momentum (or particle-like and wave-like properties) of a particle at the same moment.


Polchinski, who is the most senior figure behind the recent crazy firewall proposal, is not only the father of D-branes and other discoveries but also the author of perhaps the most standard graduate textbook of string theory.

Bousso shows that AMPS are inconsistently combining the perspectives of different observers in order to deduce their desired contradictions. But this is illegal because no observer has access to things outside his causal diamond, and therefore no observer can operationally demonstrate any contradiction. In the Penrose diagrams below, you see that no observer may observe both the matter right before it is destroyed by the singularity and the late Hawking radiation. You must either sacrifice your life and jump into the black hole or you must stay out: you can't do both things simultaneously.

I recommend you read Bousso's whole paper, which is just 7-9 pages long, depending on how you count it. However, a sufficient screenshot that explains all his resolutions is Figure 1:



Bousso's caption for Figure 1: The causal diamond of an infalling (outside) observer is shown in red (blue); the overlap region is shown in purple. Observer complementarity is the statement that the description of each causal diamond must be self-consistent; but the (operationally meaningless) combination of results from different diamonds can lead to contradictions.
  • (a) Unitarity of the Hawking process implies that the original pure state \(\Psi\) is present in the final Hawking radiation. The equivalence principle implies that it is present inside the black hole. If we consider both diamonds simultaneously, then these arguments would lead to quantum xeroxing. However, no observer sees both copies, so no contradiction arises at the level of any one causal diamond.

  • (b) Unitarity implies that the late Hawking particle B is maximally entangled with the early radiation A (see text for details). At the earlier time when the mode B is near the horizon, the equivalence principle implies that it is maximally entangled with a mode C inside the black hole. Since B can be maximally entangled only with one other system, this constitutes a paradox. However, no observer can verify both entanglements, so no contradiction arises in any single causal diamond.
Therefore, it is not necessary to posit a violation of the equivalence principle for the infalling observer.

LM: Let me repeat those observations again. There are two possible paradoxes we may face but both of them are resolved by a careful application of observer complementarity: the xeroxing paradox and the firewall paradox.

The xeroxing paradox is the observation that the matter that has collapsed into a black hole carries information and the information may get imprinted in two places – somewhere inside the black hole and in the Hawking radiation. These two places may even belong to the same spatial slice through the spacetime. Such a doubling of information is prohibited by the linearity of quantum mechanics. Despite the existence of the "unifying" spatial slice, there is no contradiction because there is no observer who could have access to both copies of the same quantum information, no causal diamond that would include both versions of the same qubit. That's why no particular observer can ever discover a contradiction and this is enough for the consistency of the theory according to the quantum mechanical, "subjective" standards.
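The linearity argument against xeroxing can be checked in a few lines of numpy. This is a toy sketch of my own (not anything from Bousso's paper): define a would-be cloner by its action on the basis states and watch linearity force a different output on superpositions than true cloning would demand.

```python
import numpy as np

# Basis states and a generic superposition
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (ket0 + ket1) / np.sqrt(2)

# Define a linear map by its action on the basis: |0> -> |00>, |1> -> |11>.
# Linearity then fixes its action on psi to be a superposition of the clones:
linear_output = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# True xeroxing would instead demand |psi> -> |psi> (x) |psi>:
cloned_output = np.kron(psi, psi)

# The two outputs disagree, so no linear evolution can copy a general state
overlap = abs(np.dot(linear_output, cloned_output))**2
print(overlap)   # 0.5, far from 1
```

This is just the standard no-cloning theorem in four lines of algebra; the linearity of quantum evolution is the only assumption used.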

The firewall paradox on the right picture was proposed by AMPS. A late Hawking particle B may be shown to be maximally entangled both with some early Hawking radiation's degrees of freedom A as well as with some degrees of freedom inside the black hole C. In quantum mechanics, a system can't be maximally entangled with two other systems. But this is not a problem because no single observer able to access one causal diamond is able to verify both maximal entanglements. In fact, the "old" version of the complementarity could be a legitimate – although less accurate – explanation of the resolution here, too: the degrees of freedom in C aren't really independent from those in A.

You may (approximately or accurately?) say that C is a scrambled subset of A so when you say that B is maximally entangled both with A and C, it is not entangled with two things because the relevant degrees of freedom in A and C are really the same! It's a similar situation as if you considered a "zig-zag" spatial slice of the spacetime that happens to contain 2 or 3 copies of the same object at about the same time. All "xeroxing-like paradoxes" would be artifacts of this slice that pretends the independence of things that aren't independent.
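The "monogamy" fact used above – that if B is maximally entangled with A, there is nothing left for C – can be illustrated numerically. A minimal numpy sketch (my own toy example, not from any of the papers): put B in a Bell pair with A, leave C in a product state, and verify that the mutual information between B and C vanishes.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Three qubits ordered (A, B, C): B forms a Bell pair with A, C sits in |0>
bell_AB = (np.kron([1.0, 0], [1.0, 0]) + np.kron([0, 1.0], [0, 1.0])) / np.sqrt(2)
psi = np.kron(bell_AB, [1.0, 0]).reshape(2, 2, 2)   # indices a, b, c

# Reduced density matrices via partial traces
rho_B  = np.einsum('abc,aec->be', psi, psi.conj())
rho_C  = np.einsum('abc,abe->ce', psi, psi.conj())
rho_BC = np.einsum('abc,ade->bcde', psi, psi.conj()).reshape(4, 4)

S_B, S_C, S_BC = entropy(rho_B), entropy(rho_C), entropy(rho_BC)
I_BC = S_B + S_C - S_BC   # mutual information between B and C

print(S_B)    # 1.0 bit: B is maximally entangled (with A)
print(I_BC)   # 0.0: monogamy leaves no correlation between B and C
```

If you instead tried to write down a state where B is maximally entangled with A *and* with C, the calculation would force \(I(B:C)>0\) while \(S_B\) is already saturated – exactly the contradiction the text describes.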

So there is no paradox, one doesn't have to sacrifice the equivalence principle, the infalling observer may still enjoy a life that continues even after he crosses the event horizon of a young or old black hole, and everything makes sense. It really looks to me as though Polchinski et al. have denied the very essence of complementarity – whatever precise formulation of it you choose. Maybe they were also misled by some of the usual "realist" misinterpretations of the wave functions and density matrices – for example by the flawed opinion that it must be "objectively and globally decided" whether some system is described by a pure or mixed state, by qubits that are entangled with someone else or not.

Such questions can't be answered "objectively and globally": they should be answered relative to the logic and facts of a particular observer, and whether a system is in a pure state or a mixed state is really just a question about the subjective knowledge of this observer, not an "objective question" that must admit a "universally and globally valid answer". In the case of general relativity, especially in spacetimes that are causally nontrivial, this subtlety is very important because individual observers can't transcend their causal diamonds, so mutually incompatible perspectives that cannot be "globalized" are not only allowed but, in fact, common.

So applause to Raphael Bousso, ladies and gentlemen.

Update: Polchinski responds to Bousso

Joe Polchinski thinks that Bousso's picture has a bug. A comment on CV:
Bousso (and others) want to say that an infalling observer sees the mode entangled with a mode behind the horizon, and the asymptotic observer sees it entangled with the early radiation. This is an appealing idea, and was what I initially expected. The problem is that the infalling observer can measure the mode and send a signal to infinity, giving a contradiction. Bousso now realizes this, and is trying to find an improved version. The precise entanglement statement in our paper is an inequality known as strong subadditivity of entropy, discussed with references in the wikipedia article on Von Neumann entropy.
I would need a picture and details. Where is the measurement taking place? What is the new contradiction? If he measures C inside the black hole, then he clearly can't send it to infinity for causal reasons. If he sends it right before he falls in, in the B phase (old hole's radiation), the information comes out redshifted and hardly readable as a part of the information in the Hawking radiation; it's the same information that we already count as B. If the infalling observer makes the measurement in the stage A, i.e. even earlier than that, then it's irrelevant because that's really the initial state in which the existence of the black hole doesn't play any role yet. If the measurement is done so that it's available to observers near the singularity as well as those after the black hole evaporates, it's just a fact that both of these observers share. It makes no sense to say that this piece of information is later entangled with anything else: once a qubit or another piece of quantum information is measured, it's no longer entangled with anything else! When I measure a spin to be up, the state of the whole system is \(\ket{\rm up}\otimes \ket{\psi_{\rm rest}}\) and similarly for the density matrices. No entanglement here.
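The claim that a measured qubit is no longer entangled with anything can be verified directly. A small numpy sketch (my own illustration, entropies in bits): project a Bell pair onto the "up" outcome of the first spin and watch the entanglement entropy drop to zero.

```python
import numpy as np

def entanglement_entropy(state):
    """Entropy (bits) of the first qubit of a two-qubit pure state."""
    psi = state.reshape(2, 2)
    rho = psi @ psi.conj().T          # reduced density matrix of qubit 1
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

bell = np.array([1.0, 0, 0, 1]) / np.sqrt(2)   # (|up,up> + |down,down>)/sqrt(2)
ent_before = entanglement_entropy(bell)        # 1.0 bit

# Measure the first spin and find it "up": project onto |up> and renormalize
P_up = np.kron(np.diag([1.0, 0]), np.eye(2))
after = P_up @ bell
after = after / np.linalg.norm(after)
ent_after = entanglement_entropy(after)        # 0.0: the state factorizes

print(ent_before, ent_after)
```

After the projection the state is exactly \(\ket{\rm up}\otimes\ket{\rm up}\), a product state, matching the statement in the paragraph above.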

So whatever method I choose to read Polchinski's reply to Bousso, it makes no sense to me.

President Klaus survives "assassination attempt"

There's a national holiday in the Czech Republic today. We call it the Day of Czech Statehood.

On September 28th, 935 – or perhaps 929 – Good King Wenceslaus (later St Wenceslas, the patron of the Czech nation), whom you know from the Christmas carol, was murdered by his brother Boleslav the Cruel.

It's probably unusual to celebrate the anniversary of an assassination but that's how it works here. No one knows the exact birthday of the king in 907, anyway.



Today is also the name day of all people called Václav (=Wenceslaus) – a traditionally popular Czech first name that was also recycled by several Czech kings of a later era but one that has begun to disappear among newborns. Despite the decline of the name, it's been an unwritten rule that the presidents of the Czech Republic have to be called Václav: Havel and Klaus both bore this name. To make things dramatic, the Bashar Assad lookalike in the picture above shot the current leader, Václav Klaus, today.




I couldn't tell you before the fold because the punch line would be lost. Fortunately, the bullets were made of plastic and the gun was mostly intended for children. ;-) Nevertheless, the incident shows a failure of Klaus' bodyguards. After the "assassination attempt", Klaus continued his discussions with the citizens. He was in the small town of Chrastava to reopen an Art Nouveau bridge that was damaged by floods two years ago.



This is the gas revolver that was fired at Klaus today.

Today, Klaus also spoke at an event of the Catholic Church where he urged society to protect traditional values against various sorts of "progressives" – well, he used a funnier and somewhat more obviously derogatory Czech word, "pokrokáři" (Progressers? Progressists?) – but the content is clearly the same, anyway. :-)



Some of the Catholic pranksters attending the St Wenceslaus Pilgrimage demanded that Václav Klaus be canonized.

If you have the nerve, you may watch those not-so-dramatic moments. Don't expect the assassin to be some extremist with clear anti-Klaus opinions. After his shooting, TV Prima reporters simply asked him why he pulled out the gun. He replied with a formally coherent sentence: "Because the politicians are deaf and blind to the whining of the people." ;-) Just a countryside moron.

Update: the shooter, Mr (probably not Dr) Pavel Vondrouš, actually had links with the communists but they kicked him out of the party. On his Facebook account, he says: "Pokut se mespojime vase deti se budou myt hur." This has four huge and embarrassing grammar errors, even if I ignore the missing diacritics, and it means "Iv we ton't join our forces your children wyll be worse off."

Exotic branes, U-branes, and U-folds

TBBT: All American readers should notice that the first episode of the 6th season of The Big Bang Theory was aired by CBS yesterday. Howard was in space.

Holes: Joe Polchinski wrote a Cosmic Variance guest blog on black hole firewalls.

SUSY: Dave Goldberg at IO9 asks what's so super about it.
The first hep-th paper on the arXiv today is a wonderful 94-page-long article
Exotic Branes in String Theory
by Jan de Boer and Masaki Shigemori of Amsterdam. It is a kind of excursion into the inner workings of string theory that I find extremely important and playful, and I think that similar papers are heavily undercited, underread, and understudied.

The Dutch and Japanese string theorists investigate a certain unusual type of objects whose existence depends on special properties of string theory and which don't exist in regular quantum field theories etc. That's why they are called "exotic". However, as the authors argue, within string theory, these objects are common, unavoidable (they arise, for example, during "polarization" of ordinary branes), and therefore omnipresent.




What are those objects?

They are generalized codimension-2 branes (codimension-2 means that there exist 2 dimensions that don't stretch along these branes' world volume) such that the monodromy around them induces a non-trivial transformation on the string theory configuration space, namely a U-duality (T-duality is already bizarre enough).

There exist "rather conventional" codimension-2 branes in string theory, namely 7-branes in type IIB string theory or, more generally, in F-theory. The exotic branes discussed in this paper may be viewed as their generalization outside type IIB string theory in ten dimensions, in vacua with extra compact dimensions (F-theory 7-branes are the special case without compact dimensions, except for the hidden F-theoretical torus fiber, of course).

First, let me start with anyons which also show us "special properties" of the codimension-2 case.

In 3+1 dimensions, we know that particles may obey Fermi-Dirac statistics or Bose-Einstein statistics. That's true in higher dimensions, too. If you exchange two particles' locations, the wave function either changes sign or it doesn't change at all. These are the only two choices because if you repeat the transposition twice, you literally get to the original state, so not even the phase can change. However, in 2+1 dimensions, there are new solutions because if you rotate a particle around another particle twice, the permutation can't be "unwound" to prove that it's the identity. Consequently, a more general phase – and even a general unitary transformation – may transform the wave function. That's what the anyons – particles with exotic statistics allowed in 2 spatial dimensions – are all about. Note that they are point-like particles localized in 2 spatial dimensions: they are codimension-2 objects.
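The counting above can be turned into a tiny numerical illustration (my own sketch): model the exchange of two identical particles as a phase, check that in 3+1 dimensions the squared phase must be 1, and note that in 2+1 dimensions any phase – e.g. a "semion" phase of \(i\) – is perfectly consistent.

```python
import numpy as np

# Exchanging two identical particles multiplies the wave function by exp(i*theta)
def exchange_phase(theta):
    return np.exp(1j * theta)

# In 3+1 dimensions, a double exchange can be continuously unwound to "doing
# nothing", so the phase must satisfy phase**2 == 1: only theta = 0 (bosons)
# and theta = pi (fermions) survive.
for theta in (0.0, np.pi):
    assert abs(exchange_phase(theta)**2 - 1) < 1e-12

# In 2+1 dimensions the double exchange is a nontrivial braid, so any theta is
# allowed. Example: a "semion" with theta = pi/2 picks up a factor of i.
semion = exchange_phase(np.pi / 2)
print(semion)      # 1j, up to rounding
print(semion**2)   # -1: the double exchange is observable, unlike in 3+1D
```

The nontrivial value of `semion**2` is exactly the statement that a full loop of one anyon around another cannot be unwound in two spatial dimensions.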

Another example of codimension-2 objects are 7-branes in F-theory.

D6-branes and lower-dimensional D-branes are localized in a sufficient number of dimensions so they look like tiny enough perturbations of the spacetime that surrounds them. But as you go to higher-dimensional branes, there is just not enough room around them, and this has consequences – despite the fact that all the D\(p\)-branes seem to be T-dual to each other.

D9-branes are spacetime-filling (codimension-0) and the total charge they carry had better cancel: this is really the generalized Maxwell equation for the 10-form potential or the 11-form field strength. Because the 11-form vanishes identically, as you may have screamed when you were reading the previous sentence, the generalized Maxwell's equations extending \(\partial_\mu F^{\mu\nu}=j^\nu\) really say just \(j^\nu=0\), except that the current should have 10 indices. That's why you may only study D9-brane-anti-D9-brane systems with tachyons (which make the backgrounds unstable or interesting) or you may consider vacua with orientifolds that also contribute to the current; these lead you to type I string theory, which has to have 16 D9-branes and their mirror images "behind" the orientifold plane, thus generating the \(SO(32)\) gauge group.

D8-branes are codimension-1 objects. They make the dilaton of type IIA string theory run very quickly unless the total number of them is again zero – or whatever it should be. At the end of the world in type IA string theory, the T-dual of type I, you may combine the D8-branes with the orientifold plane and this gives you degrees of freedom that may become the \(E_8\) gauge supermultiplet in the M-theory limit.

D7-branes are the first example in which the branes leave some breathing space, but it is very limited. A funny thing is that loops around the D7-branes can't be unwound; this has similar geometric reasons as the anyons. Consequently, you may objectively answer the question "how many laps around the D7-brane have you walked?", something that would be impossible for lower-dimensional branes. Therefore, each "orbital period" around such D7-branes may induce a transformation on the other fields as long as this transformation is a gauge symmetry.

In particular, D7-branes induce a transformation on the dilaton-axion complexified field \(\tau\). As you should know, there is a whole \(SL(2,\ZZ)\) symmetry – S-duality group of type IIB string theory – that acts on this field \(\tau\) as\[

\tau\to\tau' = \frac{a\tau+b}{c\tau+d},\quad ad-bc=1, \quad \{a,b,c,d\}\subseteq\ZZ.

\] If you look at the simplest case of a 7-brane, the D7-brane, you will find out that it only changes the axion \(\theta\) by \(2\pi\) – because it measures some kind of electromagnetic flux around the brane – which means that the monodromy is \(\tau\to\tau+1\). In other words, the monodromy matrix is\[

\pmatrix{a&b\\c&d} = \pmatrix{1&1\\0&1}.

\] That's the generator \(T\) of the \(SL(2,\ZZ)\) group. Its order is infinity: any power of it is inequivalent. You may place \(n\) such D7-branes on top of each other. The value of \(b\) will change to \(b=n\) and the "stack of D7-branes" at the origin will carry the usual \(U(n)\) group you expect perturbatively.

However, D7-branes aren't the only possible 7-branes in F-theory or type IIB string theory. Pretty much any monodromy may be induced if you make a round trip around the origin where the 7-branes are located. In this way, you may get a rich spectrum of 7-branes, i.e. codimension-2 singularities in F-theory, and they carry various gauge groups (including exceptional ones) and may support matter in unexpected representations when they intersect, and so on. The possible "singular fibers" – which are pretty much the same thing as the 7-branes – are classified by the Kodaira classification, and all these things are essential for the model building of realistic string models of reality via F-theory. It's a field that's been investigated since the mid 1990s.

So the possible monodromies around the 7-branes are elements of \(SL(2,\ZZ)\) which is a non-Abelian group. That makes the 7-branes more general than lower-dimensional branes because the monodromy is what measures the total charge; recall that \(b=n\) for the stack of D7-branes. So the charge of a 7-brane is a non-Abelian object. You can't just add the 7-brane charges as if they were commuting objects.
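These statements about monodromy matrices are easy to play with numerically. A minimal sketch (the convention chosen for the \(S\) generator is one common choice, an assumption of this toy code): the stack of \(n\) D7-branes indeed has \(b=n\), and the generators don't commute, which is the non-Abelian nature of 7-brane charges.

```python
import numpy as np

# SL(2,Z) generators in one common convention (an assumption of this sketch):
T = np.array([[1, 1], [0, 1]])    # monodromy of one D7-brane: tau -> tau + 1
S = np.array([[0, 1], [-1, 0]])   # tau -> -1/tau

def act(M, tau):
    """Moebius action of an SL(2,Z) matrix on the axio-dilaton tau."""
    (a, b), (c, d) = M
    return (a * tau + b) / (c * tau + d)

# A stack of n D7-branes carries the monodromy T^n; the entry b counts n:
n = 5
Tn = np.linalg.matrix_power(T, n)
print(Tn)              # [[1 5] [0 1]]
print(act(Tn, 1j))     # tau = i is shifted to 5 + i

# The charges are non-Abelian: encircling two different 7-branes in opposite
# orders gives different total monodromies, because S and T don't commute.
print(np.array_equal(S @ T, T @ S))          # False
print(round(float(np.linalg.det(S @ T))))    # 1: the product is still in SL(2,Z)
```

The non-commutativity in the last lines is the concrete reason why 7-brane charges can't simply be added like integers.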

That's interesting but there's still something "ordinary" about 7-branes in F-theory: the \(SL(2,\ZZ)\) transformations – which appear as monodromies – may be geometrically visualized. After all, that's what F-theory is all about. You imagine that the 10-dimensional type IIB string theory is really a 12-dimensional theory compactified on a 2-dimensional torus. The two cycles of the torus form a basis of the first homology group. However, the basis isn't unique. You may reshuffle the two basis vectors by an \(SL(2,\ZZ)\) transformation – and it's the same transformation that is interpreted as the S-duality group of type IIB string theory.

Stringy monodromies

With the extra two dimensions, the whole spacetime looks like a purely geometric manifold – although a 12-dimensional one – and F-theory is indeed mostly the study of 12-dimensional manifolds that admit an elliptic fibration (in which you may pick two coordinates that always look like a torus attached to a point in the remaining dimensions). For example, realistic F-theory vacua with 3+1 large dimensions may be investigated by looking at possible 8-dimensional manifolds (of hidden dimensions) that are elliptically fibered.

However, the S-duality group \(SL(2,\ZZ)\) in string theory has its cousins, namely T-dualities and, even more generally, U-dualities. For example, M-theory on \(T^k\) has the U-duality group \(E_{k(k)}(\ZZ)\). Note that S-dualities exchange the weak coupling and the strong coupling (and may transform some axions, too); T-dualities exchange small radii and large radii and are valid order-by-order in the string perturbative expansion; U-dualities are more general groups of transformations that do a "combination" of what the S-dualities and T-dualities do.

And when the monodromies around the codimension-2 objects are general U-duality transformations, one obtains the exotic branes that this new paper (and some previous papers by these two authors and several other authors) is focusing on. They list lots of examples – within a certain set, they really classify all the possibilities – and they describe how these exotic branes inevitably appear when you polarize "ordinary" branes, i.e. when you study the U-duality multiplets for a sufficiently high number of compactified dimensions.

I must emphasize what the U-duality monodromies mean in plain English because it's really exciting.



Everyone knows what a Möbius strip is. When a letter "R" walks around the strip, it returns to the original place as a "Я" (it's the letter "ya" in the Cyrillic alphabet), i.e. "R" written on the opposite side of the strip. So the monodromy – a trip around the strip – induced a mirror transformation. If the Universe had a similar twist, you could go around and return as a human (or antihuman) with a heart on the right side. It can't really occur in our Universe because the left-right parity symmetry (P) isn't even a symmetry (and CP isn't a symmetry either) and you can't connect the Universe to a transformation of itself if the transformation isn't a symmetry of the Universe.

However, string theory allows you to construct more novel spacetimes. If you were going around the strip in the backgrounds discussed by these two authors, you could see that a string with a momentum along hidden dimensions (the counterpart of "R") would get morphed into a wound string (the counterpart of "Я"). Of course, the compact circle's radius has to be an aperiodic function of the angle: it must change from \(R\) to \(\alpha'/R\) as you go around the circle, too (for the "final wound string" to have the same mass as the "initial momentum mode string"). And even more complicated non-geometric (and therefore "non-local", when it comes to locations in the compact dimensions) transformations could be possible.
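The mass matching mentioned in the parenthesis can be checked explicitly. A minimal sketch of the standard closed-string formula for the momentum and winding contributions (oscillator levels suppressed, \(\alpha'=1\) by default; both are my simplifying assumptions, which suffice for the duality check):

```python
import numpy as np

# Standard momentum/winding contribution to the closed-string mass squared on a
# circle of radius R: n = momentum number, w = winding number.
def mass_sq(n, w, R, alpha_prime=1.0):
    return (n / R)**2 + (w * R / alpha_prime)**2

R = 3.7   # an arbitrary radius in string units

# T-duality: R -> alpha'/R while momentum and winding modes are exchanged
for n in range(-2, 3):
    for w in range(-2, 3):
        assert np.isclose(mass_sq(n, w, R), mass_sq(w, n, 1.0 / R))

print("spectrum invariant under R -> alpha'/R together with n <-> w")
```

This is exactly why the "final wound string" at radius \(\alpha'/R\) has the same mass as the "initial momentum mode string" at radius \(R\).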

Moreover, these authors claim that these objects are generic, unavoidable, and get produced by many mundane processes. In a previous paper, they discussed "U-folds" which attempt to be completely non-geometric backgrounds obtained by a decompactification of the exotic branes.

Exotic branes, fuzzballs, and superstrata: possible black hole information players

A final section is also dedicated to a fascinating discussion of exotic branes and black holes. The black hole microstates were conjectured by Samir Mathur to be described by particular "classical solutions with string-like hair". However, this description of the microstates as "fuzzballs" was only written down explicitly in some simple situations.

More generic black holes, if they may be included into the fuzzball paradigm at all, probably require a more general description of the microstates than just some "ordinary hairy solutions to the equations of SUGRA". And it's been proposed – and defended by some evidence – that the exotic branes could be a very important fraction of the black hole "hair" in the fuzzball description of more generic black holes. So if you make a round trip around pretty much any curve at the event horizon, it induces some insanely non-geometric, U-duality-like transformation. Search for "superstrata" in the new paper to learn something about this exciting paradigm. Some relevant papers about those issues can already be found in the literature.

This ability of the black hole horizon to easily induce "almost uncontrollable" transformations could be a part of the explanation why black holes are able to "thermalize" so quickly. Incidentally, when Witten was discussing the monster group connected with pure 2+1-dimensional gravity, I was led to consider very similar ideas. Transformations from the monster group may appear as various monodromies in "parts of a black hole". If you realize the monster group by the stringy construction that makes the monstrous moonshine manifest, the monster group transformations may be interpreted as some kind of T-duality.

What de Boer and Shigemori are doing may lead us to a rather new, more accurate perspective on "what string/M-theory looks like in the most generic situations", offer new tools to study some black hole puzzles, and maybe even bring us closer to the most universal possible definition of string/M-theory. That's why I am so annoyed by the small number of people who are working on similar essential questions.

And that's the memo.

Thursday 27 September 2012

Fifty years after Silent Spring

Conservationism or environmentalism as an ideology has its roots in Nazi Germany and one could probably trace it even further back.

But Western environmentalism in the form we are familiar with today was born exactly 50 years ago, when Rachel Carson released her book Silent Spring. It's such an inconvenient anniversary that most of the articles about it listed by Google News have been written by critics of environmentalism.


Some good articles were penned by Ronald Bailey, Roger Meiners, Matt Ridley, John Tierney, and Charlie Stenholm with John Block. Links via Benny Peiser.

Let me add a few words. Centuries ago, people were living in Nature and they understood it was cruel. So they had no romantic feelings about Nature: they had to hunt if they didn't want to starve to death (or become vegetarians), they had to be ready for other predators, cruel weather, and so on, and so on.

Sometime in the 19th century, the industrial Western civilization became advanced enough that most people were "stronger" than the usual natural foes. With great power comes great responsibility, as Voltaire wrote in a different context. It makes sense that people started looking at themselves – asking whether they weren't doing something wrong or harmful. We could find examples in which the answer was Yes – examples in which people changed their behavior because they just didn't want to destroy Nature, their own environment, or their health.

In the 19th century, our cities were already industrialized and the pollution was probably much higher than in recent decades. For example, the picture below shows what Emil Škoda's major factory in my hometown of Pilsen looked like in the 1880s:



There were already lots of chimneys and probably lots of pollutants. People didn't care much. We may be shocked by what they found tolerable because various emissions over there wouldn't be considered tolerable today. However, those 19th century people could actually have been more reasonable than we are. Our worries, and the policies inspired by them, may actually be causing more harm than improvement – they harm our psychology, optimism, and creativity, and through the bad policies, they harm our industries and prosperity in general.

Nevertheless, I said that it is right for humans to look critically at themselves, to have feedback mechanisms. But what Rachel Carson did was something else and almost entirely negative. She introduced all the basic pernicious features of environmentalism as an ideology that we're still witnessing and struggling against today. I would summarize them as the following bad habits and beliefs:
  1. A small effect, usually a hypothetical one, may be taken out of the context and inflated.
  2. The industrial activity (and human activity in general) may always be assumed to be bad for Nature.
  3. Claims that are compatible with the previous point may be presented as scientific even when there is no actual scientific evidence supporting them, or even when the evidence shows they are wrong.
  4. When a particular threat is no longer fashionable or powerful enough, when it "loses steam", the main problem with the human activity must be "continuously redefined" even though nothing has actually changed about the human activity or the scientific evidence.
Her particular book claimed that pesticides, and DDT in particular, would cause many diseases, including cancer, and would also lead to the extinction of many bird species. Some of her claims – not those about cancer – may have had a true core but the vast majority of her statements were rubbish, as we can easily see with the hindsight of the 50 years we have acquired by today.

As some of the articles mention, we are using twice the amount of pesticides that people were using 50 years ago. Still, cancer rates went down in the last 20 years because the actual main contributors to cancer – which are not really pesticides – went down. (Carson herself had breast cancer, had to be treated, got weaker, caught a respiratory virus, and died of a heart attack aged 56: so she's no recipe for longevity according to the best rules of Nature.)

Her particular concerns about the pesticides were mostly wrong and they are no longer relevant. No one talks about them anymore. Compare this irrelevance of all the details she wrote with the relevance of other texts written in the early 1960s, e.g. the papers about the Higgs boson. All those papers still matter and we (and a committee in Stockholm) still care about their details. While she contributed to the deaths of tens of millions of people in Africa that resulted from the ban of DDT, everyone knows that pesticides as a principle couldn't have been abandoned. The planet today only feeds 7 billion people because agriculture largely relies on genetically modified crops, pesticides, and other modern technologies. Without them, billions of people would have to starve to death.



So I don't want to talk about the particular topics she was hyping: they don't deserve it.

Instead, I want to say that she was a pioneer of an ideologically driven pseudoscience pretending to be science. When she talked about the life of birds and their interactions with the environment, it sounded like a science – ecology. When she talked about pesticides, it sounded like a science, too – some kind of biochemistry. So by the choice of words, she could pretend she was speaking as a scientist. The problem is that the claims she was making were never actually scientifically justified, at least not by good enough standards. They were ideological slogans. And she was one of the first people in the West who intensely insisted that the compatibility of a proposition with her ideology may replace the scientific rigor that is normally needed to establish scientific claims.

To prove her predetermined conclusion that the industrial activity was wrong, she picked a random technicality – some possible yet mostly hypothetical bad side effects of pesticides – and she inflated them out of proportion and added lots of accusations that weren't really true. At some moment, it became obvious that her scaremongering was indefensible (well, the population of birds was actually growing even when her very book was published) so the particular pesticide hysteria ended. However, what didn't end was her copyrighted dishonesty. It's been recycled many times.

While pesticides are pretty important for feeding mankind today, as I have mentioned, they're not really an essential and omnipresent part of the civilization. If we banned all pesticides, a fraction of humanity would die but the rest could continue their lives in pretty much the same way. Unlimited fear of pesticides was replaced by fear of the population bomb, the ubiquitous ozone hole, acid rain, radioactivity from nuclear power plants killing all life on Earth, a new ice age, and – finally – global warming. Some of the worries had a justifiable core; most of them didn't.

What the newest fearmongering wants to ban – carbon dioxide emissions – is much more universal and crucial for the civilization than pesticides (and vastly more omnipresent than e.g. freons that may have been genuinely harming the ozone layer – and even than the sulfur oxides that were certainly causing acid rains). We couldn't do most things if carbon dioxide emissions were "illegal".

Fifty years ago, the tiny "scientific" effect that was inflated out of proportion was some hypothetical lethal effect of pesticides on birds. These days, it's the greenhouse effect – an effect that is said to destroy the very climate, and therefore everything else with it. In both cases, they are rank-and-file, weak enough effects among millions of other effects that science may study and does study. But in both cases (and many other cases), the environmental movement promoted these phenomena to the most important processes that are taking place on Earth. Everything (at least in the modern agriculture) was about the lethal impact of the evil pesticides 50 years ago; everything is about global warming, Carson's posthumous children claim today.

The environmental movement loves to "worship" something as the "Devil" – sometimes it's the pesticides, sometimes it's the carbon dioxide. In reality, these "Devils" may sometimes cause a negative thing but most of the things they're causing are positive and it's surely scientifically indefensible to consider them "purely bad".

So the "principles" coined by Rachel Carson – which include the ability to mutate, see the last one – may be viewed as a dangerous infection that started to plague the mankind. An increasing number of people have been affected by this infection; and ever larger and more important technologies and industries were becoming potential victims of the proposed policies. These things have been happening – and are still happening – not because science would demonstrate some actual fatal threat but because it has become normal to promote exaggerations, pseudoscience, and downright lies as long as they are compatible with the ideology that the environmental movement holds dear.

Now that Carson's infection has grown to threaten all industries that emit carbon dioxide – and most of the crucial ones do – we are in trouble. Aside from the tens of millions of people that Carson helped to kill, this threat to the industrial civilization as we have known it – and the accompanying threat to the scientific method, which is increasingly squeezed by something that only pretends to be science – may be, to a rather large extent, blamed on Rachel Carson herself. That's why you shouldn't forget to spit on her grave if you ever visit Maryland.

And that's the memo.

Experimenters, SUSY, frustrations, and anthropic ideas

High-energy phenomenologists and experimenters may be entering a psychological stage that was predictable – and that, in fact, many of us including your humble correspondent were predicting. The LHC is a great machine that works very well and pushes many frontiers but it's not a "miracle machine" that may answer all open questions about physics, at least not within two years.



Fashion for naturalness

So almost everyone who is building his or her interest in fundamental physics on accessible experiments seems to be frustrated these days. The discovery of the Higgs boson may have made those emotions even worse – see e.g. Jester at Resonaances – in agreement with the "nightmare scenario" people would talk about 5 years ago and which we seem to be living through now, so far (the scenario in which the LHC only finds the Higgs boson and nothing else). Those of us who don't think that physics ends at \(8\TeV\) or \(14\TeV\) – and this includes most formal theorists, I would guess – are largely unaffected by this particular spleen, of course. ;-)

Note that patience is sometimes needed. Peter Higgs hasn't spent 50 years in depression even though he had to wait 48 years for the discovery of "his" goddamn particle – despite the fact that it's a very trivial quantum of a field that is more mundane than any other field we have found in Nature (it's spinless, stupid). That's a reason to think that the problem is with the people, not with the actual progress in physics. People just don't like physics as much as our predecessors did. They keep on whining and complaining, and I am annoyed by them because physics' current image of the world is richer and more accurate than it has been at any moment in the past.




There are clearly many important physics questions whose characteristic energy is higher than \(14\TeV\) and that still need to be resolved because they're essential for the inner workings of our Universe. Careful theoretical arguments and calculations must obviously play a crucial role in this process. Much like formal theorists don't care about the absence of new physics too much, experimenters don't care about theoretical arguments much. ;-)



Let me discuss an interesting example. Dr Lily Asquith of ATLAS wrote an interesting blog entry at the Guardian blogs today,
Desperately seeking SUSY.
Her title was inspired by an old paper by Ginsparg and Glashow.

The picture above was originally placed beneath the subtitle. It shows Brad Pitt with someone who is definitely not Angelina Jolie but this combination was claimed to show a "perfect symmetry". Later, the picture was replaced by a less blasphemous picture of Madonna as Susan:



If the writer posted her own picture, it wouldn't really hurt! ;-)



But let's return to the content. Asquith expressed some degree of skepticism about supersymmetry, an attitude that is somewhat widespread among the experimenters. This asymmetry isn't shocking: experimenters are focusing on what they see and they don't think "too much ahead of time" because it's simply not their job.

She seems to be aware of the basic "naturalness" arguments although her detailed ideas about this principle are often incorrect. For example, we learn that it's unnatural for the up-quark to be half as heavy as the down-quark. Well, this ratio isn't unnatural because it's of order one.

The LHC has already ruled out many explanations of naturalness – such as the Higgs compositeness or technicolor – and "some corners of supersymmetric theories' parameter spaces" are really the only surviving candidates to explain naturalness. If these soldiers fall in coming years, "naturalness" as defined by 't Hooft is simply not a good guide to understand the unbearable lightness of the God particle's being.

Only recently, I realized how many people in particle physics – and which people – were deeply affected by 't Hooft's particular paper that introduced naturalness. The paper contains many important conceptual ideas but when it comes to the question whether they're justified, it's a mixed bag. He really predicted some strong coupling near the electroweak scale, some compositeness or technicolor – and today we know that this prediction is almost certainly wrong.

The people whom I referred to in the previous paragraph – who include various undergraduate and even graduate particle physics instructors who taught me – viewed the "new strong dynamics" at the electroweak scale as "generic", "almost certain", and so on. I was never forced to read the paper in its entirety so I wasn't brainwashed. Maybe I am not alone in this "different generation" but I have always considered any ideas about the compositeness of the Higgs etc. to be totally awkward, contrived, unnatural, and unlikely. String theory seems to produce elementary scalar fields without any problems so it has always seemed unjustifiable to try to insert another layer of the onion in between the Higgs and more fundamental physics. The Higgs boson may also be light due to cancellations and other new principles (SUSY is the best example), something that was implicitly assumed to be impossible in 't Hooft's paper.

But imagine that there's just an elementary Higgs boson whose lightness is protected by no new physics at nearby scales. The anthropic explanation would get much more convincing but I think that strictly speaking, it would still not be established. It's still plausible that the underlying high-energy fundamental theory produces light fields for a certain mathematical reason to be understood – while this reason doesn't actually have to imply the existence of any new light fields.

Fine. Let me not go into it. The anthropic explanation of the light Higgs may in principle be right, too. The Higgs is fine-tuned to be light simply because it's needed for stars to be long-lived and, consequently, for life to exist. Ms Asquith doesn't like the anthropic principle, either. But because she doesn't seem to like any other explanation of the Higgs' lightness either, you may worry that she is an example of a permanent, habitual naysayer who has no solution but denounces all other attempted solutions.

Some people, like Nima Arkani-Hamed, think that this "anthropic vs non-anthropic" question is a defining, polarizing issue of contemporary physics, a crossroad that must send physics into one of two possible, vastly separated roads. This "extreme crisis moment" is intriguing and comments about it may be linked to Nima's immense temperament. But I, for one, have some doubts whether this polarization is as sharp as Nima is imagining. In fact, it seems plausible that the truth is somewhere in between.

Of course, it's wrong if someone proposes to use the anthropic slogans as an answer to – or as an excuse for – every currently open problem in physics. Many questions in the past – well, almost all of them – turned out to have non-anthropic explanations and there is no verse in the Bible that would imply that right after the year 2000 or 2012, a phase transition has to occur and all questions must be answered anthropically. This is, of course, nonsense. Currently open questions may allow for "conventional" physical explanations much like the questions open in the past turned out to have such explanations.

On the other hand, the laws and parameters of our Universe clearly have to be such that they allow life to exist because life does exist, after all. And whenever some features of Nature seem to be chosen "randomly", we must care about the probability distributions. While Nature may offer some cold hard lifeless distributions, it's clear that we only want to pick those that seem relevant to us. If Nature predicts zillions of universes – or a vast majority of universes – that don't allow life, it's not necessarily a contradiction.

The real question isn't whether the anthropic principle gives us a non-trivial explanation of something: it never does. All anthropic explanations are just tautologies or particular projections or aspects of the trivial observation that we're here. They're filters applied to the set of possible theories by looking at some function of the observed facts (even though the anthropic champions want to make their "existence of stars and life" sound more canonical than what they are, namely just some observed facts among many others). The real question is whether there exists a conventional physical explanation for a physics mystery. For many physics mysteries, we know that the answer is Yes: that's why they're no longer real mysteries. For others, it may be No.

Asquith mentions some of the other "unnaturally small numbers" that are known to have a conventional physics explanation. In particular, she reminds us that the proton and the neutron have very similar masses. They only differ by a small percentage of the proton mass. This approximate degeneracy is also needed for the existence of stars and life but we don't refer to those things when we explain the approximate degeneracy. Instead, we say that the proton and the neutron are two components of a doublet under an approximate symmetry, the flavor symmetry mixing two light quarks, namely up and down.

While the lightness of any quark may be unnatural, if two of them happen to be light at the same moment, it doesn't look excessively more unnatural. After all, whatever mechanism makes the up-quark light may make the down-quark light, too. So the explanation of the approximate degeneracy based on the approximate flavor symmetry seems to remove most of the mystery from the question.

However, we could perhaps find more complicated examples of "apparent fine-tuning" that still have some conventional explanation but where we may be tempted to say that the right explanation is "partly anthropic", anyway. For example, some of these "approximate degeneracies required for life" may depend on the detailed values of the quarks' bare masses. If such "approximate degeneracies (or other numerical accidents) needed for life" exist, we can't say that the right explanation is purely conventional because it still relies on the detailed values of the quarks' bare masses, which we can't calculate at this moment.

You may see that I am proposing that the transition from conventional to anthropic explanations could be continuous and gradual. If that's the case, I would like to know what is the parameter that controls the "percentage" – the parameter that decides whether the right explanation to a mystery is more conventional or more anthropic.

Some clues.

Whenever you adopt the assumption that some mystery may be explained anthropically, it allows you – and perhaps encourages you – to imagine that our visible Universe is fundamentally very messy, awkward, a Rube Goldberg machine. The anthropic principle is an excuse that allows some people to say that many things must simply be separately adjusted because that's needed for life. This description of a universal property of the anthropic principle also makes it clear that creationism is one of the extreme versions of the anthropic principle (regardless of the pro-God or anti-God rhetoric that may sound diametrically opposite: but I am trying to analyze the logical framework of the ideologies, not some superficial religious affiliations).

On the other hand, physics based on conventional explanations always tries to make the world as "unified" as possible – explain a maximum number of observed features of Nature as consequences of the same, small number of assumptions. A possible problem with this "opposite extreme", conventional physics as we've known it, is that it may seem incapable of explaining certain things such as the tiny cosmological constant or the very small Higgs mass.

A relevant quantity is the probability that a world that qualitatively agrees with ours follows from a conventional explanation or from the anthropic explanation. To be able to decide which explanation is more likely, we must guarantee that all these probabilities are numbers between 0 and 1 – but never exactly 0 or 1. For example, it is illegitimate to use the "extreme anthropic reasoning" that would allow you to "solve any problem". With that perspective, Christian creationism – but, to disappoint my Christian readers, every other miraculous story about the Universe as well – would be possible.

Nima Arkani-Hamed would say that it is legitimate to use the anthropic explanations to demystify the mysteries and to consider probabilities such as \(10^{-400}\), i.e. ten to the power of minus a "few hundred", acceptably large. There are many universes in the multiverse and their number is comparable to these numbers. On the other hand, creationism is invalid because it requires one to consider processes (known as miracles) whose probabilities are \(\exp(-10^{400})\) or so – even exponentially smaller – to be acceptable.

While it's true that \(\exp(-10^{400})\) is "qualitatively" smaller than \(10^{-400}\), you may still be worried about the discrimination against one of them. After all, both of these numbers are small and the transition between them is continuous. To defend Arkani-Hamed's perspective, you may try to refer to string theory that apparently allows us to estimate that the number of semirealistic vacua is of order "ten to the hundreds" – which arises from the fact that the most complicated Calabi-Yau manifolds have Hodge numbers of order "a few hundreds" as well.

Maybe it's the right argument. Maybe the counting of cycles of Calabi-Yau manifolds implies that "universe-creating" events with probabilities "ten to the minus few hundred" are acceptably likely while events whose probabilities are "ten to the minus ten-to-the-few-hundredth power" are unacceptable miracles. But I have some problems with this conclusion. After all, \(10^{-400}\) is still an extremely small number. The anthropic ideology de facto tells you to treat numbers such as \(10^{-400}\) as numbers of order one. But if that's the case, all the values of observables – and even probabilities of "individual events" – we may ever measure are of order one! (It's because we can't really measure anything with the accuracy \(\exp(-10^{400})\) or detect similarly small effects.) That would mean that no explanations are needed for anything that will ever become a part of science: the anthropic ideology is enough. This conclusion would clearly be invalid.
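As a side note of mine, one can make the gap between these two kinds of "small" numbers concrete. Neither \(10^{-400}\) nor \(\exp(-10^{400})\) fits into a floating-point number, so the little sketch below (plain Python; the numbers are just the illustrative values from the text) compares their orders of magnitude via logarithms of logarithms:

```python
import math

# p1 = 10**(-400): roughly "one vacuum among the ~10^400 in the landscape".
# p2 = exp(-10**400): a "miracle" probability.
# Neither fits in a float, so compare -log10(p) – the number of orders
# of magnitude below 1 – and take a log of that number once more.
orders_p1 = 400.0                              # -log10(p1)
log10_orders_p1 = math.log10(orders_p1)        # ~2.6

# -log10(p2) = 10**400 / ln(10); its log10 is computed without overflow:
log10_orders_p2 = 400.0 - math.log10(math.log(10.0))   # ~399.6

print(log10_orders_p1, log10_orders_p2)
```

So while both probabilities look "tiny", the miracle is smaller than the landscape probability by hundreds of orders of magnitude even on the doubly logarithmic scale – which is the "qualitative" difference mentioned above.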

So if we try to find a demarcation line between the conventional and anthropic explanations, we simply can't afford to declare the anthropic explanations the winner whenever the probabilities we need to explain are as small as \(10^{-400}\). These are already very small probabilities and even though the anthropic principle wants to claim it's able to "explain all these small miracles", we must resist and we must still give a chance to conventional explanations.

What's the right quantitative procedure to estimate – and to justify the estimate – whether it is legitimate to rely on the anthropic non-explanation? Is there a procedure at all? Do you have any ideas?

Wednesday 26 September 2012

Albert Einstein 1911-12, 1922-23

Several events related to Albert Einstein's life occurred in recent days and months. If you consider yourself a sort of Einstein fan, let me mention some of them.



First, you may finally buy Einstein's brain for $9.99 (or $13.99 now?), no strings or hair attached. See Google News or go to the iTunes AppStore.

Second, Caltech and Princeton University Press teamed up and released the 13th volume of Einstein's papers. It covers January 1922–March 1923 and you may already buy the book for $125 at the PUP website: it has over 900 pages. Einstein is travelling a lot, is ahead of his time, already (or still) speaks Hebrew in British Palestine ;-), and doesn't even mention his Nobel prize. Wired wrote about the new book.




Third, there was a conference of Einstein fans among relativists three months ago in Prague. It was a centennial one because Einstein was a full professor (for the first time!) at the local Charles University (German section: then named Karl-Ferdinands Universität) between 1911 and 1912. He left after the third semester, in July 1912, almost exactly 100 years ago.

You may want to read some of the dozens of presentations. I will recommend you the presentation by Prof Jiří Bičák, the main organizer and my undergraduate ex-instructor of general relativity:
Einstein’s Days and Works in Prague: Relativity Then and Now
It's a fun PDF file that shows a lot about the social context as well as the state of his thoughts about general relativity – which wasn't complete yet – at that time.



Source (EN). Click to zoom in.

He worked at 7 Viničná Street (at that time numbered 3; the name is related to wineries: Street View), which is a building of the Faculty of Natural Sciences these days. Of course, it was four decades before the Faculty of Mathematics and Physics was established but I know the place well because it's less than 500 meters (direct line) from the "Karlov" building of the Faculty of Mathematics and Physics, my Alma Mater.

In April 1911, he wrote this to his friend Grossmann:
I have a magnificent institute here in which I work very comfortably. Otherwise it is less homey (Czech language, bedbugs, awful water, etc.). By the way, Czechs are more harmless than one thinks.
As big a compliment to the Czechs as you can get. ;-) Two weeks later, he added this in a letter to M. Besso:
The city of Prague is very fine, so beautiful that it is worth a long journey for itself.
Bičák adds lots of fun stuff about the local reaction to Einstein's presence. But there's quite some physics he did in Prague – essential developments that had to occur for the general theory of relativity to be born. Einstein himself summarized the importance of his year in Prague rather concisely, in a foreword to the Czech translation of a 1923 book explaining relativity:
I am pleased that this small book is finally released in the mother tongue of the country in which I finally found enough concentration that was needed for the basic idea of general relativity, one that I have been aware of since 1908, to be gradually dressed up into less fuzzy clothes. In the silent rooms of the Institute for Theoretical Physics of Prague's German University in Viničná Street, I discovered that the equivalence principle implied the bending of light near the Sun and that it was large enough to be observable, even though I was unaware that 100 years earlier, a similar corollary was extracted from Newton's mechanics and his theory of light emission. In Prague, I also discovered the consequence of the principles that says that spectral lines are shifted towards the red end, a consequence that hasn't been perfectly experimentally validated yet.
Well, be sure that as of today, it's been validated for half a century – in an experiment that took place in another building where I have worked for 6 years (and yes, the Wikipedia picture of the building is mine, too).

The Czech translation I used to translate it to modern Lumo English was probably obtained by translating a German original and you will surely forgive me some improvements.

Note that it was a pretty powerful combination of insights: gravitational red shift as well as the light bending (observed during the 1919 solar eclipse) were discovered in Prague. It was years before Einstein had the final form of the equations of general relativity, Einstein's equations.

Today, people – including people who consider themselves knowledgeable about physics – often fail to understand that many insights may be physically deduced even if one doesn't possess the final exact form of the equations. Principles imply a lot. They may be logically processed to derive new insights. At the beginning of the 20th century, people like Einstein could do such things very well. Many people today would almost try to outlaw such reasoning – principled reasoning that used to define good theoretical physicists. They would love to outlaw principles themselves. They would love to claim it is illegal to believe any principles, it is illegal to be convinced that any principles are true.



Albert Einstein and Johanna Fantová, his Czech yachting buddy while in the U.S.

The derivation of the bending of light is a somewhat annoying argument and the right numerical factor may only be obtained if you're careful about the equations of GR. So while I was not sure whether Einstein got the right numerical coefficient in 1911–12, I felt that I wouldn't have trusted it anyway. Up to the numerical coefficient, the answer may be calculated from Newton's mechanics. (Well, later I searched for the answer and Einstein's numerical coefficient in 1911 was indeed wrong, the same as the Newtonian one, i.e. one half of the full GR result.)

Just imagine that you shoot a bullet whose speed is the speed of light – Newton's light corpuscle – around the Sun so that the bullet barely touches the Sun and you calculate how much the bullet's trajectory is bent towards the Sun due to the star's gravitational attraction. You integrate the appropriate component of the acceleration to find out the change of the velocity, take the ratio of this perpendicular velocity to the original component of the velocity, and that's the angle.

General relativity just happens to give you a result that is almost the same: well, it's exactly 2 times larger.
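If you want to check the Newtonian half of the answer numerically, here is a minimal sketch (plain Python; the constants are standard solar values, and the integral is taken along the unperturbed straight-line trajectory, as the argument above assumes):

```python
import math

# Newtonian deflection of a light corpuscle grazing the Sun, following
# the recipe above: integrate the perpendicular acceleration along an
# (approximately) straight trajectory and divide by the forward speed.
GM = 1.32712440018e20   # Sun's gravitational parameter [m^3/s^2]
R = 6.957e8             # solar radius = impact parameter [m]
c = 2.99792458e8        # speed of light [m/s]

# Midpoint-rule integration of a_perp = GM*R/(R^2+x^2)^(3/2) over
# x in [-2000R, 2000R]; dt = dx/c converts it to a velocity kick.
N = 200_000
x_max = 2000.0 * R
dx = 2.0 * x_max / N
dv = sum(GM * R / (R**2 + (-x_max + (i + 0.5) * dx)**2) ** 1.5
         for i in range(N)) * dx / c

theta_newton = dv / c                    # deflection angle [rad]
theta_closed = 2.0 * GM / (c**2 * R)     # closed-form Newtonian result

arcsec = math.degrees(theta_newton) * 3600.0
print(f"Newtonian deflection: {arcsec:.3f} arcsec")
```

The numerical integral reproduces the closed-form \(2GM/(c^2R)\), about 0.875 arc seconds; general relativity's full answer is twice that, roughly 1.75 arc seconds, which is what the 1919 eclipse expeditions tested.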

It's perhaps more insightful to discuss the derivation of the gravitational red shift where one may reasonably trust even the numerical coefficient and it is indeed right. His argument went like this (optimization by your humble correspondent).

Consider a carousel rotating by the angular speed \(\omega\). Objects standing at the carousel at distance \(R\) from the axis of rotation will feel the centrifugal acceleration \(R\omega^2\). They will also move by the speed \(v=R\omega\). Special relativity guarantees that their clocks will tick slower (time dilation), by the factor of \[

\frac{t_\text{at carousel}}{t_\text{at the center}} =
\sqrt{1-\frac{v^2}{c^2}} = \sqrt{ 1-\frac{R^2\omega^2}{c^2}} \approx 1 - \frac{R^2\omega^2}{2c^2} .

\] Observers in the rotating frame of the carousel will interpret the centrifugal force as a gravitational field. And because, from the rotating frame's viewpoint, the velocity of all objects standing on the carousel is zero – so the special relativistic time dilation doesn't exist in this frame – the slowing down of time must be a consequence of the gravitational field.

The coefficient determining how much the time is slowed down only depends on \(R\omega\). How is this quantity related to some observables describing the gravitational field? Well, the gravitational acceleration is \(R\omega^2\), as mentioned previously, and it may be integrated to get the gravitational potential:\[

\Phi = -\int_0^R \dd\rho \,\rho \omega^2 = -\frac{R^2\omega^2}{2}.

\] Note that the gravitational potential is negative away from \(R=0\) because the gravitational (=centrifugal) force is directed outwards so outwards corresponds to "down" in the analogy with the usual Earth's gravitational field. Now, we see that the gravitational potential depends on \(R\omega\) only as well so it's the right quantity that determines the gravitational slowdown of the time, i.e. the gravitational redshift. Substituting this formula for \(\Phi\), we see that\[

\frac{t_\text{at carousel}}{t_\text{at the center}} = \dots \approx 1+ \frac{\Phi}{c^2}.

\] So the gravitational potential \(\Phi\) is really related to the "rate of time" which we would call \(\sqrt{g_{00}}\) these days. Einstein could realize this fact several years before he could write down the equations of the general theory of relativity.
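As a quick sanity check of the derivation above, you may verify numerically that for any slow rotation the special-relativistic factor and the \(1+\Phi/c^2\) formula agree (a small Python sketch; the radius and angular speed are arbitrary illustrative values I picked):

```python
import math

# Check: for a point at radius R on a carousel rotating at omega,
# sqrt(1 - v^2/c^2) with v = R*omega should match 1 + Phi/c^2 with
# Phi = -R^2*omega^2/2, up to corrections of order (v/c)^4.
c = 2.99792458e8    # speed of light [m/s]
R = 100.0           # carousel radius [m] (arbitrary)
omega = 10.0        # angular speed [rad/s] (arbitrary)

v = R * omega
sr_factor = math.sqrt(1.0 - v**2 / c**2)   # special-relativistic dilation
Phi = -0.5 * R**2 * omega**2               # centrifugal potential
gr_factor = 1.0 + Phi / c**2               # gravitational-redshift formula

print(sr_factor, gr_factor)
```

For these values \(v/c\sim 3\times 10^{-6}\), so the \((v/c)^4\) discrepancy is far below double precision and the two factors agree to all printed digits.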

Because special relativity guaranteed that motion unexpectedly influences the flow of time and because by 1911-12, he fully appreciated that the accelerated motion and the inertial forces it causes may be physically indistinguishable from a gravitational field, he could see that the gravitational field influences the flow of time, too. And he could even extract the right formula in the Newtonian limit.

Of course, it's safer to work with the full theory. However, such "semi-heuristic" derivations are valid very frequently and you shouldn't just dismiss them, especially if the author of such arguments seems to know what he or she is doing.

And that's the memo.

P.S. Joseph sent me a link to a rather fascinating notebook of Einstein's, from 1912 or so, with handwritten puzzles and trivial tensor calculus that Einstein was just learning or co-discovering.

Street View: Antarctica, deep ocean, Alps, Mars, etc.

If you haven't played with Google Maps for some time, you may want to try some of the wonderful links below. Note that the Street View always allows you to "drag" and change your point of view – or press \(\langle\) and \(\rangle\) arrows on the photograph itself and walk a little bit further.

You may think that Street View doesn't allow you to enter the living rooms but at least in some cases, this opinion is wrong.




So try these links in Antarctica:
South Pole Telescope
Shackleton's Hut
Ceremonial South Pole
Scott's Hut
Cape Royds Adélie Penguin Rookery
Now, the ocean. A turtle was just swimming in front of the Google Minivan as it was happily driving on the ocean floor ;-).

More generally, try Street View Gallery: Ocean. You will find six spots including coral reefs. These organized pictures were added today.

Even more generally, try Street View Gallery. The collections cover the ocean, Antarctica, Scenic Hawaii, World Wonders Project, Amazon, Swiss Alps, seven continents, an art project, discover Israel, NASA, and so on, and so on.

Also, you should notice that the command buttons in the upper taskbar of the new Google Earth include a "planet icon" which allows you to choose the Earth, the Moon, the sky, and... Mars. Wow, I just discovered Nice, France on the Moon as well as Mars. ;-)

As you must have heard – or experienced – Apple introduced its new operating system iOS6 that replaced Google Maps by Apple Maps. Most bridges are broken, maps are not detailed at all, the Senkaku Islands that power some Sino-Japanese tension these days are doubled (in Apple's apparent diplomatic efforts to provide each Asian nation with one copy of the islands), and so on. Even Apple can sometimes screw something up. ;-)



Meanwhile, The Telegraph just wrote that some pictures that the Hubble Space Telescope took in 2004 were just released: 5,500 of the most distant galaxies in the XDF, the eXtreme Deep Field.

Witten's new trilogy on RNS diagrams

Witten is such a big guy in physics that I will probably not attempt to write a single article covering all his important contributions to physics, something I have done, and plan to do, with other inaugural Milner Prize winners. Instead, we may talk about specific new papers.

I have already mentioned Edward Witten's work on technical issues of RNS Feynman diagrams but because the full trilogy is now out, it may be appropriate to summarize the links and add a few comments.

There are three papers:
Short (review of the long one),
medium (background on manifolds),
long (the bulk).
This ordering is chronological, too. After we learned that the first two papers had 42 and 118 pages, respectively, we could guess what the length of the final paper would be. The answer is that the digit counting hundreds is linearly increasing, the digit counting tens is alternating between 1 and 2, and the last digits describe the coordinates of the point (2,8,5). I guess that no one had the right answer. ;-)




Now, what is Witten solving and what is new about these papers?

He is clarifying some technical issues of the procedure to calculate the scattering amplitudes in superstring theory using the perturbative method – to any order. He focuses on the Ramond-Neveu-Schwarz formalism with a manifest world sheet supersymmetry (but not manifest spacetime supersymmetry).

Because the world sheet has supersymmetry, and the number of supercharges is small enough, one is guaranteed to simplify her life if she talks in terms of superspaces all the time. Surprisingly, this hasn't been the case for the moduli space of Riemann surfaces. People would always reduce the discussion to ordinary moduli spaces with bosonic coordinates only. But we're dealing with a supersymmetric theory on the world sheet and its solutions or backgrounds are naturally supermanifolds, too.

Witten does his analysis in terms of the supermoduli spaces of super Riemann surfaces. And he feels good about it.



Using this method, he proves the gauge invariance of the amplitudes – i.e. the decoupling of the BRST-exact scattering states – as well as the right spacetime supersymmetry of the amplitudes whenever it is expected and the right infrared behavior. To do the latter, the infrared limits of the Riemann surfaces must be studied carefully. You may want to know that the word "infrared" appears 79 times in the long paper.

On the other hand, the word "ultraviolet" appears only 8 times and this underrepresentation has a good reason. The integrals representing the stringy scattering amplitudes don't have any ultraviolet regions at all! So there is not even a potential for ultraviolet divergences. This has been known for a long time. As Witten mentions, the most mathematically explicit or rigorous demonstration of this fact is the Deligne-Mumford compactification of the moduli spaces (or super moduli spaces) which shows that all "extreme limits" of the Riemann surfaces are infrared limits. There are no ultraviolet "extreme limits" and no other limits, for that matter. In some cases, you could think that there is an ultraviolet limit but you will find out that it has already been counted as an infrared one.

So the absence of ultraviolet divergences is a trivial matter, even at the rigorous level adopted by Witten. When you want to analyze potential divergences in string theory, it's always and demonstrably enough to focus on infrared divergences.

Witten demonstrates that to all orders in the string coupling, the massless modes have a nicely behaved, supersymmetric, gauge-invariant S-matrix that reproduces all the required long-distance effects of the effective field theory. Do you remember all the discussions about whether the amplitudes exist at all or whether string theory has any formulae for them? Witten surely closes this Smolinesque babbling even at the most rigorous level.

What remains somewhat open at this level of standards is a proper accounting for the external massive states whose masses are corrected by the stringy loops. Of course, it's pretty much known how to calculate these things but the methods haven't been converted to Witten's super moduli formalism so he doesn't have exact super moduli formulae for the mass corrections of the massive states.

Also, and it is arguably related to the previous paragraph, I say, the rigorous proof of unitarity of the resulting S-matrix hasn't been given in this formalism. While it's clear that unitarity boils down to splitting of Riemann surfaces into pairs by tubes in which pairs of vertex operators are inserted, the full treatment with all the glory and up to all orders hasn't been written down, especially because the vertex operators would have to include those for the massive states whose masses get corrected. Witten admits that the most economic available way to demonstrate unitarity is to show the equivalence between the RNS covariant formalism and the light-cone gauge Hamiltonian formalism (my favorite formalism, at least in the past, in which I learned to calculate in string theory for the first time in the early 1990s). With a Hermitian Hamiltonian, unitarity is pretty much manifest.

Witten's work is impressive but, just like with many things, I can't avoid thinking that it's right to classify it as maths. String theory's engine for calculating the amplitudes is in principle very simple and straightforward and you may learn it efficiently. If one writes 400 pages about some technicalities, it just doesn't match my idea of the underlying simplicity and naturalness of the rules. Of course, I may only say such a thing because I adopt certain semi-heuristic arguments that are OK for physicists. Mathematicians may expand these arguments to hundreds of pages of rigorous maths. In most cases, they confirm what the physicists have known all along. In exceptional cases, they find out that the answer is different.

Witten's new papers seem to follow the first scenario.

However, the reasons why I am sometimes dissatisfied with the length of Witten's papers go beyond the rigor. They seem to contain pretty much all the intermediate steps etc., so the "concentration of action" (in the Hollywood sense) is reduced and not much is left for the reader's own re-discovery. So while I think that Witten's papers are excellent, they don't quite match my preferred genre, which is dominated by excitement and the opportunity for the reader to rediscover things and find his or her own proofs (or disproofs) of the assertions.

Tuesday 25 September 2012

BBC: Who's afraid of a big black hole?

Another episode of the BBC 2 "Horizon" program about fundamental physics, featuring people such as Andy Strominger (in a dark classroom of the Jefferson Lab and in his office), was aired in November 2009.




Here's the 59-minute video:



At the beginning, they edit a few interviews so that all the physicists say that no one understands black holes. A little bit over the edge but funny.

After 5:00, they switch to an astronomer who explains what stars and black holes are. He believes that a bright star went supernova and then it became a black hole because we see nothing there. ;-)

They switch to theorists at 8:40, Kaku, Strominger, Tegmark... History of black holes from Einstein. Kaku under skyscrapers about GR and gravity. Funny ancient TV "popularizations" of GR. Comments about black hole – hydrodynamics link. Tegmark tortured by waterfalls; nothing as dramatic as my talk about black hole event horizons while skydiving. ;-)

At 16:05, Tegmark says that we know that you may perfectly survive the crossing of the event horizon; no firewalls here. Inner horizon of a rotating black hole. At 19:20, the singularity is presented as a bug or monster of GR, Kaku.

21:45, Strominger, singularity means we don't know what to do. Less concisely, Tegmark says the same thing. Einstein wrote a paper that black holes couldn't arise, Krauss. 23:50, X-ray observations of black holes. Reinhard Genzel, a guy from the Max Planck Institute who was looking for certain BHs. Well, the galactic center BH. Using motion of stars around it. Won a $1 million astronomy prize. He gave it away and bought a new car.

30:55, Ramesh Narayan of Harvard-Smithsonian is comparing the BH with others. I actually covered his paper in an astronomy course at Rutgers – my talk was exactly about the evidence that there was a giant BH at the center of the Milky Way (two-temperature plasma etc.).

Lots of stellar black holes.

35:30 GR bad for the small world. Need quantum mechanics. Why Krauss? ;-) 36:50, Andy says that to understand the final fate of BHs, QM will be needed. Krauss misinterprets QM: "a particle can be at many points at the same time". It just ain't the case. A particle may have nonzero probabilities to be at many points but we may still prove that there's just one point where the particle is, although it's unknowable in principle before the measurement.

At 38:30, Andy says that QM describes everything, one can't escape it. All objects are quantum and the world is a quantum world. Most of the time, QM and GR are in peace. But there's an arena where they are in conflict, high-density, small size – inside BHs. "Quantum gravity" is said for the first time around 40:00. Kaku and Lagrangians. He sketches some toy UV divergence and adds big words to it. 42:30, Andy also talks about the breakdown of GR.

42:00 BHs – problems become opportunities, a next key, Andy. Linked to the Big Bang mysteries (singularity). Narayan, Andy, Krauss add a few words. We have no clues what QG is. And no one has seen a BH. The cameraman plays with the physicists' eyes, deforms their words (like near a black hole) etc.

50:20, a new telescope guy, Shep Doeleman of MIT. Combines data from lots of telescopes on a supercomputer to get a single image. Huge increase in sharpness. Nerdy discussions with another empirical guy. Trying to see a horizon via shadows.

Sorry for these chaotic catchwords. It wasn't supposed to be a fully formatted text.

Verdict

It was a so-so program. I find it very paradoxical that they haven't even tried to cover any actual new theory from the last 40 years. I mean, there was no black hole thermodynamics, string theory, or information loss in the program. That's very paradoxical given the fact that Stephen Hawking is arguably the most recognized living scientist. What he's famous for among physicists has never really been covered by a popular program, or at least the number of such programs is infinitesimal.

And yes, I was annoyed by the highly repetitive occultist comments that everything about the black holes is completely mysterious and misunderstood. It's not really the case. Such programs help to reinforce the widespread laymen's misconception that physicists don't have a clue what they're doing and everyone could be employed as a physicist, too.

Did EPA test toxins on humans?

Steve Milloy of JunkScience.COM and pals have sued Lisa Jackson's EPA for having deliberately exposed unhealthy humans to harmful and lethal substances:
EPA Human Testing.COM, a special website about the story

The initial text over there

Canada Free Press, WUWT, PR Web, Tucson Citizen, NLPC story (Paul Chesser)
Milloy et al. claim to have gone through hundreds of pages obtained via the Freedom of Information Act. The documents imply, he says, that at least a dozen citizens were exposed to carcinogens and other harmful pollutants, including diesel exhaust and fine particulate matter (PM2.5).




I have no way to safely verify whether the allegations are right. If they're right, it's kind of shocking – emotionally – but one should still calmly think about whether such activities are necessary and whether they could be helpful. Milloy seems to suggest that those experiments on people have been used to determine which things shorten people's lives.



What do you think? Pictures such as one above are scary, aren't they? That's especially the case for those of us from Central Europe.

Monday 24 September 2012

Almost all particle physics papers will be free

Journals will sign a deal with libraries

In many other fields such as Earth sciences, people are dreaming about free access for everyone. In high energy physics, this dream is becoming a reality. After all, most high energy physicists have relied on arXiv.org, a freely accessible preprint server (see a NYT story about it), as their main source of information for more than two decades.

Nature tells us some details about the deal that will make the transition of journals to the free form possible.




In the article called Open-access deal for particle physics, Richard Van Noorden describes the contract between the publishers of 12 journals and a consortium of libraries that will make 90% of HEP papers freely available from 2014 – and it will pretty much be the case even earlier than that. Note that the Higgs discovery papers were freely published by PRL two weeks ago.

The consortium of libraries is called SCOAP3. The digit "3" is an exponent, a power, because the acronym is really the product SCOAPPP, which stands for the Sponsoring Consortium for Open Access Publishing in Particle Physics. It encompasses various HEP libraries, HEP funding agencies, and HEP laboratories.

This consortium will pay certain amounts of money to each of the 12 journals for every paper to guarantee that access will be free. The journal subscription fees will also be adjusted (reduced) in such a way that all the institutions should see pretty much the same expenses or profits after the transition. Meanwhile, humanity – I mostly mean readers (of physics) like you – should benefit because it will acquire free access to all the papers.

It's fun to sort a table from Nature containing the 12 journals and the price of making the papers free. You may interpret it as a kind of price of a paper. In the table below, I sort the journals according to the number of high-energy physics papers they published in 2011 and copy the price that each journal has negotiated for one paper. If you need to know the full names of the journals and not just acronyms, or their publishers, click "More" over here.

Journal acronym    2011 HEP papers   Price per paper
-----------------  ---------------   ---------------
PRD                2,989             $1,900
JHEP               1,652             €1,200
PLB                1,010             $1,800
Eur. Phys. J. C      326             €1,500
NPB                  284             $2,000
JCAP                 138             £1,400
PRC                  107             $1,900
Prog. Th. Phys.       46             £1,000
Adv. HEP              28             $1,000
Acta Ph. Pol. B       23             €500
New J. of Ph.         20             £1,200
Chinese Ph. C         16             £1,000

Note that the prices are in U.S. dollars, euros ($1.29), or U.K. pounds ($1.62). They range between roughly $650 and $2,000 per paper. You should realize it's the price for all readers, not just one reader. For some papers, it may seem like a lot of money. On the other hand, for good papers, it's a ludicrously low amount.

If the same money were paid to the author, the fees for papers couldn't cover the salaries of most physicists. Take some of the big shots who have 300 papers. Multiply it by $1,667, an estimated average fee per paper, and you get just $500,000 or so, a salary for these big shots for 2 years or so.
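The arithmetic above can be checked from the table itself. Here is a small sketch of my own (the dictionary keys are my abbreviations of the journal names; the conversion rates are the ones quoted above, $1.29 per euro and $1.62 per pound) that computes the total paper count and the paper-weighted average fee:

```python
# Convert each journal's negotiated fee to USD and compute a weighted
# average over the 2011 HEP paper counts from the table above.
EUR, GBP = 1.29, 1.62  # USD per euro / pound, as quoted in the text

# (2011 HEP papers, fee converted to USD) per journal
journals = {
    "PRD": (2989, 1900), "JHEP": (1652, 1200 * EUR), "PLB": (1010, 1800),
    "EPJC": (326, 1500 * EUR), "NPB": (284, 2000), "JCAP": (138, 1400 * GBP),
    "PRC": (107, 1900), "PTP": (46, 1000 * GBP), "AdvHEP": (28, 1000),
    "ActaPhysPolB": (23, 500 * EUR), "NJP": (20, 1200 * GBP),
    "ChinPhC": (16, 1000 * GBP),
}

total_papers = sum(n for n, _ in journals.values())
total_cost = sum(n * fee for n, fee in journals.values())
avg_fee = total_cost / total_papers

print(f"{total_papers} papers, ${total_cost:,.0f} total, ${avg_fee:,.0f}/paper")
```

With these rates, the weighted average comes out around $1,800 per paper, in the same ballpark as the $1,667 estimate used above, and the whole 2011 output of the 12 journals would cost about $12 million to free.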

It's of course a very subtle question what the actual price of the work and results behind a physics paper is. I would stress that it depends badly on the paper. Everyone would probably agree with this point. However, only people who actually understand physics may meaningfully say which papers are valuable and which papers are not. I think that the "value to mankind" behind many key papers may be quantified in billions of dollars while some rubbish papers may be balanced by a few pennies (but I mean worse papers than the average papers in those 12 journals!).

Don't overlook that the agreed-upon fee for a paper's freedom is not only independent of its quality and later impact (which is unknown at the moment of publication and can only be estimated from the author's name and other rough criteria) but even of its length. Any incorporation of the length and other parameters into the "freedom fees" would probably be counterproductive because it could push the journals to prefer longer, talkative papers etc. Well, the current system really does the opposite – it supports the publication of many short papers (rename each section to a paper and your journal will make a profit) – but please, journals, don't read this insight of mine. ;-)

Can other disciplines emulate the example of high energy physics? HEP has had some tradition of free publishing. However, what I find essential for the transition to be politically feasible is that most papers are not being "bought" by individual readers because HEP is simply too esoteric a discipline. Most people, including those with science degrees, are only capable of reading popular books by shameful aggressive cranks such as Woit and Smolin but they can't actually follow the science.

In fields where the profit from selling papers comes from many readers, i.e. in "easy enough disciplines", it could be difficult to rearrange all the money flows which is needed to make the papers free. One may express the same idea in a different way: high energy physics is hugely "non-commercial" – almost all the money that circulates in it comes from some government-related institutions so they may just "rename" the money flows in between them. In publishing that depends on many readers' desire to buy the products, making these products free is always a financial problem.