Tuesday 31 July 2012

Prof Matt Strassler and global warming

Prof Matt Strassler, who blogs at ProfMattStrassler.com, has made an excursion into our favorite topic, which is also an inferior scientific discipline, namely climate change science:
Humans, Carbon Dioxide and Climate
Twistor 59 instantly conjectured that he is Dr Luboš Motl because he blogs about both physics and the climate. As you will see, this conjecture may be intriguing but it is incorrect; the difference is that Dr Luboš Motl doesn't write nonsense about climate change. What do I mean?


Prof Matt Strassler (yes, I will repeat the titles as often as Prof Matt Strassler does so that even the people who like titles will soon be driven up the wall by this pretentious way to talk about the people) discusses the recent bold claims by Prof Richard Muller of Berkeley.

Prof Matt Strassler describes Prof Richard Muller in this way:
But he’s one of the most famous, now, because he was also a loud skeptic not long in the past.
Except that a rudimentary knowledge of the facts is enough to realize that the proposition above is false. Prof Richard Muller himself described his past attitudes toward climate change in an interview for The Huffington Post:
It is ironic if some people treat me as a traitor, since I was never a skeptic – only a scientific skeptic. [...] But I never felt that pointing out mistakes [in Gore's movie] qualified me to be called a climate skeptic.
So the claims that Prof Richard Muller has been a climate skeptic are indefensible; they are malicious lies, a part of an orchestrated propaganda campaign. Let me admit that because of Prof Matt Strassler's references to blog articles that have irritated him, I find it hard to believe that Prof Matt Strassler hasn't yet encountered the information that Prof Richard Muller was never a climate skeptic. It seems clear to me that he is spreading this misinformation deliberately, fully aware of its invalidity.




Prof Matt Strassler also argues that no one in the world can possibly read fast enough to get through the two-page summary of Prof Richard Muller's newest claims. That's a rather remarkable conjecture of yours, Prof Matt Strassler. Well, I admit that I am among those who have these supernatural powers: I have been able to read that summary as well as the "more technical results, meant primarily for scientists" that are presented on that web page (click the link). In fact, it took me just a few minutes.

It turns out I am not the only "psychic" with these megafast reading skills. As I summarized in my blog post about Prof Richard Muller's newest claims, Prof Michael Mann and Prof Judith Curry, along with Dr William Connolley and Mr David Appell, have been able to read about Prof Richard Muller's newest "results", too. They also concluded something that is obvious to those who have at least looked at the Berkeley Earth web page, namely that Prof Richard Muller has just approximated the temperature curve by a curve emulating the carbon dioxide concentration. He's been satisfied with this approximation so he decided that the suggested mechanism has to be the right explanation. Much like those other readers, I have concluded that Prof Richard Muller's justification of the man-made character of the temperature variability is childishly naive. This closes the story. I won't study his new remarkable claims again. I know everything about them and so do the other critics.
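The curve-fitting objection can be illustrated with a toy calculation. Everything below is synthetic – made-up numbers, not the Berkeley Earth data or code – and is only meant as a sketch of the logical point: a smooth warming trend is fit almost equally well by a log(CO2)-shaped curve and by a plain linear ramp in time, so a good fit to the CO2 curve by itself cannot single out CO2 as the cause.

```python
import numpy as np

# Synthetic illustration: a linear warming trend plus noise, and a toy
# smoothly rising CO2 series (assumed round numbers, not real data).
rng = np.random.default_rng(0)
years = np.arange(1850, 2011, dtype=float)
t = years - 1850.0

temp = 0.005 * t + rng.normal(0.0, 0.05, t.size)   # synthetic anomaly, K
co2 = 280.0 + 0.004 * t**2                         # toy CO2 rise, ppm

def r_squared(y, regressor):
    """Least-squares fit y ~ a + b*regressor; return coefficient of determination."""
    A = np.column_stack([np.ones_like(regressor), regressor])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

r2_log = r_squared(temp, np.log(co2 / 280.0))   # "CO2 explains it"
r2_lin = r_squared(temp, t)                     # "time explains it"
print(round(r2_log, 3), round(r2_lin, 3))       # both fits are nearly as good
```

Since both regressors rise smoothly and monotonically over the period, both fits look "convincing"; the goodness of fit alone carries essentially no information about causation.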

Prof Matt Strassler doesn't like the fact that Prof Richard Muller's claims about the man-made origin of the temperature variability are considered bogus, so he proposes to spend a very long time reading those two pages and to delay all verdicts:
Let’s wait a few days and see if anyone makes more intelligent remarks about its limitations.
The difference between Prof Matt Strassler on one side and the other professors, doctors, misters, and mistresses on the other side is that Prof Matt Strassler is only "waiting" while the other side has actually studied what Prof Richard Muller was saying and we saw that it was worthless from any scientific viewpoint. There isn't any evidence, especially not new evidence, in his claims and graphs that would demonstrate the man-made origin of the temperature variations in the recent centuries.

So there can't be any more intelligent remarks than those acknowledging that Prof Richard Muller's claims are pure trash; the remarks that his claims are pure trash are already the maximally intelligent and maximally accurate remarks one can make about them.

Prof Matt Strassler's second and third paragraphs are "beautifully" opposed to each other. In the second paragraph, he says that it is illegitimate to criticize Prof Richard Muller so quickly because two pages written by Prof Richard Muller have to be read for at least several centuries:
Immediately, of course, he’s being lambasted for everything he’s done, on all conceivable grounds.  Discredit him as fast as possible, is the approach – instead of let’s look at the study carefully and see if it does or does not have flaws. Well, I’m sure most of those attacking him right now haven’t had time to read his study yet, because it was just posted. 
However, as Prof Matt Strassler shows in the third paragraph, it is actually possible to quickly lambast and discredit those who actually realize that Prof Richard Muller's claims are indefensible. One of them is Prof Judith Curry whom Prof Matt Strassler describes in the following way:
One interesting question here is [Prof] Judith Curry, who disagreed with the majority view on [Prof] Muller’s panel.  I’d like to understand her point of view more clearly, though honestly she didn’t make a good impression on me with her objections last November, which seemed thin and statistically flawed.  My impression is that this time she views the approach to the data used in the most recent [Prof] Muller et al. paper as disturbingly simplistic. Maybe it is.  The data is open for any expert to use, so this is an objection that I would think could be settled.  If she’s right, more complex and complete models applied to the data should give qualitatively different results; let’s see if they do.
I added the titles so that it doesn't seem like Prof Matt Strassler is only adding the titles in front of his name to spuriously elevate his credibility in the eyes of the most superficial readers, relatively to Prof Judith Curry and others.

So we have learned from Prof Matt Strassler that she didn't make a "good impression" because her objections seemed "thin" and statistically "flawed". That's enough; no additional data or explanations are needed for Prof Matt Strassler's verdict. I wonder how long Prof Matt Strassler needed to quantify this impression from last November.

Incidentally, it is a complete logical fallacy to suggest that if Prof Judith Curry's criticism of Prof Richard Muller's arguments is right, more complex and complete models applied to the data must give qualitatively different results. No such dependence exists. Prof Judith Curry hasn't made a claim about the answers to the biggest climate-related questions in her criticism. Her criticism was about a much more modest point. Prof Judith Curry, much like every other remotely sensible person, has only realized and articulated the self-evident fact that the propositions presented by Prof Richard Muller as a "proof" of man-made global warming are simply not a valid proof. This observation of the incorrectness of Prof Richard Muller's arguments doesn't depend on any future models, especially not more complex and more complete ones, and I don't even discuss the utterly anti-scientific suggestion by Prof Matt Strassler that more complex models should be the ultimate arbiter of the truth in natural sciences.

Prof Richard Muller's claim about his graphs and "proofs" of anthropogenic global warming is analogous to a "proof" (or "disproof") of the Riemann Hypothesis saying that the hypothesis must be true (or false) because 111 is approximately a prime integer. Whether it is approximately a prime or not, whatever this sequence of words means, has no implications for the validity of the Riemann Hypothesis.

I can't leave the sentence "let's see if they do" without a comment, either. Many of us are actually working hard to "see if objects do" various things. On the other hand, Prof Matt Strassler only uses phrases such as "let's see something" as a rhetorical sleight-of-hand to dismiss evidence and delay any verdict he finds inconvenient. That's also why Prof Strassler has been wrong about pretty much every single "hot" question whose resolution was just becoming available, about every question on the "cutting edge". Five months after the key error in the "superluminal" OPERA neutrino measurement was found and three months after it was made public, he was still denying that the claim about the "superluminal" observation had been invalidated. In the same way, he denied the existence of the 125-126 GeV Higgs-like boson as recently as half a year after its existence was demonstrated at a very high confidence level to everyone who actually followed the CERN experiments, who knows some statistics, and who can rationally and impartially draw conclusions out of the statistics.

In this case, Prof Matt Strassler wants us to delay verdicts about Prof Richard Muller's "proof" of man-made global warming. But the verdict has already been determined. It's been determined that Prof Richard Muller's "proof" is childish nonsense. One may repeat the title Prof five thousand times and ask everyone to delay the verdict by additional days or years or centuries and one may even forget that Prof Richard Muller has said anything, but his assertions will still be rubbish.

One may repeat the title Prof one million times but it will still be clear to all historians of science, as well as everyone who simply studies the sources, that the invalidity of the OPERA "superluminal" observation had been publicly known since February 2012 while Prof Matt Strassler stood on the wrong side of history for at least three more months; in the case of the existence of a Higgs-like boson near 125-126 GeV, he stood on the wrong side of history for half a year. He is standing on the wrong side when it comes to Prof Richard Muller's claims, too, and he is working hard to prevent the truth from becoming clear to everyone, as he chastises everyone who has managed to see that the evidence has already spoken. But much like in the case of his superluminal hype and Higgs denial, that can only work temporarily.

Uncontrolled experiments

Finally, I want to focus on Prof Matt Strassler's key claim about "uncontrolled experiments" that is presented in the last paragraph of his blog entry. The paragraph starts like this:
Unlike many bloggers, I’m not willing to pontificate on a subject in which I’m not expert. 
If I were as ignorant about this scientific discipline as Prof Matt Strassler, and if my judgments were similarly reduced to irrational impressions and emotionally and ideologically motivated attempts to selectively delay the realization of the truth, I would be much more successful than Prof Matt Strassler in my efforts to use the opportunity to remain silent. Prof Matt Strassler hasn't been successful in these efforts, which is why he erred and posted his rant about a topic he completely misunderstands. His website has been around for a sufficient amount of time for us to figure out that, as a source of information about hot topics, it is completely untrustworthy and usually describes things in the opposite way than how they actually are.

But let's return to the experiments, because I think that many other people have raised exactly the same talking point in the past.
But personally, I think this whole debate is missing the point anyway.  What we are doing, folks, in dumping all of this carbon dioxide into our atmosphere is an uncontrolled and difficult-to-reverse scientific experiment on our planet… the only one we’ve got.  (Hmmm…  let’s see what happens to the Earth if we turn up the CO2! Gosh, won’t that be interesting to watch!) Would you do an uncontrolled scientific experiment inside your own home?  You probably wouldn’t think it very smart to do that.  And if a bunch of apparently intelligent people started warning you this might turn out disastrously — even if other people who are apparently intelligent disagreed with them — you might consider that given the uncertainty, the question might turn on an issue of prudence.  Perhaps it would be wise to get control of this experiment before it has a chance to take control of us.
We are producing carbon dioxide; our biological ancestors have been doing the same thing for billions of years. This uncontrollable experiment is called "life". It's a part of a broader uncontrollable experiment that started 13.7 billion years ago which may be called "the life of the Cosmos".

The giant uncontrollable experiment "the life of the Cosmos" has had (and still has) many equally uncontrollable sub-experiments that are called "life on Earth", "animal life", "life of mammals", "life of primates", "human civilization", "industry powered by fossil fuels", and others. Those persistent uncontrollable experiments were occasionally interrupted and interpolated by shorter events such as the great oxygenation event 2.4 billion years ago. I emphasize that all of these experiments, processes, and events are fundamentally uncontrollable. The evolution in all these experiments depends on a huge number of factors and no single agent can be sure about the future outcomes – and, even more obviously, no single agent can "control" what the outcomes of the global experiments will be (even though "ambitious" people such as Mr Adolf Hitler and Mr Ioseb Jughashvili have unsuccessfully tried the same thing that Prof Matt Strassler proposes as well, namely to control the world).

Despite Prof Matt Strassler's suggestion that "irreversibility" is the same thing as "catastrophe", almost all these experiments are irreversible; in fact, the second law of thermodynamics guarantees that all macroscopic processes are irreversible because the entropy keeps on increasing. It's not just the entropy growth that makes the phenomena in the world irreversible. There are many other sources, manifestations, and interpretations of the irreversibility – for example, life is evolving towards ever more complex life forms.

If the aforementioned list of uncontrollable ongoing experiments doesn't make Prof Matt Strassler look like a breathtakingly ignorant fool in your eyes (so that you are telling yourself: holy crap, this Prof Matt Strassler is really ignorant about the very existence of the world; how could he have written something this dumb?), let me know and I can send you a much longer list of ongoing uncontrollable experiments. Almost everything that happens on the Earth and in the Universe deserves to be called an uncontrollable experiment. These uncontrollable experiments are changing the state of the world at virtually all time scales we may talk about, and whoever isn't able to see uncontrollable experiments in the world around him or her is just a stunningly uninformed fool who probably lives hermetically isolated in an ivory tower, shielded from uncontrollable experiments – i.e. from events and processes in the real world.

Do we perform uncontrollable experiments at home?

I surely do. I do lots of experiments that are hopefully both uncontrollable and unsupervised by anyone. And I am sure that billions of people are doing the same thing, and they are hoping as much as I do that their acts are not being controlled by anyone. Aside from examples that naturally have to be filtered out at a polite blog, we are doing lots of more innocent experiments (an hour ago I cooked mushrooms we picked in the morning according to a new recipe – just delicious).

In fact, I have even performed an uncontrolled experiment called "global warming on Earth" in my own apartment after I received a Christmas gift from Lisa Randall, a propagandist AGW toy for children. ;-)

And even without any toys, we are also using our homes for the very same experiment that Prof Matt Strassler would like to centrally prohibit at the global scale: we are increasing the concentration of carbon dioxide. One difference is that the experiments we do at home are millions of times faster and more efficient than the analogous experiment we are performing at the global scale. Instead of a century, we need an hour to increase the concentration of carbon dioxide by 100 ppm.
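The one-hour figure can be checked with a back-of-envelope estimate. The numbers below are assumed round values for illustration, not measurements: a resting adult exhales something like 20 litres of CO2 per hour, and we take a closed 50 m³ room with no ventilation.

```python
# Back-of-envelope estimate of the CO2 rise from breathing in a closed room.
# All inputs are assumed round numbers, not measurements.
exhaled_co2_l_per_hour = 20.0      # assumed output of one resting adult
room_volume_l = 50.0 * 1000.0      # 50 m^3 room expressed in litres
occupants = 1

rise_ppm_per_hour = occupants * exhaled_co2_l_per_hour / room_volume_l * 1e6
print(rise_ppm_per_hour)           # hundreds of ppm per hour
```

With these assumed inputs, a single person raises the concentration by roughly 400 ppm per hour – which is why a sealed bedroom or lecture hall outpaces the global experiment by orders of magnitude.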

For example, Will Happer – oops, I just made a carefully planned mistake. Let me start over.

For example, William Happer, the Cyrus Fogg Brackett Professor of Physics at Princeton University, gave a talk at Berkeley – Prof Richard Muller was in the audience – and while the concentration of carbon dioxide in the lecture hall was as uncontrolled as it is in all of our homes across the entire world, he changed one adjective: he made it "uncontrolled but monitored". I guess that most TRF readers don't even monitor the CO2 concentrations in their apartments and houses. Within an hour, the concentration grew from 650 ppm to 730 ppm and it continued to increase.

No one died, no one felt sick. That's not surprising. The most sensitive humans only start to feel strange when the CO2 concentration reaches 10,000 ppm, i.e. 1% of the volume of the air (equivalently, because of the ideal gas equations, 1% of the number of molecules). And only when the concentration gets to 50,000 ppm, i.e. 5% of the volume of the air (120+ times the concentration we have in the atmosphere today), is there a consensus that toxic levels have been reached that make breathing lethally dangerous within an hour. One may also mention that the detrimental nature of CO2 concentrations around 20,000 ppm is really not due to the excess of CO2 but due to the associated lack of oxygen.
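A quick sanity check of these figures (for an ideal gas, ppm by volume equals ppm by molecule count; the ambient value of roughly 395 ppm is the approximate 2012 atmospheric concentration):

```python
# Unit check of the CO2 figures quoted above.
toxic_ppm = 50_000       # level broadly agreed to be dangerous within an hour
ambient_ppm = 395        # approximate 2012 atmospheric concentration, ppm

percent_of_air = toxic_ppm / 10_000        # 1 percent = 10,000 ppm
times_ambient = toxic_ppm / ambient_ppm    # how far we are from "toxic"

print(percent_of_air)    # 5.0 percent of the air's volume
print(times_ambient)     # more than 120 times today's level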

Animals such as humans don't directly care about CO2 one way or the other. However, we depend on CO2 indirectly in many ways and in all of these ways, CO2 is a vital gas that is either important for something we indirectly need or it is an inseparable side effect (or "sign") of processes that are important for us. First of all, most plants need at least 150 ppm of CO2 in the air to survive and grow. It's not a coincidence that this concentration is just slightly below the minimum concentrations the Earth has experienced in recent millions of years, namely during the ice ages. Why? Because it's just damn hard for plants to adapt to low CO2 concentrations – low amounts of plant food in their environment. So they only adapted to as low concentrations as needed, but those plants that weren't able to lower their tolerated levels to 180 ppm or so went extinct during the ice ages. It's that simple. The plants that survived simply have to sustain 180 ppm, otherwise they wouldn't be here, but they can only survive slightly harsher, i.e. lower, concentrations than that.

Nature is cruel but it works. However, the detailed consequences of Nature's workings depend on the initial or immediate conditions. Nature looks different at different moments; it keeps on evolving. That's the reason behind the existence of time; time is what allows Nature and things to change and evolve – and yes, it may sound paradoxical, but evolution (of anything) has become the primary enemy of various left-wing ideologues, something that isn't natural, they say. Billions of years ago, almost all life forms would have agreed that oxygen was not only a corrosive gas but also a metabolic poison for most cellular reactions (it's still the case but we, creatures that breathe oxygen, no longer emphasize this fact much).

Oxygen used to be the shared enemy.

However, different life forms that love oxygen and depend on it have emerged. Plants – our current allies and our direct or indirect food – started to produce oxygen and they simply exterminated everyone who wasn't compatible with an atmosphere containing oxygen. They were not asking about uncontrollable experiments with oxygen, and if they had had the intelligence to do so, they would have answered that the uncontrollable experiments should continue, much like they had been continuing for the previous billions of years. It's good that they continued, otherwise we – and other modern life forms that are compatible with oxygen – wouldn't be here today. The current life on Earth considers both oxygen and carbon dioxide to be essential gases. We can't do without them. CO2 is still the rarer of the two, so it is a precious resource for plants. The more CO2 our atmosphere has, the better. Mankind may face some rather serious problems if it stops using fossil fuels and the CO2 concentrations significantly drop within the following 50 years; agriculture will become harder.

But even if there were something wrong about higher concentrations of CO2 – I've explained that an atmosphere with much more CO2 in it is just fine both for animals and plants (the former don't care; the latter really love it and need it) – it would still be true that the experiment called "life on Earth" is uncontrollable. If CO2 were an inseparable product of the life processes of one life form but were killing another life form, well, the endangered life form would either have to adapt or it would have to convince or kill the "culprit" and stop the threat. You can't expect the "culprit" to go on a suicidal mission to allow somebody else to live. Life doesn't work that way.

Needless to say, these comments are completely hypothetical in the case of CO2 because CO2 doesn't endanger anyone's life whatsoever – it's just a hostage in irrational talking points about intolerable "uncontrollable experiments" that demagogues such as Prof Matt Strassler love to spread. But if you had a problem with some gases that appear everywhere in the environment and that accompany the normal life of others, you would have to adapt, fight and stop the threat, or die. You can't expect those who endanger your convenience to kill themselves or stop doing things they are free to do.

Prof Matt Strassler recommends the suicidal solution for our civilization because "a bunch of apparently intelligent people" say that the uncontrolled experiment might turn out disastrously. But this is not a sufficient condition for a rational person to pay attention to such proclamations. A rational person would only pay attention to such warnings if they came from people who are not only intelligent but also rational and impartial, lacking incentives that affect them more intensely than the passion for the truth. No such climate alarmists exist on this planet as of today.

But even if they existed and some rational people decided to obey the recommendations by these hypothetical intelligent yet honest climate alarmists, they could only "control" their own decisions, not the decisions of other people who may have different attitudes to such recommendations. The other people will behave differently. You may commit suicide – or stop using fossil fuels, directly or indirectly – but it's foolish if you expect that you have the "right" to change it to a suicide attack combined with mass homicide – or "force" other people to stop using fossil fuels, too.

So if someone found out that 800 ppm of CO2 were deadly for all climate fearmongers in the world – this is nonsense in the real world, as quantified above – and one needed to stop using fossil fuels to prevent the concentration from reaching 800 ppm, of course all other people who are not climate fearmongers would continue with their business as usual because it's an important part of their lives (look at the mess in India today, when 0.7 billion people were without power). If the laws of Nature want to create a better world by sending the concentration of climate fearmongers to zero as the CO2 concentration approaches 800 ppm, then Nature has the undeniable right to evolve (well, improve) the world in this way. It has made millions of similar moves (well, improvements) in the past and we wouldn't be here without them.

And that's the memo.

Nine physicists win $27 million in total

The first Milner fundamental physics prize for advances in delving into the deepest mysteries of physics and the universe



One million dollars. Each winner has received three similar piles of paper trash.

Yuri Milner, a physics graduate school dropout, earned a few bucks [interview] via Internet companies such as Facebook, Zynga (yes, I was just playing Mafia Wars for a few minutes), and Groupon, and created a new prize:
9 Scientists Receive a New Physics Prize (The New York Times)
Each of the nine winners has won $3,000,000, more than twice the Nobel prize. The choice of the winners is very sensible; the selection is impressive, showing that Yuri Milner still understands what's shaking.




The full list of winners includes:
Nima Arkani-Hamed
Juan Maldacena
Nathan Seiberg
Edward Witten
Alan Guth
Andrei Linde
Alexei Kitaev
Maxim Kontsevich
Ashoke Sen
This prize will be awarded every year (lots of bucks, indeed) and new winners will be chosen by the previous ones. He must be very rich, although sources estimate his wealth at "only" $1 billion; if I were giving away $3 million every year, I would become hungry sooner than 300 years later. ;-)

It's a very good selection, not only because I know most of the new multimillionaires in person. (I can't recall ever talking to Alexei Kitaev (the father of the topological quantum computer concept) but I've surely talked to everyone else – and in most cases, many many times.) Concerning the rumors that I was the person who was selecting the winners, I hope that you understand that I am not allowed to say whether the rumors are true.



Alan Guth's office at MIT before he received the prize. Please superimpose this picture onto the picture at the top to get an idea how Alan Guth's office looks today. ;-)

Alan Guth is an ex-student of my 2004-2005 string theory course (he always had the best questions even though he slept through most of the classes, but I didn't take it personally; Alan Guth has already announced that his bank charged him $12 when those three millions were added to his account), Nima Arkani-Hamed is a long-time ex-colleague and co-author of mine (I don't want to make it sound much more personal than that because it would sound like licking the buttocks of people who became multimillionaires), and so on. At Rutgers, I worked next to Maldacena and Seiberg for quite some time.

Of course, all of the winners are theorists and most of them are string theorists. I guess that they may choose more general physicists, too. I think that each of them may deserve a blog entry – or several – describing his major contributions to physics.

Congratulations and thanks to Mr Milner for donating several bucks for a prize that seems to start with stellar names, indeed! The prize has instantly become the most lucrative academic prize in the world, beating the Templeton Prize and the Nobel Prize combined.

It's a topic for deep philosophical debates – and your comments – whether or not such huge amounts of money actually help the recipients improve their creativity in the future. I have some doubts about this particular ability of money; your humble correspondent may be close to Grigori Perelman's idea about the optimum amount of money available to a thinker.

However, I have no doubts that physics needs to gain more authority in society, and creating multimillionaire physicists is a way to do so because most ordinary people understand the concept of money even if they fail to understand the value of physics. This change of the atmosphere may be immensely good for society – helping mankind much more intensely and much more permanently than billions spent by other billionaires on random charities. It's plausible that some of the young people who are starting to work on string theory today will do so mostly because of their dreams of winning the Milner prize in the future. While it's not the most innocent and purified motivation, I still think it's a good thing if that's how many more smart people will be thinking.

There are a few obvious people who deserve a prize with this description, including Stephen Hawking. This gentleman could be another example, beyond the list of nine physicists at the top, of the important fact (also explicitly stated by Yuri Milner) that people often discover something that is quite obviously true, deep, and spectacularly important but can't get down-to-Earth prizes such as the Nobel prize because their findings are ahead of their time.

There's also a $100,000 prize for young emerging stars. The smartest young TRF readers may find this amount of money helpful, too.

Monday 30 July 2012

Have Muller or Watts transformed the AGW landscape?

I don't think so.

But let me add a few more words.

In recent days, we witnessed two major salvos in the climate wars as well as many minor repercussions. Both of them are claimed to be equivalents of the Battle of Stalingrad in the climate wars. A frustrating aspect of the hype surrounding both salvos is that although they came from opposite camps, they are very analogous anyway.

I will start with Richard Muller – who seems to be the worse example of the two – and continue with Anthony Watts.




Richard Muller of Berkeley wrote an op-ed in the New York Times
The Conversion of a Climate-Change Skeptic
in which he claims to have been a skeptic who saw the light and is now not only sure that global warming is real but that it's man-made, too. Tons of propagandist sources claim that a major denier took his reversal a step further, and so on.

Needless to say, this interpretation is a flagrant lie. It's just a dishonestly manufactured story about a "revelation" that never existed. Last year, Richard Muller himself said to the Huffington Post:
It is ironic if some people treat me as a traitor, since I was never a skeptic – only a scientific skeptic.
So either that proposition or the title and the bulk of his newest NYT op-ed is proof that Muller is a liar. Which Muller was telling the truth? Do we care? Is there an answer at all?



The only climate-related issue he's been skeptical about was Michael Mann and his fraudulent hockey stick. Muller was right. But that doesn't allow Muller to claim that he was skeptical about the global warming doctrine itself and it doesn't promote Muller to an überclimatologist who has the final word.

These comments are about Muller's ego and the cult of personality he would love to extend outside the Berkeley lecture halls attended by naive undergraduates at this traditional U.S. Academia's hotbed of Marxism. Someone wisely said:
My view is that Muller's efforts to promote himself by belittling the collective efforts of the entire atmospheric/climate research community over several decades, though, really does the scientific community a disservice. Its great that he's reaffirmed what we already knew. But for him to pretend that we couldn't trust this entire scientific field until Richard Muller put his personal stamp of approval on their conclusions is, in my view, a very dangerously misguided philosophical take on how science works. It seems, in the end – quite sadly – that this is all really about Richard Muller's self-aggrandizement :(
Well, I improved the word "wisely" a little bit to make this paradox sound a little bit stronger; of course I think it's silly to suggest that what is pompously referred to as the "atmospheric/climate research community" should be taken too seriously. But I agree that if you take the validity of the claims by this community for granted, it is preposterous to consider Richard Muller a final arbiter.

Anyway, the awkward aspect of the quote above is that it comes from Michael Mann, the very source of Richard Muller's temporary moral capital that he earned a few years ago and overspent in the following years. 

I also think it's preposterous for Richard Muller to place himself above the other alarmists. In my opinion, he is just another alarmist – one who differed from others by having had a conflict with Michael Mann in which Muller happened to be right. But that's it. It doesn't give Muller the moral right to place himself above everyone else who is dealing with the climate. Whether he happens to be right or wrong on particular issues, he's a rather average person in most respects that are relevant for the climate science.

In fact, there are issues in which Muller may be even more naive than an average alarmist. Hardcore fearmonger and annoying e-mail spammer David Appell rightfully wrote (another shock) the following thing about Muller:
Attributing climate is more like figuring out the structure of DNA than it is like figuring out the laws of quantum mechanics – simple curve-fitting (“exponentials, polynomials”) doesn’t cut it. In fact, it makes you look kind of foolish.
Even the top U.K. Green Party's Wikipedia climatic distorter William Connolley said that Muller's statements appear absurdly naive. Judith Curry, a lukewarmer and former BEST collaborator of Richard Muller who refused to be a part of similar nonsense, said the following about the attribution "proven" by hot air:
Their latest paper on the 250-year record concludes that the best explanation for the observed warming is greenhouse gas emissions. Their analysis is way oversimplistic and not at all convincing in my opinion. There is broad agreement that greenhouse gas emissions have contributed to the warming in the latter half of the 20th century; the big question is how much of this warming can we attribute to greenhouse gas emissions. I don’t think this question can be answered by the simple curve fitting used in this paper, and I don’t see that their paper adds anything to our understanding of the causes of the recent warming.
One could argue that Muller's "touching story" of a former skeptic who would suddenly make a difference was planned from the beginning. Muller should be ashamed and sensible undergraduate students at Berkeley should boo him when he enters the lecture hall again. ;-)

Anthony Watts 2012

Things are a bit better in the second story – because there seems to be a serious manuscript that actually brings something new (and something that isn't quite naive) behind the big claims. Yes, I am talking about Anthony Watts et al. 2012.

On Friday, Anthony Watts made a dramatic announcement that he was suspending all new posts on his famous blog for two days. On Sunday, everyone should have expected an unprecedented press release with global implications. Obviously, no one knew what it could have been – not even Steve McIntyre, who finally turned out to be a co-author of the sensational paper, knew what fascinating event was coming on Sunday night. ;-)

No one could have predicted what the event could have really been. My guess was wrong in details, too, although I was right that it would be related to the NOAA temperature data. Nevertheless, the press release is out. It describes a so-far unpublished manuscript of a paper called
An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends
Additional files are available, too. Watts, (Evan) Jones, McIntyre, Christy, and perhaps other co-authors have studied the adjustments applied to the U.S. weather stations – something that's been a pet topic of Anthony's for years (see surfacestations.org).

They used a new classification of the weather stations into well-sited ones and others (well, several groups, read the paper for details) – a new convention endorsed at least by the World Meteorological Organization. With this new classification, it seems that the homogenization of the temperatures had been done in the wrong way. Instead of correcting the trend in urban stations downwards, to match the well-sited rural (and other) stations without an urban heat island distortion, it seems that NOAA had added corrections to the good stations so that they matched the bad ones.

So far, I don't understand whether they claim that NOAA's methodologies had been invalid even with the old classification of the stations. There are many similar important technicalities that people will become familiar with in coming weeks unless the manuscript disappears from our radars more quickly than that.

At any rate, their conclusion is that with the new methodologies and correct calculations, the warming trend in the U.S. from the late 1970s through the end of the first decade of this century was exaggerated by almost a factor of two. When corrected, the warming trend in the U.S. was about +0.17 °C per decade, pretty close to the global average.

This number is often overlooked by those who celebrate Anthony's "bombshell" even though it is rather important; it shows that even if all the improvements in the new paper are substantiated, the paper doesn't really change anything about the existence of a warming trend in the last 30-35 years. Their calculated trend is compatible with many other trends etc.

However, what's more important is that this is just the U.S. whose area is 2% of the globe's surface. This is a tiny fraction. If you prove that one-half of the warming trend at 2% of the world's area was spurious, you will lower the estimated trend of the global mean temperature since the late 1970s by approximately 1%. For this reason, all claims that this has global implications look like insane hype to me. I even think that it's invalid to suggest that the new paper immediately invalidates or weakens (irrelevant) papers by Muller and others. In my opinion, they have virtually nothing to do with each other (and neither Muller nor Watts has any sensible evidence affecting the attribution issue). Only people (on either side) who view climatology as a sequence of salvos from two sides (and who don't really care about the content of the salvos) may make such irrational connections. The correction by Watts and friends, even if it is legitimate, and I am inclined to guess that it will turn out to be mostly right, changes the global trends for BEST and others by roughly 1%. The impact is totally negligible.
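The "approximately 1%" claim is simple area-weighted arithmetic and can be checked in a few lines. The numbers below are the illustrative values from the text, and I assume for simplicity that the U.S. trend is comparable to the global one:

```python
# Back-of-the-envelope check of the "approximately 1%" claim: if half of the
# warming trend over 2% of the Earth's surface turns out to be spurious, the
# area-weighted global mean trend drops only marginally. The trends are the
# illustrative values from the text, not precise published figures.
us_area_fraction = 0.02    # U.S. is roughly 2% of the globe's surface
spurious_fraction = 0.5    # half of the U.S. trend claimed to be spurious
global_trend = 0.17        # deg C per decade, rough global average since ~1979

# Assume (for simplicity) that the U.S. trend is comparable to the global one.
reduction = us_area_fraction * spurious_fraction * global_trend
relative = reduction / global_trend
print(f"global trend drops by {reduction:.4f} degC/decade, i.e. by {relative:.0%}")
# prints: global trend drops by 0.0017 degC/decade, i.e. by 1%
```

If the U.S. trend is taken to be twice the global average (the "exaggerated" value), the same arithmetic gives roughly 2% instead of 1%; either way, the effect on the global mean is of order one percent.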

And it's the global mean temperature that the global warming doctrine is all about. Even James Hansen and others were trying to talk about the global temperature. Many of us have criticized this obsession with the global averages – which no one experiences in his or her life and which are artificial quantities that unnaturally hide the huge regional "noise" normally known as the weather (which makes a one-degree change of the global average or local temperatures negligible). But it's true that the climate alarmism is all about the global averages, alarmists have almost always talked about the global averages (except for times of good enough heat waves in the U.S. when they love to suggest that the whole "problem" is about the U.S. temperatures), and the paper by Watts et al. – even if it is accurate – changes almost nothing about the global trend.

It seems like a good paper and Anthony Watts, who may be a bit new to this field of writing papers, may be excited in a similar way as people are "extra" excited during their first sex or anything else of this sort. ;-) But the hype that has accompanied the publication of this manuscript has been excessive from my viewpoint. The world's most influential climate blog may want to be more careful when it promotes a manuscript co-authored by the owner of the blog because such promotion is likely to make a disproportionately strong impact. This "amplification of the impact" caused by someone's control of the media outlets is something that many of us have criticized – and I would kindly criticize it on both sides. I would even say that Anthony's pre-Sunday hype was analogous to – if not stronger than – the hype produced by Richard Muller and similar folks, something we – including WUWT – have often criticized in the past.

A particular point of similarity is that the Watts 2012 paper is being celebrated before it's carefully enough verified by others and published anywhere. That's the same non-kosher attitude that Muller took in the past, too. By the way, the BEST papers failed in the peer-review process. Incidentally, a reviewer, Ross McKitrick, was rightfully upset about Muller's new media frenzy and hype, so he – much less legitimately – revealed his identity and published his referee report that should have stayed confidential. You're right that Muller's work with the media is outrageous but where is the rule that allows upset reviewers to use this excuse to publish their confidential reports, Ross? ;-) What's the connection here? In this violent ideological clash, non-kosher events occur on both sides.

So I do perceive that the hype is already being excessively produced by both sides. Of course, the alarmist side still produces a vastly greater amount of this hype in total and, more importantly, they are wrong about all these "problems". On the skeptical side, I view the desire to participate in similar media wars as more irrational because the fearmongers control most of the scientific journals as well as the "mainstream" media, so if Anthony Watts wants to compete in the number of papers, published papers, or the media hype surrounding them, he is bound to lose. That would be unfortunate because there are good reasons why the skeptics should win and we also have other, more sensible venues that have been helping us win so far.

Much more generally, I think that the idea – apparently believed or fictitiously believed by people from both camps – that very soon, we are expecting an event that will totally change people's knowledge of the climate system is just a totally silly idea. Nothing is really dynamically changing about the climate and the climate science. The weather is changing quickly but the climate – the statistical distribution of weather phenomena and temperatures studied at least over 30-year intervals – is changing much less quickly. The same is true about the climate science. It's making very little progress. This progress could perhaps be faster but it's not.

Don't expect another random paper claiming that there is a mistake in some partial result or methodology used in the past to change the basic perception of the climate and the climate change debate by either alarmists or skeptics. This won't happen and it is very correct that it won't happen because no sources of fast and urgent knowledge exist. We are very far from knowing everything about the climate – but we are very far from knowing nothing, too. What we know is already enough for everyone to decide about the answers to questions such as "Does science justify the regulation of CO2?"

Needless to say, all of us see data that are pretty much the same and if our estimates about some temperature trends differ by 50% or 92%, it's not the real source of the differences. The preferred responses and different answers we choose are primarily dictated by our values – whether we prefer prosperity, GDP growth, freedom, and capitalism or controlled decline of the civilization, elimination of GDP growth, and the transformation of individuals into rusty and centrally controlled wheels in a gigantic dysfunctional socialist dystopia.

Added: Anthony sent me some kind e-mails with helpful explanations, saying that he believes that the problem has to exist in the "best" network in the whole world, too. See e.g. the discussion by Roger Pielke Sr. He also mentions that many authors (including Watts' group and BEST), by oversight, have used the obsolete Leroy 1999 system instead of the newer one, Leroy 2010. It took a year for them to figure out the bug. It seems that Anthony did feel a eureka moment.

Sunday 29 July 2012

Permutations join twistor minirevolution

I have finally found the time to watch Nima Arkani-Hamed's excellent Strings 2012 lecture.
Scattering Amplitudes and the Positive Grassmannian (four formats, 40 minutes)

(Flash video, PDF only)
I recommend the Flash video; the PDF file without Nima's words and handwaving is vastly less comprehensible, it seems to me.



Let's hope that this toasted twister is an OK symbol of Nima's talk. It's similar to the chicken snack wrap at McDonald's.

You may be pretty sure that the talk gives us the hottest information about the state of the twistor minirevolution.

The readers may want to watch it as many aspects of the BCFW and similar formulae that looked very complicated get clarified. Suddenly they have a reason. Lots of arbitrariness disappears. Different, equivalent diagrams boil down to the same invariant structure that the physicists are starting to see in front of their eyes and touch with their hands.

So what are the points you shouldn't miss while watching the talk?




First of all, the papers in 2010 or so would include lots of diagrams with black and white vertices. Those diagrams may have many loops – but they are now interpreted in a new, very simple way: one diagram is a permutation.

Let us begin with a question we should have understood for some years: Why do the diagrams have black and white vertices? What does this racial segregation mean? Well, note that all these vertices are cubic. The black vertices correspond to \((++-)\) while the white vertices correspond to \((--+)\) where the signs denote the left-handed or right-handed helicity of the external gluons that we scatter. You may say that the white vertices and the black vertices are left-right mirrors to each other. One of them is the maximally helicity-violating process (MHV) and the other one is "one step even more than maximally" helicity-violating process.

Another important point is that all the "propagators" in these twistor diagrams are on-shell. Nevertheless, they still contain everything you need to emulate the usual off-shell Feynman diagrams. An added "bridge" and therefore a new loop shifts the spinors in the right way.

But this new development is a story about permutations that play a key role.

We should first understand: What are we permuting? Well, if we're scattering \(n\) gluons, we are just permuting them. So there are \(n!\) different permutations. A funny thing is that the gluons transform in the adjoint of \(SU(N)\) which means that they have another predetermined permutation – encoded in the trace of the color matrices. So we're adding another permutation \(p\) into the game.

Now, let's return to the old physics in 1+1 dimensions. If one lives in the Lineland, he may talk about the "ordering of the people" along the Lineland. If you permute two adjacent points on a line – and every permutation may be decomposed into a product of such transpositions – it's an actual objective operation; the two points must pass through one another. In a process where \(n\) particles scatter in a 1+1-dimensional world, you find the Yang-Baxter equations that must hold for the "matrices" remembering the internal operation on the exchanged particles. Those equations have important applications in braid groups, knot theory, and other things. They effectively tell you that the ordering of the transpositions shouldn't matter as long as the overall permutation is the same.
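The statement that the ordering of the transpositions doesn't matter (as long as the overall permutation is fixed) can be illustrated by the braid relation for adjacent transpositions; this is my own toy check, not anything from the talk:

```python
# Toy check of the braid / Yang-Baxter-like relation for adjacent
# transpositions: s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}, so differently
# ordered decompositions can realize the same overall permutation.
def transpose(i, n):
    """The permutation of n elements swapping positions i and i+1."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    """Composition "p after q" of two permutations given as tuples."""
    return tuple(p[q[k]] for k in range(len(p)))

n = 4
s = [transpose(i, n) for i in range(n - 1)]
lhs = compose(s[0], compose(s[1], s[0]))   # s_1 s_2 s_1
rhs = compose(s[1], compose(s[0], s[1]))   # s_2 s_1 s_2
print(lhs == rhs)   # prints: True
```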

However, these things look like special features of 1+1-dimensional physics. In 3+1 dimensions, for example, it seems impossible to "order points". However, in a specific sense, this research by Nima Arkani-Hamed and various collaborators – mathematicians and physicists – is an extrapolation of the "laws of the Lineland" to a 3+1-dimensional world.

How do we find out the permutation from the black-and-white Feynman-like diagram?

This is a somewhat key point of the talk so you shouldn't miss it. You have those \(n\) external gluons distributed along the circle and they're connected to something like a Feynman diagram with black and white cubic vertices inside. What is \(p(j)\), i.e. the gluon assigned to the \(j\)-th gluon by the permutation associated with the diagram?

The rule to find out \(p(j)\) is straightforward. Place Mickey Mouse on the \(j\)-th gluon and send him inside, i.e. through the Feynman diagram, with simple instructions. Whenever Mickey Mouse hits a black vertex, he turns left; when he hits a white vertex, he turns right. (I hope that it doesn't matter too much if I have swapped the rules but you surely need to use their conventions if you want to check the detailed formulae.) Finally, Mickey Mouse gets out of the maze: he reaches another external gluon.



It's not hard to see that the resulting rules \(p(j)\) for each \(j\) define a permutation. Each black-or-white cubic vertex becomes a clockwise or counter-clockwise roundabout. Mickey Mouse always gets out on the next exit so whenever he arrives at the roundabout from different roads, he leaves via different roads, too. That probably implies that the map \(j\mapsto p(j)\) is one-to-one.

It must also be an "onto" map as long as we can prove that Mickey Mouse always gets out of the maze "somewhere", to another external gluon. But that's necessarily the case, too. He can't get caught in loops. If he could, there would be a first edge \(E\) (piece of road) that he visits for the second time. But the roundabout before this edge is a one-to-one map so it's not possible for two different "previous edges" to be sent to \(E\).
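The left-at-black, right-at-white rule is easy to implement. Below is a toy sketch with a hypothetical graph of my own (one black and one white trivalent vertex, four external gluons), where each internal vertex lists its neighbors in counterclockwise order; the left/right convention is as arbitrary as in the text and swapping it just produces the inverse permutation.

```python
# Trace the "Mickey Mouse" trips through a black-and-white on-shell diagram.
# Hypothetical toy graph: external gluons are integers, internal vertices
# store their color and their neighbors in counterclockwise order.
graph = {
    "B": ("black", [1, 2, "W"]),
    "W": ("white", ["B", 3, 4]),
}
externals = [1, 2, 3, 4]

def trip(start):
    """From external leg `start`, turn left at black vertices and right at
    white ones until another external leg is reached."""
    prev = start
    current = "B" if start in graph["B"][1] else "W"
    while current in graph:
        color, nbrs = graph[current]
        i = nbrs.index(prev)
        step = 1 if color == "black" else -1   # left at black, right at white
        prev, current = current, nbrs[(i + step) % len(nbrs)]
    return current

p = {j: trip(j) for j in externals}
print(p)                                # prints: {1: 2, 2: 4, 3: 1, 4: 3}
print(sorted(p.values()) == externals)  # the trips define a permutation: True
```

The second printed line is the whole point of the argument above: every trip ends at some external gluon and no two trips end at the same one.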

It makes sense. One may associate a permutation with those Feynman-like black-and-white cubic graphs. Now, the next big statement is that the amplitude only depends on the permutation. There are actually many equivalent ways to write down an amplitude but as long as they are associated with the same permutation, they are evaluated to the same result.

Well, there seem to be some subtleties that Nima described and I am not sure whether I fully understood them. They prefer to write the permutation of \(n\) external gluons in such a way that \(p(j)\) is always greater than \(j\). So if you have a definition where \(p(j)\) is smaller than \(j\), you redefine \[
p(j)\to p(j)+n.
\] Fine. However, they also want to generalize the permutations into decorated ones (technical jargon: affine permutations) in which some \(p(j)\lt j\) exceptions are allowed. I didn't quite understand whether the decoration was needed, why it was needed, and what it affects. I may need the paper but I will try to watch the talk again. Moreover, I had thought that the affine permutation is just an ordinary permutation of a very special type, \(p(j)=aj+b\) modulo \(n\) in \(\ZZ_n\), not a general permutation with some extra structure.
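The \(p(j)\to p(j)+n\) redefinition itself is trivial bookkeeping; the sketch below is only my guess at what it looks like in practice, and the actual decorated/affine permutations in the papers presumably carry more structure than this:

```python
# Lift an ordinary permutation (0-indexed tuple) so that every image is >=
# its argument, by adding n whenever p(j) < j. This is only the naive
# bookkeeping suggested above, not necessarily the full "decoration".
def decorate(p):
    n = len(p)
    return tuple(pj if pj >= j else pj + n for j, pj in enumerate(p))

print(decorate((2, 0, 3, 1)))   # prints: (2, 4, 3, 5)
```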

Nima discussed the special subset of "bipartite graphs". In those diagrams, only edges connecting one black vertex and one white vertex are allowed. Especially for those graphs, one may discuss various nice operations such as the "square move" – changing the colors around a "box" inside the diagram from black-white-black-white to white-black-white-black.

An annoying technical feature of these Munich videos is that one doesn't see the laser pointer on the PDF slides so we don't really know what object on the slide the speaker is talking about in a given sentence. It surely reduces the degree of understanding for those of us who watch the talks on the web.

Grassmannian

Nima argues that the "change of consecutive vectors" is an important procedure and he slowly gets to the Grassmannian – the set of lower-dimensional (hyper)planes within a higher-dimensional (hyper)space. In particular, this whole story is about the positive Grassmannians. You may see that Alexander Postnikov, an amazing mathematician and a co-author in this permutation twistor business, is routinely talking about and teaching the positive Grassmannians.

To see why the positive Grassmannians are yet another equivalent way to describe the graphs, one must realize that there exists an explicit integral of delta-functions that guarantees that three spinors \(\lambda\) are proportional to each other – which follows from the momentum conservation for 3 vectors (well, either this follows, or the same statement for the mirror \(\tilde\lambda\) spinors follows from that).

This explicit integral enforcing the proportionality of the three spinors \(\lambda\) allows one to glue the things together and see the emergence of the Grassmannian. In the mathematicians' jargon, this gluing is known as amalgamation. While you could have a priori thought that the "life" occurs at the vertices, these arguments lead you to realize that "life" takes place at the faces of the diagram – or at least the edges (modulo something).

Nima discussed how to add orientation to all the edges. White and black vertices must have 2-in-and-1-out or 2-out-and-1-in arrows, respectively. (Sorry if I reverted the rule again.) He finally offers some map from a diagram to a Grassmannian.

There are various simplifications. If you modify a diagram and replace it by equivalent ones, you may create a "bubble" – a propagator with a 1-loop "self-energy" with one black vertex and one white vertex in the loop. It's possible to get rid of such components of the diagrams.

Again, a key new insight is that a permutation of the external gluons is an invariant way to describe a particular diagram.

In this new perspective, the BCFW recursion rules may be viewed as a particular way to construct the Feynman-like on-shell diagram with black and white vertices out of a given permutation. There exist many other ways, too. I suppose that the total amplitude is given as a sum over all permutations. At this point, I have an obvious temptation to represent the whole structure in an analogous way as matrix string theory without a time dimension (action-based instead of Hamiltonian-based) but I will have to think about all these things in the wake of all the information that I learned from the talk and didn't know from my previous exposure to these cute ideas. ;-)

Now, the dual conformal invariance is getting manifest in this new permutation-or-positive-Grassmannian language. While the dual conformal invariance would rearrange the black-and-white diagrams in very complicated ways, it modifies the permutation in a very well-defined and simple way. Some jumps over two are involved but I haven't caught the exact answer so I will refrain from confusing you with possibly invalid caricatures of the right rule.

This positive Grassmannian construction allows one to write all the loop amplitudes in the form\[
\text{d log} (\dots)\,\text{d log} (\dots)\,\text{d log} (\dots)\,\text{d log} (\dots)\,
\] and this form may be obtained by some clever change of variables whose existence couldn't have been clear from the beginning and that was discovered by accident (after some months of confusing interactions between physicists and mathematicians, they realized that they had been talking about the same thing – but the physicists had paradoxically had the more mathematically elegant, coordinate-independent form of the object). Nevertheless, this change of variables remarkably clarifies many hidden patterns in the amplitudes.

Nima said that Feynman's evaluation of the loop diagrams may be thought of as an evaluation using lots of auxiliary spaces to perform the integrals. In some sense, their twistor construction uses no auxiliary spaces at all: it reduces all the integrals to some low-dimensional loci. At the beginning, you could think that the d-log-like structure may simplify the integrals into "nothing" or "points". But when you do things properly, you will always find out that the integrals simplify but not to constants. They reduce to low-dimensional integrals, such as those evaluating to polylogs.

For example, the d-log-to-the-fourth structure of the 4-point amplitude above is related to the geometric fact that there are 2 points in spacetime that are null-separated from the 4 points associated with the gluons we scatter.

At the end of the talk, Nima had to speed up a bit so many of his statements looked like intriguing demos. For example, we learned that the Yang-Baxter equations as well as the ABJM stuff (another, membrane minirevolution!) became special cases of this twistor business.

The folks were told that loops are needed to reconstruct the world sheet theory out of the planar diagrams – and yes, as Nima confirmed in an answer to a question, everything he discussed was about the planar diagrams only.

The locality and unitarity are not the stars of the show. It would still be good to see why they're right – well, a proof of the equivalence to the normal ways to calculate the diagrams is probably enough for that. (Of course, by seeing the right properties that determine the amplitudes uniquely, they know that the final S-matrix is the same. It's still plausible that the equivalence may be shown at a "more localized" level, I think.)

Nima thinks that those structures should arise from the \((2,0)\) theory in six dimensions; it seems likely that this comment largely boils down to wishful thinking at this point.

Questions and answers

The first physicist who asked – a Russian one I guess – asked the same question he asked on Friday so he got the same answer and Nima made it clear that this is exactly what happened. He asked about the complexification. Nima said that all these things have to be considered in the complexified space but if you want to evaluate the amplitudes for the LHC or anything tangible of this sort, you may always convolute the formulae with real wave packets. That's it: that's Nima's efficient and fast way to deal with questions that are not exactly new and that are not exactly good, either. ;-)

The second question, a better one, was about the restrictions on gauge groups, the number of colors, planarity, and similar things. So Nima said that only planar diagrams of the \(SU(N)\) gauge theory were considered at this point and repeated that this methodology is about localization of the integrals to low-dimensional loci in the spacetime (or related spaces).

He hopes to see a contact with the world sheet description. It's all very exciting but I may imagine that it will be ultimately seen to be a "purely" mathematical procedure dominated by the change of the variables on the world sheet (of the \(AdS_5\) dual string theory, used in the planar limit, which becomes "mostly topological" for these purposes) and a reorganization of the integrals and path integrals. Even if that's the case, it may still be a very important reorganization. They have already learned a great deal about interesting and "unified" geometric structures underlying amplitudes that are normally represented as sums of thousands of distinct Feynman diagrams.

Saturday 28 July 2012

Strings 2012, a few words



On Saturday, I added a link to the 51-minute public lecture by Edward Witten above (Flash, PDF only): click the screenshot. In the right lower corner, there is a "full screen" button. To the left of it, there is another button to "embiggenify / mediocrize / elsmolinify Edward Witten". The dimensions of the slides change inversely to those of Witten.




After a nice and somewhat standard and classic introduction to particle physics and rudimentary string theory, questions start at 43:30. All four questions were asked by trolls with no knowledge of the field and no interest in the field; sorry if I overlooked someone but it's extremely unlikely. ;-)

The first troll asks whether Edward Witten is aware of his responsibility towards science or something like that – because Witten isn't like Newton who calculated numbers (WTF?). I must admit that I was laughing when I saw Witten – a denier of the problem of the existence of Peter Woit and Shmoit-brainwashed jerks and imbeciles – being grilled by idiots in this way. The arrogance of these mental cripples is just hilarious. The organizers were completely unable to stop this particular asshole's monologue for 3 minutes. ;-) (It could have been a high-profile German crank, Herr Alexander Unzicker.) When Witten finally wins the battle to be able to speak again, he recalls some historical examples of theories that have faced skepticism for a long time but they ultimately prevailed.

A question at 47:30 was dedicated to the calculation of the number of generations in string theory from the Euler characteristic. Witten offers a short but completely non-technical answer, mentioning that certain results depend on certain assumptions.

Someone asks at 49:00 whether loop quantum gravity will be seen as a part of string theory. Witten says that first of all, good ideas such as noncommutative geometry and twistors are becoming components of string theory all the time. In the case of loop quantum gravity, however, Witten expects it to stay in the dumping ground of the history of science. A very stupid question about "\(10^{500}\) parameters" (WTF?) follows. Witten doesn't have the time or desire to correct the confusion about parameters etc. so he pretends that by parameters, the visitor meant the number of solutions and he returns to some general facts.

In general, the quality of the questions was lousy and it showed that Germans have been brainwashed by pure shit as much as some other groups. I feel relaxed not to be an official part of the institutionalized particle physics. Facing such overwhelming piles of stupid yet arrogant and stinky assholes, I would surely react in a more interactive and explanatory way than Witten and I would probably be lynched for that.

Next year, if I were in charge of those events, I would introduce pre-filtering of the questions after the public talk so that the Q&A period wouldn't become an arena for exhibition of assholes again.

The text below was posted on Friday.



Strings 2012, the annual conference which has been taking place at the LM University in Munich, Bavaria, Germany, is ending today.

To get access to all the talks, see Speakers, titles, talks. All of the videos will ultimately be posted, too. I have substantial doubts whether there would be a sufficient blogospherical market for my hypothetical review of all the 57 talks. So let me reduce this blog entry to a few short comments on a few talks.

The omitted ones may be much more interesting than the picked ones, of course. And I could have failed to mention the most important things in the selected talks, too.

John Schwarz gave a rather enthusiastic introductory talk about the purpose and state of the field etc. Lara Anderson showed that the heterotic phenomenology is thriving and the young folks in particular know much more about the higher-dimensional geometries than what was known years ago. Hans Peter Nilles gave a more conceptual or more speculative talk on what he calls heterotic supersymmetry – SUSY with some extra phenomenological features.

Sandra Kortner introduced the audience to the results from the LHC from a pure experimenter's perspective. Ignatios Antoniadis reviewed the LHC events from a viewpoint of a string phenomenologist, focusing on some possible ambitious signatures that I consider very unlikely.

Savas Dimopoulos interpreted the task of the LHC as that of a judge who decides about the physicists' next journey at a grand crossing, multiverse vs naturalness. He believes that this big question is gonna be decided by the end of the 2012 run. While they surely sound exciting, I find such claims excessive. If the LHC finds no new physics by the end of 2012, I will be very far from adopting the multiverse as an established thing. Naturalness is a fuzzy concept – how natural do things have to be? – so such clean conclusions about it aren't possible.

Angel Uranga gave another general enough string phenomenology talk, promoting their new book along the way.

Several talks discussed the correlators in the \(\NNN=4\) gauge theory. And many talks analyzed the applications of string theory and its AdS vacua to things like superconductors, liquids, and other squalid state materials. I decided not to enumerate those great speakers; they included Gary Horowitz and Andreas Karch.

There were talks about twistors. Nima Arkani-Hamed finally presented his permutation-based formalism for the twistor amplitudes. I haven't understood it yet; it seems nontrivial and heavily dependent on all the previous technical developments. What makes my reading harder are the suggestions that this is a totally new picture completely deviating from all the QFT-based and string-based frameworks. I still don't see how this could be true. It's great stuff but I feel that many technicalities are overinterpreted as conceptual breakthroughs and the new formalism's inability to explain things like locality and unitarity is presented as a virtue. I don't understand these sentiments, Nima. Locality, unitarity, Lorentz symmetry, and other things may be obscured by a formalism or two but seeing how they arise is always an advantage – knowledge – not a disadvantage.

Edward Witten has resolved some small enough old confusing subtlety in superstring perturbation theory, including an explanation why there aren't perturbative corrections to the gravitino mass. His presentation resembles the way he writes papers: the final version of the paper with all the equations grows monotonically as you press "page down" on his PDF file. ;-)

Cumrun Vafa reviewed topological string theory. Andy Strominger gave an update on dS/CFT. I thought he had already abandoned it. It may contain some completely new ideas such as Wilson surfaces as observables in the de Sitter space. Others will need some time to evaluate it. But if it is a completely different thing than the dS/CFT of a decade ago, he should also change the brand because I am sure that there are many who share my impression that the dS/CFT brand has been discredited. ;-)

Juan Maldacena, Xi Yin, and Daniel Harlow talked about the higher-spin "Vasiliev" gravities. Add some talks about three-dimensional gravity, topological string theory, fuzzballs, quivers, superconformal indices, and other things.

Hermann Nicolai reviewed pretty much all the "non-stringy" approaches to quantum gravity and concluded that they have an infinite amount of ambiguities, a loss of predictivity, and they are much more detached from experimental tests than string theory. For various major reasons that are often (deliberately?) overlooked, this includes loop quantum gravity, spin foams, asymptotic safety, Regge calculus, causal dynamical triangulation, and others. A very sensible talk.

I think it's refreshing that the organizers once decided to include someone (well, one of them) who knows – much like all real quantum gravity experts who have looked into those things – that none of those alternatives really works in the same sense as the sense in which string theory works. In some of the previous years, it would have been politically correct to invite some people from those failed directions who would say things that no other participant considered correct. This waste of time was abandoned, at least this time.

David Gross wraps up with outlook and visions today. Due to the constraints of causality, the talk hasn't been posted yet. If you were intrigued by something, don't hesitate to share your feelings and knowledge.

Olympic Czechs w/ wellies: climate in London hasn't changed

According to The New York Times [+pic], the two most memorable aspects of the 2012 Olympic opening ceremony were Ralph Lauren's and the Czech athletes' outfits. (A live NYT blogger has two different winners, the wellies and Danny Boyle's speech.)

Irish Independent, LBSports, UKPA and tons of #wellies tweeters [realtime] have also noticed the wellies and umbrellas which they opened simultaneously. As you can see, the Czech designers have carefully read the warnings in the Daily Mail.



Click to zoom in. See a gallery I with 5 pictures or gallery II with 31 pictures. The march was led by badminton player and cancer survivor Mr Petr Koukal. If the German team had the same shoes, it wouldn't be original because the same shoes have already been preferred by Irma Grese. :-)




While many other athletes across the politically correct world believe that the climate is changing and England is becoming a paradise for orange growers, the Czech athletes – supervised by Czech President Klaus, a climate realist and a passionate athlete – think that the climate hasn't changed and London's weather is as rainy, cloudy, and sucking as it was several centuries ago.



Needless to say, with a shower during the ceremony, the Czech outfit turned out to be a good idea. Don't believe fads and don't be afraid to be the only individual or the only group who has preserved some common sense! And don't forget that the English weather sucks.

Meanwhile, the wellies remain the only Czech success (a gold medal, our Slovak brothers confirm) after much of the first day. Shooter Ms Emmons and tennis player Mr Berdych are among those promising representatives who didn't manage to grab medals.

Friday 27 July 2012

Watts up with that Anthony's major announcement?

About his project unrelated to health, FOIA, politics, society
Update Saturday: Wow, the last paragraph in Wikipedia contains what I consider the most likely answer to the mystery:

According to the Heartland Institute 2012 fundraising plan document, they agreed to help Watts raise $88,000 to set up a website, "devoted to accessing the new temperature data from NOAA's web site and converting them into easy-to-understand graphs that can be easily found and understood by weathermen and the general interested public."

LM: BTW the Guardian plans to publish significant news at the same moment: it may be the same thing.

Update Sunday: It was related to NOAA but it turned out to be a paper claiming that NOAA incorrectly doubled the late 20th century warming trend in the U.S. See James Delingpole's summary that I found more comprehensible than Anthony Watts' press release.

Well, it's fine but yes, I am disappointed. I wouldn't agree that this has a global importance; the U.S. is 2% of the globe, after all. Also, it's just another paper claiming something about those matters. Let me admit that its importance may be a bit inflated because Anthony is still excited about publishing a paper somewhere.
The world's most visited climate blog, Watts Up With That, just interrupted the publishing of new posts for two days. An inner circle of the Earth's 11 climate skeptics including your humble correspondent was informed about the interruption by e-mail at the same moment.



We are told to expect a major announcement of an unprecedented and controversial event on Sunday around 9 pm (Pilsner Summer Time: I suspect Anthony actually meant noon Pacific Daylight Time and not Pacific Standard Time), one that led Anthony to cancel vacation plans and that will arguably attract the attention of the whole planet, overshadow the Olympic games, reconstruct or destroy the United Nations, and perhaps even delay the war in Iran.

And believe me, Anthony generally doesn't like much hype and big words (similarly to me) so this is either his unprecedented hype that may turn out to be a dud or a really big story.

Anthony also told us he wouldn't respond to our e-mails before Sunday so I honestly don't know what it is and I can't find out (I guess). Moreover, the symmetric position of my e-mail inside the list suggests that other people in the list such as Steve McIntyre don't know what's going on, either. If you try to Google search for anthony watts major announcement, you will quickly get stuck in a TRF infinite loop, too. ;-) Three more speculative threads are hosted by Bishop Hill, Steve McIntyre, and Suyts.




So we may only speculate what the announcement is going to be. Is it a new energy-efficient server, the kind of stories that we could see on Anthony's Facebook wall most recently?

Or Climategate III (with a bombshell that doesn't make it just another fading sequel which contradicts Anthony's word "unprecedented")? A hockey team member confessed he was FOIA, the Climategate whistleblower?

An error in an irrelevant table written down by an assistant to James Hansen or his grandkids has been found? Anthony Watts is an alarmist who has pretended to be a skeptic and he will boast he has duped the real skeptics? ;-) Or is he selling the WUWT domain to George Soros and Real Climate?

A new pound of black asphalt was added to a U.S. weather station, making it darker and warmer? Has Anthony measured the climate sensitivity to be 50 degrees Celsius per doubling? Or does he have a proof that CO2 is cooling the planet?

Has Mann filed a lawsuit against WUWT for its being a climate heretic website that realizes that Mann's work is a pile of garbage? Is Watts being blackmailed by an armed thug hired by Al Gore? Has Rajendra Pachauri together with Joe Romm hacked WUWT? Maybe the article about the interruption was written by them and on Sunday, they will post "climate change is real, that's the big story". :-) A mass resignation in scientific societies? Has CERN proved that the climate is changing due to the Higgs boson?

The evil Big Oil corporations have finally sent billions to Anthony's bank account so that he can distribute it to other evil skeptical bloggers who may finally begin to attack the climate scientists?

Is the climate Armageddon coming this Sunday? Has a group of hawkish climate skeptics pretending to be doves – led by Anthony – kidnapped the U.S. Olympic team and demand the cancellation of the IPCC etc.? Or, as WUWT reader Matt suggested, is Watts running for the U.S. presidency as an independent?

You may offer your own explanations. I have no idea about the character, sign, or severity of the message. We will see on Sunday.

Off-topic: Higgs

Things were much clearer with Matt Strassler's quiz asking his readers when the first 125 GeV Higgs boson was produced by the humans without their realizing they had done so.

All thermonuclear bombs and colliders before the 1980s had vastly insufficient energies (below 1 GeV in the former case) and the SppS collider at CERN in the early 1980s, which discovered the W-bosons and Z-bosons, still had an insufficient luminosity (the chance of producing a Higgs there was of order 1%). So it "probably" had to be the Tevatron in 1988 or 1989. Sensible. No one noticed the particle in the 1980s but the production of the Higgs boson still managed to ignite the fall of communism: I must answer this thing when I am asked about the practical applications of the Higgs boson again. ;-)

Incidentally, by accident, the Higgs boson could have also been produced much earlier than that.

And I would also like to stress that the question whether a Higgs boson existed during a particle collision doesn't actually have a sharp answer. Only the final states may be predicted and all the intermediate histories contribute to the probability amplitude and interfere with each other.

So both Higgs-ful histories as well as Higgs-less histories contribute to the probability of a particular final state and we may only see the particular random events "encouraged" both by Higgs-ful and Higgs-less histories. Still, this problem is a bit academic because the actual events had the energy-momentum of the final product so accurately aligned with the Higgs boson mass (the width and therefore uncertainty of the Higgs mass is well below 1 GeV) that you may be "de facto sure" that the collision may be blamed upon the Higgs boson if the reconstructed mass agreed within the width.

Thursday 26 July 2012

Greek triple jumper and political correctness

Voula Papachristou, the hottest among the top Greek athletes, defended her 2009 European gold a year ago (July 2011) in Ostrava, Czechia.



She also has a Twitter account where she showed that she isn't just a pile of protein; she is apparently rather creative and intelligent.




Mosquitoes managed to infect several Africans in Athens by the West Nile virus which she commented on in this way:
With so many Africans in Greece, the West Nile mosquitoes will be getting home food!!!
LOL. Needless to say, this refers to a shared birthplace, not to races, and even if it were referring to races, there is nothing negative about the tweet. That's really why I've seen black tweeters who agreed it wasn't racist.

Assuming she invented it, I think this is a very clever joke, putting her in the top 10% of the people when it comes to the sense of humor. The real reason why it's funny is that it's true. We usually don't track viruses and people at the same moment. She did; good for her. Well, not so good for her because she was instantly removed from the Greek Olympic team. The explanations looked like this:
No matter how old you are, when you offend the Olympic values, you can’t be a member of the Olympic team.
Oh, really? Needless to say, this is a shameless lie. There are many Olympic values but the political correctness is surely not one of them. The true violators are those who would love to hijack the Olympic movement and transform it into an arm supporting their favorite ideology such as the political correctness. Adolf Hitler did nothing else in 1936 and if you wonder why I compare them to Hitler, it's because they are structurally doing the same thing. What about the PC promoted by the ancient Olympics? Even women were prohibited from participating in the Olympics. What about black slaves?
Sport was in Greece above all a domain of the free, in which slaves could barely participate.

There is no record of a single slave victor of Greek games. Participating in the crown-games was strictly forbidden for them. At some local games, however, they were permitted to compete, although their participation was not encouraged. An inscription from Misthia, a small town in Asia Minor, states that slave victors had to give one fourth of their prize money to the other participants. In this way, it could be avoided that masters trained their slaves as athletes to profit from the prize money.
So fix your history, comrades. It's you who is bending the Olympic values.

There could have been a more general underlying reason why she was ousted. She supports the nationalist Golden Dawn party. I wouldn't endorse those folks; they are really not my cup of tea. But the feeling that the supporters of other average Greek parties – such as SYRIZA or the Communist Parties of various other types or even PASOK – have the moral credentials to place themselves above the voters of the Golden Dawn such as Voula is arrogant, anti-democratic, and preposterous.

At any rate, I am amazed that the officials in the failed state where nothing works still find enough time and energy to impose the political correctness and harass the citizens who aren't innate politically correct asslickers. The apparatchiks responsible for this decision should be ashamed and they should be kicked into their butts, too.

BTW I am sure that back when I was on the Harvard faculty, I would have run into trouble even for this very blog entry. The political correctness isn't just an inseparable feature of officials in far-left failed states such as Greece; it has penetrated into almost every corner of the Western Academia, too.



After the 2011 victory in Czechia...

Here in Czechia, most people would find such violations of the basic freedom of speech to be unacceptable. At least in generic enough sectors of the society, there's no one who matters and who tries to cripple the basic human rights in this way. That's also why jokes that are much more "racist" than Voula's tweet may be broadcast in the primetime for millions of viewers.

Take Tele Tele. A "commercial" confusing Vietnamese sandals with... Vietnamese women. Footage of a (fake) African soccer player who was kicking into coconuts before he moved into a Czech soccer team. Detergent named Aryan modifying the usual advertisement for detergent Ariel: why isn't my baby white? And so on. Sometimes I found the number of such references to be too high for my taste. While I was a great fan of the program, sometimes it looked shallow to me. But it's still important that people aren't harassed for innocent jokes and that one group of political parties doesn't change the official institutions into an octopus used to fight their political competitors. That's a major thing that I and others were fighting against during the times of communism and I won't sacrifice an inch here.

Wednesday 25 July 2012

Euler characteristic

Topology is an important branch of mathematics that studies all the "qualitative" or "discrete" properties of continuous objects such as manifolds, i.e. all the properties that aren't changed by any continuous transformations except for the singular (infinitely extreme) ones.



In this sense, topology is a vital arbiter in the "discrete vs continuous" wars. The very existence of topology as a discipline shows that "discrete properties" always exist even if you only work with continuous objects. On the other hand, topology always assumes that these features are "derived" – they're some of the properties of objects that are deeper and that may have many other, continuous properties, too. The topological, discrete properties of these objects are just projections or caricatures of the "whole truth".

The sphere – the surface of a ball – can't be continuously deformed to a torus – the idealized two-dimensional inner tube inside a tire. The torus has a hole in the middle. So they're topologically distinct two-dimensional manifolds. We may prove that they're topologically different if we find a "topological invariant" – a number or a more complicated quantity that doesn't change by any continuous deformations – that has different values for both manifolds. Of course, the "number of holes" (known as genus) is a way to distinguish a sphere from a torus.




If a topological invariant coincides for two manifolds, they may still be topologically different because there may exist other kinds of topological invariants that differ. (And if we're unlucky, their difference may even boil down to invariants that are unknown right now; in that case, it may be hard to distinguish them.) So "the number of holes" is just one possible invariant. But we will talk about "the number of holes" in this article and generalize it to a more natural quantity, the Euler characteristic (or "Euler number" if you're among topologists or topologians – let's hope that they differ less than physicists differ from physicians: otherwise "Euler number" is being used for other things which still differ from the Euler constant and many other things named after this emperor of mathematicians).

I instinctively use the term "Euler character" but I have carefully corrected the term to "Euler characteristic" throughout this text, a term which is probably more accurate. ;-)



A torus is topologically equivalent to a (blue) sphere with a (yellow) handle. Just cut two small disks into the sphere and connect them with a tube. The resulting surface is topologically equivalent (diffeomorphic) to a torus, as you may see by gradually deflating the (blue) balloon. Of course, you may also create more complicated shapes by adding \(h\) handles instead of one handle. Just to be sure, only the surfaces are counted, not the "bulk". The surface with \(h\) handles is known as the genus \(h\) Riemann surface.



A genus-2 Riemann surface

Instead of the number of holes \(h\), we will describe the same information in terms of the Euler characteristic \(\chi\) – the Greek letter is pronounced by English speakers as "chi" i.e. "kchaj" in the Czech transliteration even though it should be pronounced as the Czechs do, namely as "khee" i.e. "chí":\[

\chi=2-2h.

\] If you determine \(h\), you know \(\chi\), and vice versa. You might think that \(\chi\) is more artificial but we will see many formulae for \(\chi\) that show that it is actually more natural a label distinguishing manifolds than \(h\) itself. The first such formula is one for polyhedra:\[

\chi = V - E + F.

\] We just sum (or subtract) the number of vertices \(V\), the number of edges \(E\), and the number of faces \(F\) with alternating signs. If we had a manifold whose dimension would exceed two (recall that we're talking about the surface of the polyhedron only), the formula would continue to hold:\[

\chi = \sum_{n=0}^d (-1)^n E_n

\] where \(E_n\) generalizes the number of \(n\)-dimensional "edges" which may be vertices, edges, faces, hyperfaces, or user-friendly interfaces as the dimensions increase from zero. ;-)
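If you'd rather let a computer evaluate the alternating sum, here is a minimal Python sketch (my illustration, nothing from a standard package); it also checks the formula on the boundary of a 4-simplex, which is a triangulation of the three-sphere \(S^3\):

```python
from math import comb

def euler_characteristic(cell_counts):
    """Alternating sum chi = sum_n (-1)^n E_n over the numbers of n-cells."""
    return sum((-1) ** n * e_n for n, e_n in enumerate(cell_counts))

# The boundary of a 4-simplex triangulates S^3. Its number of
# n-dimensional cells is C(5, n+1): 5 vertices, 10 edges,
# 10 triangles, 5 tetrahedra.
s3_cells = [comb(5, n + 1) for n in range(4)]
print(s3_cells)                        # [5, 10, 10, 5]
print(euler_characteristic(s3_cells))  # 0
```

The result is zero, which is no accident: the Euler characteristic of every closed odd-dimensional manifold vanishes.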

I forgot to tell you that the formula \(\chi=2-2h\) is equal to \(2\) for a sphere, \(h=0\). So if our new vertices-edges-faces formula is any good, you should get \(2\) for any polyhedron that has no hole, whether it is regular or not. In particular, Platonic polyhedra have no holes so they should give you \(\chi=2\). Let's check it.

Tetrahedron:
\(\chi = 4 - 6 + 4 = 2\)
Hexahedron (cube):
\(\chi = 8 -12 + 6 = 2\)
Octahedron:
\(\chi = 6 -12 + 8 = 2\)
Dodecahedron:
\(\chi = 20 - 30 + 12 = 2\)
Icosahedron:
\(\chi = 12 - 30 + 20 = 2\)

If the names of the objects sound Greek to you, then it is correct because the names come from Greek. More importantly, the result is equal to two in all cases.
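Here is the same check in a few lines of Python (a trivial sketch of mine, obviously not needed for the argument):

```python
# (V, E, F) for the five Platonic solids
platonic = {
    "tetrahedron":  (4, 6, 4),
    "hexahedron":   (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in platonic.items():
    chi = v - e + f
    assert chi == 2, name
    print(f"{name}: chi = {v} - {e} + {f} = {chi}")
```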

In case you've never noticed, the Platonic polyhedra come in dual pairs: the dual partner has the same number of edges but the numbers of vertices and faces are interchanged. The icosahedron is dual to the dodecahedron; the cube is dual to the octahedron; the tetrahedron is dual to itself. The duality goes beyond this numerology: you may obtain the dual polyhedron by replacing the faces of the original one with vertices and connecting them. The new edges will then just intersect the old ones so their numbers will match.



Off-topic but somewhat cool. Pilsner Catholic bishop Mr František Radkovský (*1939) recorded this song (the narrator is himself) in order to collect some money for the bells at the Pilsner Tower, the St Bartholomew Cathedral. Mr Radkovský is a graduate of my Alma Mater, the Faculty of Mathematics and Physics of the Charles University in Prague, specialization mathematical statistics.

It is not a coincidence that the alternating sum yielded \(2\) in all cases. You may rather easily prove that the alternating sum is invariant under all combinatorial changes of the polyhedron that emulate the continuous deformations. For example, \(\chi\) doesn't change if you add a new vertex inside a face and connect it with all the \(k\) vertices of the original face, a polygon or \(k\)-gon, by \(k\) new edges. By this procedure, you increase the number of faces by \(k-1\), from \(1\) to \(k\), the number of vertices by one, so the positive terms increase by \(k\), but you also added \(k\) edges which are subtracted so the alternating sum is unchanged. In the same way, you may divide an edge, or you may do these procedures backwards. It just works.
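A few lines of Python (my sketch; the function names are mine) can emulate this combinatorial move and confirm that the alternating sum survives it:

```python
def subdivide_face(v, e, f, k):
    """Add a vertex inside a k-gon face and connect it to all k corners:
    V -> V+1, E -> E+k, and the face splits into k pieces (F -> F+k-1)."""
    return v + 1, e + k, f + (k - 1)

def chi(v, e, f):
    """Euler characteristic of a polyhedron from its cell counts."""
    return v - e + f

# Subdividing a square face of a cube leaves chi = 2 unchanged.
cube = (8, 12, 6)
print(chi(*cube))                      # 2
print(chi(*subdivide_face(*cube, 4)))  # still 2
```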

You are also invited to check that \(\chi=0\) for a cubist, Picasso version of a torus.



Yup, because of the white background of these pictures, the mobile template prettifies this article a bit.

If we divide the faces into smaller ones as indicated on the picture (and we should, because the alternating formula shouldn't include any faces with holes: the subdivision of the surface should be fine enough to obey this condition), we find out that this "cuboid torus" has \(32\) faces (8 upper, 8 lower, 4 inner, 12 outer), \(32\) vertices (an upper and a lower grid with \(4\times 4\) vertices each), and \(64\) edges (the 8 little cubes have \(8\times 12 = 96\) edges, but 32 of them have been double-counted on the 8 interfaces between the little cubes, 4 edges per interface). In total,\[

\chi = 32+32-64 = 0.

\] That agrees with \(\chi=2-2h\) for \(h=1\) hole. If you think about it, it must always agree, for arbitrary polyhedra of arbitrary topology, because we have proved that the alternating sum formula is topologically invariant and correctly decreases by two if we add a handle.
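If you don't trust my counting, a few lines of Python (my own sketch) reproduce the bookkeeping:

```python
# Counts from the text for the "cuboid torus" built out of 8 little cubes
faces = 8 + 8 + 4 + 12    # upper, lower, inner, outer faces -> 32
vertices = 2 * 4 * 4      # an upper and a lower 4x4 grid -> 32
edges = 8 * 12 - 8 * 4    # 96 cube edges minus 32 double-counted ones -> 64

chi = vertices - edges + faces
print(chi)        # 0
print(2 - 2 * 1)  # chi = 2 - 2h with h = 1 hole also gives 0
```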

Adding boundaries and crosscaps

A sphere with \(h\) handles is the most general closed (boundary-less) orientable two-dimensional manifold. But you can also have manifolds with boundaries and unorientable manifolds. Much like the genus \(h\) surface could have been obtained by adding handles to the sphere, the most general Riemann surface is obtained by adding \(b\) circular boundaries to the sphere (with handles etc.) and \(c\) circular crosscaps.

A circular boundary is simply created by cutting a disk: the newborn boundary is circular. A crosscap is the same thing except that we identify the opposite points of the newly created circular boundary. This is indicated by an "X" (therefore "cross" in the name) in the circle (the two lines in the "X" indicate two such identifications of points but we must identify them in all directions). And the impact of such an identification is that the resulting manifold remains closed (boundary-less) because the boundaries were turned into teleportation gates into a new portion of the manifold; however, as you can check by sending a letter to the gate, the orientation is flipped so whenever \(c\) is positive, the resulting manifold is unorientable.

The disk itself may be thought of as a sphere with one hole/boundary (cut the Northern Hemisphere from the surface of the globe, a disk-shaped hole, and another disk, the Southern Hemisphere, will remain there), i.e. \[

(h,b,c) = (0,1,0)

\] and its Euler characteristic must be \(1\) because two disks may be interpreted as hemispheres and connected into a \(\chi=2\) sphere via their shared \(\chi=0\) circular equator. Relatively to the sphere, the Euler characteristic decreased by one so the formula must be\[

\chi = 2 - 2h - b - \dots

\] if \(c=0\). Similarly, the manifold with \[

(h,b,c) = (0,0,1)

\] is the so-called projective sphere, \(S^2/\ZZ_2\), which may be obtained by identifying the opposite points on a two-sphere. Imagine that the antipodeans are actually identical to you and your kin. This identification changes the maps of countries to their "mirrors" (the longitude is just shifted by 180 degrees, a rotation, but the latitude is turned upside-down which is the source of the mirroring) so the resulting quotient of the sphere is unorientable. Again, the manifold looks like one-half of a sphere, up to possible subtleties at an equator which has \(\chi=0\) anyway, so the full formula for the Euler characteristic has to be\[

\chi = 2 - 2h - b - c.

\] From the Euler characteristic of the sphere, \(\chi=2\), we subtract twice the number of handles, once the number of boundaries, once the number of crosscaps, and that's it. (The fact that the coefficient in front of \(h\) is twice as large as the coefficient in front of \(b\) is related to the fact that the closed string coupling constant in string theory is the square of the open string coupling constant.) Let me mention some simple and well-known surfaces. (All these surfaces appear as "Feynman diagrams" i.e. histories of merging and splitting strings in string theory. Closed strings are always there; if they're unorientable, we must also sum over unorientable surfaces. If we also include open strings, we must add open surfaces.)
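For convenience, here is the final formula packed into a tiny Python function (my sketch), evaluated for the surfaces discussed in this post:

```python
def euler_char(h, b, c):
    """chi = 2 - 2*(handles) - (boundaries) - (crosscaps)."""
    return 2 - 2 * h - b - c

surfaces = {
    "sphere":            (0, 0, 0),  # chi = 2
    "torus":             (1, 0, 0),  # chi = 0
    "disk":              (0, 1, 0),  # chi = 1
    "cylinder":          (0, 2, 0),  # chi = 0
    "projective sphere": (0, 0, 1),  # chi = 1
    "Klein bottle":      (0, 0, 2),  # chi = 0
    "Moebius strip":     (0, 1, 1),  # chi = 0
}

for name, (h, b, c) in surfaces.items():
    print(f"{name}: chi = {euler_char(h, b, c)}")
```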



Examples of unorientable surfaces

The kids' most favorite unorientable surface is the Möbius strip. You take a strip and glue it to itself after you rotate a short interval at the end by 180 degrees. If a letter "p" swims around such a (transparent) Möbius strip, it becomes a "q", its mirror image. The Möbius strip has a single circular boundary because the potential two original boundaries are twisted into one another. So it must be that \(b=1\). However, it is unorientable so \(c\) must be positive, too. Well, we have \(c=1\). There are no handles. We have \[

\chi = 2 - 1 - 1 = 0.

\] Much like the torus, the Möbius strip has a vanishing Euler characteristic. Surfaces with a vanishing Euler characteristic may be visualized as pieces of a flat space with simple identifications. For a torus, it boils down to torus' being a rectangle with the identification of opposite edges. A cylinder is a torus divided by a \(\ZZ_2\) which acts as a simple left-right reflection with respect to the vertical axis. On the torus, this action has two lines of fixed points which become the cylinder's boundaries. A Klein bottle is another \(\ZZ_2\) "orbifold" of the torus in which the \(\ZZ_2\) transformation combines the same left-right reflection we had for the cylinder and a translation by one-half of the torus periodicity in the vertical direction so that there are no fixed points.

As the previous sentence leaked, there exists a simple boundary-less, closed unorientable surface i.e. closed cousin of the Möbius strip called the Klein bottle with \((h,b,c)=(0,0,2)\). Melt a bottle of Pilsner Urquell so that it may be easily deformed. Cut a small disk at the bottom. Now drag the neck at the top, bend it down, and attach it to the hole at the bottom. (You must send the neck along a small detour in a fourth dimension of space to avoid a collision i.e. to deal with the tiny technical problem that glass [the travelling neck] can't penetrate another piece of glass [the body of the bottle].)



Click the picture to learn how to buy this eine kleine Kleinsche Flasche for $35. In the middle of the product, they probably cheated and made the glass self-intersect so you should get a $35 refund.

As emerging (and adult) string theorists know, the Möbius strip with \((h,b,c)=(0,1,1)\) is really the geometric average between the Klein bottle and the cylinder (the latter is just a non-Möbius strip, i.e. just a sphere with two boundaries, \((h,b,c)=(0,2,0)\)). This fact is behind the fact that \(SO(32)\) type I string theory is free of anomalies.

Aside from the sphere, the torus, the disk, the cylinder, the projective sphere, the Klein bottle, and the Möbius strip, all the other Riemann surfaces are just "some other more complicated manifolds" with various values of \((h,b,c)\).

Euler characteristic as an integral of Euler density

The Euler characteristic may also be calculated as an integral of the Euler density which is a function of the curvature.\[

E_{2n} = \frac{1}{2^n} R_{i_1 j_1 k_1 l_1} \dots R_{i_n j_n k_n l_n} \epsilon^{i_1 j_1 \dots i_n j_n} \epsilon^{k_1 l_1 \dots k_n l_n}.

\] I suspect some powers of \(1/\pi\) should be added as well but roughly speaking,\[

\chi = \int \dd^{2n} x\sqrt{\abs{g}} \, E_{2n}.

\] Note that each epsilon symbol in the formula for the Euler density carries \(2n\) indices; \(2n\) is the dimension of the manifold. These indices are contracted against those of the Riemann tensors, each of which carries four indices, and thus we have to include \(n\) copies of the Riemann tensor. The formula only works in the even dimensions. I will later explain what's simple or boring about the Euler characteristic in odd dimensions.

The equivalence of the Euler-density-based formula with the vertices-edges-faces formula may be established by noticing that the Euler-density formula is topologically invariant, too. That's because it's locally a "total derivative". And then one may verify that they produce the same thing if we add handles etc. Well, there are probably better ways to derive the equivalence of the formulae.

In four dimensions, the Euler density coincides with the Gauss-Bonnet term and – much like other topological invariants – it doesn't affect the equations of motion (because the equations of motion are derived from infinitesimal variations of the fields but the topological invariant terms in the action don't vary in such a situation). In higher dimensions, however, the same formula quadratic in the curvature tensor is no longer topologically invariant and matters.

An even greater simplification occurs in two dimensions. We have said that the Riemann surfaces are specified by \((h,b,c)\). For surfaces with boundaries, we would have to add an integral of an extrinsic-curvature term over the boundary to get the right topologically invariant Euler characteristic (otherwise the bulk integral alone is not even topologically invariant). In two dimensions, all the components of the Riemann tensor are proportional to the Ricci scalar, so the Euler characteristic of a closed surface is just proportional to the integral of \(R\) over the manifold (with the proper volume measure).


That's great because we see that it vanishes for a torus, which may be given a flat metric: a torus is a flat rectangle with the upper-and-lower edges and the left-and-right edges identified. Also, for a sphere, \(R=2/a^2\) where \(a\) is the radius of the sphere, so the integral over the sphere's surface of area \(4\pi a^2\) nicely cancels the \(a^2\) and yields \(8\pi\); after dividing by the normalization factor \(4\pi\), we get the right \(\chi=2\).
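The sphere computation can also be verified numerically; a small sketch assuming the two-dimensional normalization \(\chi = \frac{1}{4\pi}\int R\,\dd A\) used in the text:

```python
import math

# Numerically evaluate chi = (1/4pi) * integral of R over a round sphere.
# For radius a, R = 2/a^2 and dA = a^2 sin(theta) dtheta dphi.
def sphere_chi(a, n=2000):
    R = 2.0 / a ** 2
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta          # midpoint rule in theta
        total += R * a ** 2 * math.sin(theta) * dtheta * 2.0 * math.pi
    return total / (4.0 * math.pi)

print(sphere_chi(1.0), sphere_chi(5.0))     # both very close to 2.0
```

The answer is independent of the radius \(a\), as it must be for a topological invariant.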



A cone may be made from a piece of flat paper in which the angular wedge with the internal angle \(\delta\), the deficit angle, is cut off.

How \(\chi\) may be computed just from vertices' deficit angles

The Euler density formula for the Euler characteristic may also be applied to polyhedra or polytopes. Let's focus on ordinary two-dimensional surfaces of three-dimensional bodies in the space we know, such as the Platonic polyhedra, although the method is easily generalized to any dimension. If we make all the polyhedra's faces flat and all their edges straight, and we often do, it's clear that the curvature tensor vanishes in the interior of all the faces – and even along the edges, because the two adjacent faces of a paper polyhedron may be obtained by bending ordinary flat paper (remember how you glued them when you were a kid), so the intrinsic curvature has to vanish along the edges, too.

However, the vicinity of a vertex can't be obtained from flat paper by innocent bending (without cutting and gluing). So the curvature tensor is actually nonzero at the vertices – it's proportional to a Dirac \(\delta\)-function. Now, we said that the Euler density in two dimensions can be written as \(R/4\pi\), a multiple of the Ricci scalar. By "regulating" a vertex and thinking about directions in three dimensions and solid angles, one easily sees that the integral of \(R/4\pi\) over a small neighborhood of the vertex equals \(\Delta\phi/2\pi\), where \(\Delta \phi\) is the deficit angle (taken positive if it is a deficit angle and negative if it is an excess angle).



What is the deficit angle? That's another thing that schoolkids know. For example, if they want to build a cube out of paper, they place three adjacent squares around each vertex; the missing wedge gets closed once the paper is bent and glued. Around each vertex, the wedge that had to be cut away from the flat paper has the internal angle \(360 - 270 = 90\) degrees: three \(90\)-degree angles of the faces meet at the vertex, out of the full \(360\) degrees available in the plane. That \(90\)-degree gap is the deficit angle.

So for a cube, the total deficit angle is 8 (number of vertices) times \(\pi/2\) which is \(4\pi\), and after dividing by \(2\pi\), we indeed get \(\chi=2\) again. You may check that the same thing works for all other polyhedra. For example, the tetrahedron has the total deficit angle \(4\times \pi\), the octahedron has \(6\times 2\pi/3\), the dodecahedron has \(20\times \pi/5 \), and the icosahedron has \(12\times \pi/3\). It's \(4\pi\) giving \(\chi=2\) in all cases.
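These total deficit angles are easy to verify with a few lines of Python; the vertex data below are the standard ones for the five Platonic solids:

```python
import math

# chi = (total deficit angle) / (2*pi) for the five Platonic solids.
# Data: (number of vertices, interior angle of one face, faces meeting per vertex).
solids = {
    "tetrahedron":  (4,  math.pi / 3,     3),
    "cube":         (8,  math.pi / 2,     3),
    "octahedron":   (6,  math.pi / 3,     4),
    "dodecahedron": (20, 3 * math.pi / 5, 3),
    "icosahedron":  (12, math.pi / 3,     5),
}
for name, (V, face_angle, m) in solids.items():
    deficit = 2 * math.pi - m * face_angle  # deficit angle at one vertex
    print(f"{name}: chi = {V * deficit / (2 * math.pi):.6f}")
```

Every line prints \(\chi = 2.000000\), i.e. the total deficit angle is \(4\pi\) in all five cases (Descartes' theorem).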

We see a cute thing here: there are different ways to imagine "where the Euler characteristic is localized" on the manifold. For a polyhedron, we may imagine that the faces, vertices, and edges contribute something. But we may also rearrange the formula so that only the vertices give us their contributions proportional to the deficit angles. For ordinary integrals and path integrals, different ways of "localization" may also be used to prove various "index theorems" but I won't go into this stuff.

The Euler characteristic of a union; and a Cartesian product

Disconnected manifolds may be obtained as a union of their components. What is the Euler characteristic of a disconnected manifold? Well, it's simple. Whether you calculate \(\chi\) from vertices, edges, and faces or from integrals, all the terms are just additive. So we have\[

\chi_{A \cup B} = \chi_A + \chi_B.

\] That was simple. While the "union" may be viewed as an addition of points, there's another way to construct a manifold out of two manifolds \(A,B\): the Cartesian product. The union of two equally dimensional manifolds doesn't change the dimension. The Cartesian product, however, adds the dimensions. For example, if \(A\) is 5-dimensional and \(B\) is 7-dimensional, \(A\times B\) is 12-dimensional.

The formula for the number of \(n\)-dimensional edges of \(A\times B\) is simply\[

\eq{
E_n^{A\times B} &= E_0^A E_n^B + E_1^A E_{n-1}^B + \dots + E_n^A E_0^B \\
&=\sum_{p,q}^{p+q=n} E_p^A E_q^B
}

\] because the \(n\)-dimensional edges of \(A\times B\) are simply the Cartesian products of some edges of \(A\) and some edges of \(B\) whose dimensions add up to \(n\). Sorry, here \(E_n\) no longer means the Euler density, it is a different thing and I have some clashes in the notation. ;-)

If you sum \(E^{A\times B}_n(-1)^n\) to obtain the Euler characteristic, you will easily see that because \(E^{A\times B}_n\) is itself a sum of products, the Euler characteristic of the Cartesian product factorizes as well and we have\[

\chi_{A\times B} = \chi_A\cdot \chi_B.

\] The Euler characteristic is simply multiplicative under the Cartesian product of manifolds. It looks simple and natural. We could also derive this property from the integrals of the Euler densities. The Riemann tensor would be "block-diagonal" and the Euler density of the Cartesian product also factorizes into a product of simpler Euler densities of the two factors. So it would work, too.
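Both rules – additivity under unions and multiplicativity under products – are easy to check on cell complexes. A sketch using my own toy encoding, where a complex is represented just by its list of cell counts \([E_0, E_1, \dots]\):

```python
# chi from cell counts; the product complex's counts are the convolution
# of the factors' counts, so chi is multiplicative (and additive for unions).
def chi(cells):
    return sum((-1) ** n * e for n, e in enumerate(cells))

def product(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for p, ep in enumerate(a):
        for q, eq in enumerate(b):
            out[p + q] += ep * eq   # E_n = sum over p+q=n of E_p * E_q
    return out

circle = [2, 2]     # two vertices and two arcs: chi = 0
interval = [2, 1]   # two endpoints and one segment: chi = 1
torus = product(circle, circle)                   # [4, 8, 4]
print(chi(torus), chi(circle) * chi(circle))      # 0 0
print(chi(product(circle, interval)), 0 * 1)      # 0 0
```

The torus appears here literally as the Cartesian product of two circles, with \(\chi = 0\cdot 0 = 0\).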

Why \(\chi\) is the naturally regulated "number of points in a manifold"

We have seen that \(\chi\) of the union/sum of two manifolds is simply the sum of the pieces' \(\chi\)'s; and \(\chi\) of the Cartesian product is the product of the factors' \(\chi\)'s. Is there another quantity with the same properties? Yes, it's the number of points. If all our manifolds are just finite sets of points, the same conditions hold. The union just adds the numbers of points, and the Cartesian product of finite sets has a number of elements which is the product, too.

For continuous manifolds, it's more subtle because the number of points is always "infinite". However, if you know that the actual physical value of the number of points has to be finite because it appears in a finite formula, you may always extract the "finite part" of the number of points. And it's nothing else than the Euler characteristic.

Because it has the right properties, you may simply define \(\chi\) to be the "regulated number of points" in a manifold. It's a regularization in the same sense that allows you to calculate identities such as\[

1+2+3+4+5+\dots = -\frac{1}{12}.

\] The knowledge that \(\chi\) is the number of points has nice interpretations. For example, consider the path integral over all scalar functions \(L(x)\) where \(x\) is a point on the manifold \(A\) and the ordinary integral \(\int \dd L\) would have some units, e.g. the units of length. What are the units of the path integral \[

\int_{x\in A} {\mathcal D}L(x)

\] over all length-valued functions defined on the manifold \(A\)? The comments we made a minute ago answer the question: the natural units of this path integral are \({\rm Length}^{\chi(A)}\). Well, be careful about this simplified result: there may be other, quantum i.e. "anomalous" contributions to the dimensions of various things in quantum field theories.
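The regularization behind \(1+2+3+\dots=-1/12\) can even be watched numerically. A sketch assuming the heat-kernel (exponential) regulator, for which \(\sum_n n\,e^{-\epsilon n} = 1/\epsilon^2 - 1/12 + O(\epsilon^2)\), so subtracting the divergent \(1/\epsilon^2\) piece leaves \(-1/12\):

```python
import math

# Regulate 1+2+3+... with a damping factor exp(-eps*n); the finite part
# left after subtracting the divergent 1/eps^2 approaches -1/12.
def finite_part(eps, terms=100000):
    s = sum(n * math.exp(-eps * n) for n in range(1, terms))
    return s - 1.0 / eps ** 2

for eps in (0.2, 0.1, 0.02):
    print(eps, finite_part(eps))    # tends to -1/12 = -0.08333...
```

The smaller the regulator \(\epsilon\), the closer the finite part gets to \(-1/12\); the answer is regulator-independent in the same sense as the zeta-function value.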

The Euler characteristic in terms of Betti and Hodge numbers: homology and cohomology

When we were discussing the calculation of \(\chi\) using the vertices, edges, and faces, you may have wanted to simplify the calculation by covering the surface (or manifold) by the minimal possible number of faces. If you complete this program "really completely", you get another formula for \(\chi\),\[

\chi = \sum_{k=0}^n (-1)^k b_k.

\] It's the same alternating-sum formula that previously involved the numbers of \(n\)-dimensional edges \(E_n\), but now another number appears in place of \(E_n\), namely the "Betti numbers" \(b_k\). They may be interpreted as the "truly minimized" numbers of edges when you try to make the covering as simple as possible. But you can't take this definition literally, although it does explain why the alternating-sum formula has the same form as the formula with the edges.

Instead, it is better to give an independent definition here. The Betti numbers are the dimensions (well, ranks) of homologies\[

b_k = {\rm dim} \, H_k(A)

\] where homologies count all topologically inequivalent and independent closed (boundary-less) submanifolds of \(A\) that are not boundaries of another manifold. This sounds complicated and the idea is something we're not trained to understand in the kindergarten. However, it's an idea that is omnipresent in mathematics and very beautiful.

We may say that the "homology" of a manifold is the "cohomology of the boundary operator". The boundary operator \(\partial\) is something that maps a submanifold to its boundary. And "cohomology" is a quotient,\[

{\rm cohomology} (\partial) = \frac{{\rm closed}(\partial)}{{\rm exact}(\partial)}

where the set (linear space, in fact!) of "closed objects" in the numerator is the set of all objects obeying \(\partial T=0\) while the "exact objects" in the denominator obey \(\exists L:\,T=\partial L\). Because \(\partial^2=0\), i.e. \(\forall L:\,\partial(\partial(L))=0\) – for example, the boundary of a boundary is empty because any boundary is automatically boundary-less (we say that the boundary operator is "nilpotent") – the denominator is a subset of the numerator, and since they're linear spaces, we may define the quotient. Even though both the numerator and the denominator are typically infinite-dimensional spaces, the quotient tends to be finite-dimensional: in an appropriate basis, "almost all" closed basis vectors are exact at the same moment and there are only finitely many exceptions (the exceptions are linked to various "global obstructions" caused by the nontrivial topology of the manifold).
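The "closed modulo exact" recipe can be made concrete with a toy boundary matrix. A sketch assuming the simplicial description of a circle as the boundary of a triangle (three vertices, three oriented edges, no 2-cells); the hand-rolled `rank` helper over exact fractions is my own:

```python
from fractions import Fraction

# Betti numbers from boundary matrices: b_k = dim ker(d_k) - rank(d_{k+1}).
def rank(matrix):
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# d1 maps edges to vertices: the edge v_i -> v_j has boundary v_j - v_i.
# Circle as a triangle: edges e0: v0->v1, e1: v1->v2, e2: v2->v0.
d1 = [[-1,  0,  1],     # rows = vertices, columns = edges
      [ 1, -1,  0],
      [ 0,  1, -1]]
r1 = rank(d1)           # = 2
b0 = 3 - r1             # 3 vertices, all closed; subtract the exact ones -> 1
b1 = (3 - r1) - 0       # dim ker(d1) minus rank(d2) (there are no 2-cells) -> 1
print(b0, b1)           # 1 1, so chi = b0 - b1 = 0 = V - E = 3 - 3
```

One connected component (\(b_0=1\)) and one noncontractible loop (\(b_1=1\)), exactly as the circle should have.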



While "homology" counts topologically inequivalent noncontractible submanifolds of a sort (for example, the torus has 2 independent loops that may be wound around, namely the two red circles from the picture above of which the torus is the Cartesian product, and therefore \(b_1=2\)), there is an equivalent method to obtain the Betti numbers \(b_k\), namely as the dimensions of the cohomology. Cohomology's elements are not submanifolds. Instead, the elements are differential \(k\)-forms, i.e. completely antisymmetric tensors with \(k\) indices.

Because \(k\)-forms are "linear forms" acting on \(k\)-cycles (\(k\)-dimensional submanifolds) where the action of the linear form is given by a simple \(k\)-dimensional integral with the natural measure, cohomology and homology are "dual" to each other in the sense of linear algebra (vectors and linear forms on vectors). That's why their dimensions ultimately coincide.

Also, one may Hodge-dualize the differential forms (and choose the by-Laplacian-annihilated representatives of the cohomologies, and this condition is unaffected by the Hodge dualization) which is why for well-behaved manifolds, we also have\[

b_k = b_{n-k}.

\] The list of Betti numbers is left-right-symmetric! For example, the K3 manifold has \[

(b_0,b_1,b_2,b_3,b_4) = (1,0,22,0,1).

\] Because it's even-dimensional, the alternating sum may be (and is) nonzero, namely \(\chi=24\) in the case of the K3 surface. But for an odd-dimensional manifold, the Betti numbers would look like \((3,17,17,3)\) and \(\chi\) would vanish because each positive term has a compensating negative term. That's why the Euler characteristic of any odd-dimensional closed manifold is actually zero. This fact also explains why we were only able to write down Euler densities in even dimensions: in odd dimensions, the would-be density would have to integrate to zero anyway, so you may just define the odd-dimensional Euler densities to vanish by themselves.
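Both statements – the K3 value and the odd-dimensional cancellation – amount to one-liners; a small check using the Betti numbers quoted above:

```python
# chi as the alternating sum of Betti numbers.
def chi_from_betti(betti):
    return sum((-1) ** k * b for k, b in enumerate(betti))

print(chi_from_betti([1, 0, 22, 0, 1]))   # K3 surface: 24
print(chi_from_betti([3, 17, 17, 3]))     # odd-dimensional, symmetric list: 0
```

Any left-right-symmetric list of even length produces zero, which is exactly the odd-dimensional case.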

Examples of higher-dimensional manifolds and their \(\chi\)

A point, the only connected zero-dimensional manifold, has \(\chi=1\), as I implicitly said when I described \(\chi\) as the "regulated number of points". We discussed the Euler characteristic of two-dimensional manifolds in detail (holes, boundaries, crosscaps, and all this stuff). Odd-dimensional closed manifolds have \(\chi=0\), as we just said. So the next interesting case is that of 4-real-dimensional manifolds. I have already mentioned that the K3 surface has \(\chi=24\). The four-torus, much like all tori, has \(\chi=0\) because it admits a flat metric.

Complex manifolds allow us to split the \(k\)-forms discussed above into \((p,q)\)-forms that carry \(p\) holomorphic indices and \(q\) complex conjugate, antiholomorphic indices. We have \(k=p+q\). Consequently, the cohomologies may be divided into several pieces, and so may their dimensions, the Betti numbers, which may be written as sums of Hodge numbers:\[

b_k = \sum_{p,q}^{p+q=k} h_{p,q}.

\] For the most interesting six-real-dimensional manifolds, the Calabi-Yau three-folds, the formula for the Euler characteristic reduces to\[

\chi=\sum_{p,q}(-1)^{p+q} h_{p,q} = 2(h_{1,1}-h_{1,2}).

\] All other Hodge numbers except for \(h_{1,1}\) and \(h_{1,2}\) are universal, either one or zero, for any Calabi-Yau three-fold. Incidentally, the K3 surface discussed above is also a "complex manifold" and the only decomposition of a Betti number to Hodge numbers that needs an explanation is\[

22 = b_2 = h_{0,2}+h_{1,1}+h_{2,0} = 1+20+1.

\] A recent article discussed the Hodge charts, i.e. the Hodge numbers \(h_{1,1}\) and \(h_{1,2}\) of (almost?) all the known Calabi-Yau manifolds. The Euler characteristic goes up to \(\pm 960\) or so. And the mirror symmetry exchanges \(h_{1,1}\) with \(h_{1,2}\) (the numbers of two-dimensional and three-dimensional "cycles" get interchanged), which implies that it relates manifolds whose \(\chi\)'s are equal to each other up to a sign. In my heuristic terminology above, the mirror symmetry therefore relates pairs of manifolds whose numbers of points are opposite to each other. ;-)
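For instance, a sketch assuming the well-known Hodge numbers \((h_{1,1},h_{1,2})=(1,101)\) of the quintic hypersurface in \(\mathbb{CP}^4\):

```python
# chi of a Calabi-Yau three-fold from its two nontrivial Hodge numbers,
# chi = 2*(h11 - h12), as in the formula above.
def cy3_chi(h11, h12):
    return 2 * (h11 - h12)

print(cy3_chi(1, 101))   # quintic: -200
print(cy3_chi(101, 1))   # its mirror: +200
```

Swapping the two arguments is exactly the mirror map on this formula: it flips the sign of \(\chi\).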

I have already written enough stuff and even the most patient readers have grown bored so let me stop.

That's not the memo, just an expression of a relative exhaustion. :-)