Thursday 28 February 2013

Superconductive Leon Cooper: birthday

Sheldon Cooper is named after two people: physicist Leon Cooper and – with apologies to Sheldon Glashow – the film actor and producer Sheldon Leonard.

Today, on the last day of February 2013 and the last day of a 700-year-long popeful period, the first man on the list celebrates his 83rd birthday. Congratulations to Leon Cooper! He is of course most famous for the BCS theory of superconductivity – the C stands for him – for which he shared the 1972 Nobel Prize in Physics with Bardeen and Schrieffer.




While it's a well-known fact that CBS named Sheldon Cooper partly after Leon Cooper, it is a much less well-known fact that CBS itself stands for Cooper-Bardeen-Schrieffer, the three men who authored the modern theory of superconductivity and shared the Nobel prize that I mentioned above.

Leon Cooper was born in New York City.

During his career as a student or researcher, he was affiliated with Columbia, the IAS, the University of Illinois, Ohio State University, Brown, and CERN. But perhaps the most interesting school is the one at the beginning, the Bronx High School of Science. This is a true gem among the U.S. magnet high schools for the physical sciences. Even if we restrict ourselves to Nobel prize winners, the alumni include Cooper, Glashow, Weinberg, Melvin Schwartz, Hulse, Politzer, Glauber, and Lefkowitz (for chemistry). Not even my high school in Pilsen has collected this many Nobel prizes. ;-)




The three men, BCS, teamed up to write a Physical Review article on superconductivity in 1957. However, I chose Cooper's own 1959 paper, Theory of Superconductivity, as something you may want to read.

Superconductivity means that the electrical resistance of a material drops exactly to zero. Make no mistake about it: It's not just small. It's \(0.00000\). Superconductors qualitatively (and not just quantitatively) differ from ordinary conductors whose resistance may be small but is decidedly positive. (The very fact that materials and other objects may differ "qualitatively" at all depends on quantum mechanics: properties of things are often quantized.) Superconductivity was first observed by a Dutch physicist, Heike Kamerlingh Onnes, already in 1911. However, it took quite some time to understand where it came from.

In 1950, Landau and Ginzburg designed their phenomenological or "macroscopic" Landau-Ginzburg theory of superconductivity. The free energy is given by a quartic function of \(\psi\), a complex (and charged) order parameter, plus other terms expected in interactions with electromagnetism. The quartic function indeed resembles the potential for a Higgs field. The discoverers of the Higgs mechanism were undoubtedly inspired by the Mexican hat potential that should have been called the Landau buttocks potential – it was actually invented by Landau long before the war in his description of second-order phase transitions. The Landau-Ginzburg theory combined that theory of second-order phase transitions with ideas about Schrödinger's equation. Well, I often emphasize that the wave function isn't a classical field, but if there are many particles exactly in the same state, their "common" one-particle wave function does become a classical field, and the field \(\psi\) in the Landau-Ginzburg theory is an example.
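
For concreteness, the Landau-Ginzburg free energy density has the schematic form (I am writing one common convention in Gaussian units; signs and normalizations differ among textbooks, so take this only as a reminder of the structure):

\[ f = \alpha(T)\,|\psi|^2 + \frac{\beta}{2}\,|\psi|^4 + \frac{1}{2m^*}\left|\left(-i\hbar\nabla - \frac{e^*}{c}\vec A\right)\psi\right|^2 + \frac{|\vec B|^2}{8\pi}, \]

where \(e^*\) (which BCS later identified as \(2e\)) and \(m^*\) are the effective charge and mass of the carriers. Above the critical temperature, \(\alpha(T)>0\) and the minimum sits at \(\psi=0\); below it, \(\alpha(T)<0\) and the quartic "Mexican hat" (or Landau buttocks) pushes the minimum to \(|\psi|^2=-\alpha/\beta\), exactly the structure later recycled for the Higgs field.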

In a certain sense, superconductivity allows you to observe "quantum" phenomena at macroscopic scales. This was how my electromagnetism instructor and the dean of my Alma Mater in Prague, Prof Bedřich Sedlák, explained why he was so excited by his discipline of low-energy physics. However, the description is a bit misleading because once \(\psi\) loses its probabilistic interpretation and becomes a classical field, it's just another classical theory of physics even though it mathematically copies Schrödinger's equation.

The 1957 BCS theory of superconductivity came a few years later and it was a microscopic one. It was suddenly recognized that the complex field \(\psi\) was an effective field creating or annihilating particular particles, the Cooper pairs composed of two electrons. The Cooper pairs' charge is therefore equal to \(-2|e|\).

Why do the Cooper pairs exist at all? That was understood by Cooper himself in 1959. Just like there is an attractive electrostatic interaction between protons and electrons in the Hydrogen atom (opposite charges attract, a feature of forces with spin-1 messengers such as photons), there is an attractive interaction between pairs of electrons mediated by spin-0 phonons (because 0 is an even spin, the rule for "like/opposite charges attract/repel" gets reversed relative to what you know from the spin-1 electrostatic force).
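
A cartoon of the field-theoretic sign rule, just to make the even/odd spin statement concrete (this is a standard textbook exercise in natural units, not anything specific to the BCS papers; the real phonon-mediated attraction in a metal is more subtle and only operates for electrons near the Fermi surface):

\[ V_{\rm spin\,0}(r) = -\frac{g_1 g_2}{4\pi}\,\frac{e^{-mr}}{r}, \qquad V_{\rm spin\,1}(r) = +\frac{q_1 q_2}{4\pi}\,\frac{e^{-mr}}{r}, \]

so the exchange of a scalar of mass \(m\) attracts two particles with like-sign couplings \(g_1,g_2\) while the exchange of a vector makes like charges \(q_1,q_2\) repel – which is why two electrons may attract each other by exchanging phonons despite their electrostatic repulsion.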

So some free electrons inside the superconductors decide to combine into pairs that resemble other bound states such as the Hydrogen atoms. There are differences in the numerical parameters. While the binding energy of the Hydrogen atom is \(-13.6\,{\rm eV}\), it is just millielectronvolts (the precise value depends on the material) in the case of the phonon-induced bound states. You see that the binding energy is thousands of times smaller than in the case of the Hydrogen atom. For this reason, the internal size of the Cooper pair – its radius – is also much larger than the size of atoms. And yes, because the electrostatic energy goes like \(1/r\) and, due to the virial theorem, represents a fixed portion of the total energy, the radius is roughly speaking inversely proportional to the binding energy. It follows that the Cooper pairs are thousands of times larger than an atom!
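
To see the scaling in a back-of-the-envelope way (a rough dimensional estimate, not the actual BCS calculation): for a Coulomb-like bound state of radius \(a\), the virial theorem ties the binding energy to the potential energy, so

\[ E_{\rm bind} \sim \frac{e^2}{8\pi\epsilon_0 a} \quad\Rightarrow\quad a \propto \frac{1}{E_{\rm bind}}, \qquad \frac{a_{\rm pair}}{a_{\rm Bohr}} \sim \frac{13.6\,{\rm eV}}{{\rm a\ few\ meV}} \sim {\rm thousands}. \]

The honest BCS answer quotes the coherence length \(\xi_0\sim \hbar v_F/\Delta\), where \(\Delta\) is the gap, but the order of magnitude – a good fraction of a micron in conventional superconductors – comes out consistent with the estimate above.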

Note that the Cooper pairs contain even numbers of fermions (electrons) so they're bosons, they may form Bose-Einstein condensates of various sorts, and this is really the basic reason behind superconductivity (as well as superfluidity). You may squeeze as many bosons into a region as you want. This seems bizarre: if we just reinterpret a group of electrons (fermions) as a collection of pairs of electrons (bosons), we may apparently circumvent the Pauli exclusion principle. However, there's no paradox here. Because the Cooper pairs are so large relative to the atom (the typical volume that an electron likes to occupy), the condensation of the Cooper pairs won't allow us to compress too many electrons into a volume, anyway.

If we try to squeeze these electrons too much and pump many, many millielectronvolts into the Cooper pairs, we just break them apart. That's the analogue of the ionization of atoms.

The Landau-Ginzburg and BCS theories of superconductivity are cornerstones of condensed matter physics, of course. They're also among the key examples of the ideological proximity of concepts in condensed matter physics and ideas in particle physics. As I said, the LG theory helped the Higgs mechanism to be discovered. In fact, due to this analogy, some people refer to electroweak symmetry breaking as "electroweak superconductivity". The LG models are often used as defining equations for certain conformal field theories in 2 dimensions (that are also used in string theory). And there are many other friendly relationships. One could say that particle physics has owed a lot to condensed matter physics. To some extent, it began to repay the debt once the AdS/CFT correspondence was discovered.

I have mentioned Prof Sedlák's meme that superconductivity shows quantum phenomena at the macroscopic scales. The classical field inspired by the Cooper pair's wave function may spread over a whole macroscopic superconductor. However, we see some "macroscopic quantum mechanics", and in this case a genuine one, even if we focus on the structure of the Cooper pairs.

Some people who don't quite think about the world in the quantum way often imagine that the quantum uncertainty of the position of an electron can only be as large as an atom or a molecule, or even smaller. They believe that there is some unimportant, strange, localized subtlety (that's how they would describe quantum mechanics) that affects atomic distances but surely can't influence longer distances, where their classical intuition must always be allowed to take over.

However, the Cooper pairs show that this belief is completely wrong. The internal structure of a Cooper pair is described by a wave function similar to the Hydrogen atom wave function – it's just thousands of times larger (when it comes to linear distances). It totally matters what this internal wave function of a Cooper pair looks like at distances close to a micron. If you decided to suppress the rights of quantum mechanics to dictate the physics of all the length scales, time scales, and energy scales, you would be instantly led to predictions that violate the observations of superconductivity.

Let me mention a highly relevant example. Some people such as Ghirardi, Rimini, and Weber and their mindless followers believe that the wave function may be interpreted as a classical wave, in a "realist" fashion. However, wave functions keep on spreading, which isn't nice for the world around us that isn't spreading. So they solve this problem – a problem that directly follows from the denial of the probabilistic meaning of the wave function (in proper, probabilistic QM, there's no problem: probability distributions may spread arbitrarily as the uncertainty in our knowledge grows but they still describe sharp objects and sharply separated, mutually exclusive options) – by certain centralist "kicks". Every \(10^{15}\) seconds or so, but otherwise randomly, each particle is given an "order" from the GRW headquarters of Nature to "behave". This order means that the wave function for the location of the particle is requested to be spread over at most 0.1 microns or so. Each electron is asked to "shrink" to a Gaussian of this size every \(10^{15}\) seconds; its center is probabilistically chosen from the prior probability distribution. (All the numbers and the shape of the hypothetical post-collapse packet are completely made up, of course.) With this bizarre mechanism in their hands, GRW and others believe that Nature sometimes measures Herself, so no one else has to do so: there's no room for a subjective or probabilistic or epistemic interpretation of the wave function, they want to believe.
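
For the record, the GRW "kick" acts on a chosen particle's wave function roughly as multiplication by a Gaussian of the postulated width \(r_C\approx 0.1\,\mu{\rm m}\) (I am only restating the standard prescription here, with the rate left as a symbol \(\lambda\) because the exact made-up numbers don't matter for the argument):

\[ \psi(\vec x)\ \to\ \frac{1}{\mathcal N}\,\exp\left(-\frac{(\vec x-\vec x_0)^2}{4 r_C^2}\right)\psi(\vec x), \]

where the center \(\vec x_0\) is drawn with a probability density proportional to the squared norm of the resulting state and \(\mathcal N\) restores the normalization; for a bound object made of \(N\) particles, the effective localization rate is amplified to roughly \(N\lambda\).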

Because the Cooper pairs are typically somewhat larger than that, perhaps between 0.1 and 1 micron, such an "ordered collapse" sent to an electron simply destroys the internal structure of a Cooper pair. Needless to say, if such deviations from proper quantum mechanics existed, they would have observable consequences. Lots of electrons shooting out of the Cooper pairs freshly destroyed by the GRW "kicks" would be observed every second. The superconductors would lose their perfect conductivity, too. Needless to say, none of these effects exists.

It is critically important that your theory allows the wave function to spread over arbitrarily long distances – not only atomic distances but also 5,000 times longer distances, and one could construct systems where quantum mechanics shows up at even longer distances (the Cooper pairs are so large because the phonon-electron interactions are rather weak; one could arguably find systems with even weaker interactions). And because it's so critically important for your theory not to prevent the wave functions from spreading into huge regions that we start to see with our naked eyes, it's also obvious that the wave function must be interpreted probabilistically, because no wave (wave function) that we may objectively observe is spreading over micron-like distances.

Superconductivity is just another example of the slogan "Don't mess with quantum mechanics". Any attempt to squeeze quantum mechanics into a straitjacket defined by some special conditions (e.g. very short, atomic or subatomic distances) – instead of admitting that quantum mechanics, including its features that so many people find counter-intuitive, governs all phenomena in the world, regardless of the scale – will inevitably lead to conflicts with experiments.

And that's the memo.

Czech president's alleged high treason: a childish yet harmful game

Even followers of stations such as Fox News could learn about a story that makes my homeland look like a banana republic:
Czech president could face high treason charges for his controversial amnesty
Czech president Václav Klaus will leave his job in 9 days – after 10 years which he spent at Prague Castle bringing inspiration and ideas to everyone and impressively defending common sense, conservative values, and Czech national interests – and the Club of Czech Klaus haters has prepared a nice gift for the economist: a trial to investigate whether Klaus committed high treason by declaring a partial amnesty in his New Year Address. And perhaps by other things that Klaus' mindless, euronaive critics have considered politically incorrect for quite some time.

Needless to say, the accusation is completely absurd and the hypothetical "sins" have nothing to do with high treason. Most of the proponents of the trial know that it's absurd and unjustifiable. Nevertheless, they're complete slaves of their infinite hatred and the ends always justify the means in their eyes.




The Czech constitution clearly defines the crime of "great treason" that only the president may commit. I will deliberately use a different phrase; "high treason" (vlastizrada, literally "betrayal of the fatherland") is something that everyone may be guilty of. On the other hand, "great treason" ("velezrada") is an exclusive crime of the president, constitutionally defined as "acts of the president done with the purpose of eliminating the sovereignty or integrity of the republic or its democratic system". If any Czech politician of the last 500 years has worked effectively not to fit this description, it was Václav Klaus. The defense of the sovereignty and integrity of his country and of its democratic system is what he's been working on most of the time.




Much like Czech kings, Austrian emperors, as well as Czechoslovak and Czech presidents since 1918, he had the right to declare an amnesty and he did it once – less frequently than many others. Amnesties took place under every system, with the possible exception of the Nazi protectorate era when it took minutes to execute an inconvenient Czech (is that the period someone considers our example?). But this amnesty was abused by Klaus haters, decorated with all kinds of idiotic conspiracy theories, and a routine yet infrequent event – and amnesties should be routine yet infrequent – became a minefield.

There's no doubt about the right of the president to declare the amnesty, much like his right to delay the approval of various controversial international treaties etc. But the Klaus haters just want to find a way to create problems for him – and it doesn't matter a tiny bit to them whether the procedure to create the problems is justifiable.

A paradoxically important new motor behind these Klaus haters is a guy who is a complete zero in the standard politics of a democratic country as we have known it since 1989 – the billionaire and algorithmic trader Mr Karel Janeček, an almost high school classmate of string theorist Martin Schnabl. He has no clue about politics whatsoever but his hatred towards president Klaus is unlimited. He collected 70,000 signatures and 27 out of 81 Senators to support his crusade claiming that Klaus has committed "great treason". If a majority of the Senate so decides, the Constitutional Court will have to seriously consider whether or not Klaus has committed great treason.

Even if the verdict were Yes, it wouldn't mean much for Klaus. He would lose his $2,500-a-month presidential pension but that's not such a significant amount of money relative to his wealth as well as his expected future income from the CATO Institute, a university, and perhaps other sources. But I think that the damage it would do to the image of the Czech Republic would be significant.

Klaus himself said that this "great treason crusade" is just a "political frolic", if I use a literal translation constructed with the help of a dictionary. Clearly, he doesn't want to suggest he is afraid of these political dwarfs, he doesn't want to produce advertisements for them, he knows that their efforts will almost certainly be unsuccessful, and he realizes that a "guilty" verdict wouldn't be a personal catastrophe for him, anyway.

But the prime minister, the center-right politician and trained plasma physicist Mr Petr Nečas, said that these efforts are petty and their champions should be ashamed. Nečas' ODS party – which was founded by Klaus 20 years ago but has kept some distance from him in the last 10 years – is the only party in the Senate that has a clear opinion about the "great treason trial": it's bullshit. All the other parties' clubs are split. Miloš Zeman, the nominally left-of-center President Elect, is against the lawsuit and considers the term "great treason" exaggerated (while he counts himself as a critic of the amnesty). Zeman's strength in refusing to participate in similar absurd games just because a strange, would-be clever majority plays them is one reason why I voted for him.

Whenever I thought about the political trials of the Nazi era or the early communist era of the 1950s, I couldn't understand how it was possible for people to be that nasty and dishonest, and how those political trials could have accumulated enough support among the citizenry to be defensible at the national level. But when I see how easily people may propose similar witch hunts and how many ordinary people defend this utterly immoral fascist game, my surprise disappears. I have very little doubt that if Mr Karel Janeček were in charge of the Holocaust, he would be able and willing to use an arbitrarily unrelated law to exterminate millions of Jews, too. These people have absolutely no morality – and they love to talk about it as if they were the Messiahs. They're fascist scum.

The communist regime has "framed" itself as the natural latest stage in the old battle of mankind for a better society. As you must know – e.g. from the official name of China – communist totalitarianism calls itself "people's democracy", which is an even better, probably even more democratic, version of democracy. The fact that it is pretty much the opposite of democracy isn't important for the totalitarianism's champions. A lie repeated 100 times "becomes" the truth.

Both Nazism and communism have used some traditional laws and principles to destroy the old and basic values of democracy, freedom, and human dignity. In this respect, the likes of Mr Janeček are doing exactly the same thing. The "great treason" – acts against the existential interests of the country – is the only crime that may be committed by the sitting president according to the constitution and this rule has a very good reason. It protects the head of the state against coups that could be justified by less serious excuses while the protection of the existential interests of the country is ultimately the only "lethally important" thing that the president has to do. And Klaus has undoubtedly protected these interests at every moment of his political career.

But they don't care that it's really them who are the traitors according to these constitutional principles. While they know that the "great treason" accusation is absurd, they realize that its applicability will be decided by real-world people and many real-world people are as corrupt, dishonest, and hateful as the authors of the "great treason crusade" themselves – so they (correctly) realize that their witch hunt has a nonzero chance to succeed even if the accusations are self-evidently absurd. It's enough for these efforts if the questions are decided by sufficiently fanatical and dishonest Klaus haters.

In the same way, they want to invent some contrived explanation of why it might be "great treason" after all. In the Charter of Rights and Freedoms, an amendment to the Czech constitution, there are some comments about courts – the charter guarantees a fair trial to everyone, and so on – so it could perhaps be used to argue that Klaus has worked to undermine the democratic system.

This justification is highly contrived because details about the work of courts are not what democracy is all about. Democracy is about the control of the citizens over the political institutions of the country, not about particular damages paid by a criminal to someone else or something like that. Most laws, bills, and charters in the legal system are simply not about the very existence of democracy so their violation can't be considered an assault against democracy. But even more importantly, the Charter actually says nothing that could be used against Klaus. After all, the very purpose of the Charter of Rights and Freedoms is pretty much exactly the opposite: it is meant to protect accused individuals from a system that could go awry, against mobs that would like to harass or murder an inconvenient individual, against premature verdicts decided without the accused person's proper defense, and so on. It's very clear that the Charter of Rights and Freedoms was pretty much designed exactly to protect folks like Klaus against immoral fanatic mobs such as the assholes currently led by Mr Janeček.

They must know that the Charter is against them much like all other documents that are meant to protect democracy and the sovereignty and integrity of the country. And I am really convinced that they do know. But they hate this arrangement. They hate democracy and the rule of law as we have known it for quite some time. They're just not courageous enough to admit (or they – perhaps rightfully – think that it would be strategically unwise to admit) that democracy and the rule of law are the main principles they really despise.

So they say something else instead: that the purpose of the documents that should protect democracy, the rule of law, the individual freedoms, and the sovereignty of the country is exactly the opposite: to transfer the power to self-appointed mindless mobs with 70,000 heads, to pay no attention to clear laws that give some authorities the right to make certain decisions (e.g. amnesties), to suppress individual rights and freedoms in favor of the rights of mobs who hate certain individuals, and to force politicians to immediately endorse every treaty that reduces the national sovereignty. They don't care that the documents were designed to protect exactly the opposite principles than the principles they find dear.

Mr Janeček and his soulmates are the new Nazis and Stalinists who will never hesitate to harass, prosecute, or kill an individual if they feel some hatred or jealousy. Of course, the latter is the primary driver, after all. All these pseudointellectuals hate the fact that relative to Prof Klaus, they are intellectual dwarfs and constantly whining wee-wees.

These folks are disgusting and I don't have to explain to you how high their concentration is in the Czech Academia and in the "cultural battlefront" of the nation. They may be a minority but they're such a loud minority that they determine the "etiquette" in numerous occupations, especially those considered "intellectual". That's a pity.

Wednesday 27 February 2013

Sir Ranulph Fiennes' frostbite highlights global warming

There have been numerous stories of the same kind (even on this blog) but they apparently never stop coming. Sir Ranulph Fiennes (63) decided to become the first human to cross Antarctica in winter:
Sir Ranulph Fiennes to Attempt First Wintertime Trans-Antarctic Trek
The page above and many others revealed that a key purpose of the event was to promote the global warming panic:
But this trek is not merely another notch in Fiennes’ belt (which, presumably, is comprised entirely of notches). Ironically, the expedition team hopes ‘The Coldest Journey’ will draw attention to global warming — namely, the effect that climate change has wrought upon the polar ice cap. Fiennes additionally intends to raise $10 million for Seeing is Believing, a charity organization that assists the blind.
This motivation has been behind many similar treks so you may be able to guess what the outcome is. ;-)




Yes, the outcome was described in The Telegraph and uncountable other outlets:
Sir Ranulph Fiennes abandons Antarctic crossing after frostbite
During a (Southern-Hemisphere) summer training session for his trek, he made the "small slip" of removing his glove at –30 °C. Previously, he had cut off the fingertips on his hand so that doctors wouldn't have to amputate the fingers.




Unfortunately, such a previous case of frostbite makes one more susceptible and sensitive. So his new frostbite is another problem, perhaps a greater one, and the plans for the "Coldest Journey" have been abandoned.

At any rate, he managed to draw attention to global warming as he planned. Global warming has clearly caused his frostbite. While global warming warms the globe, it brutally cools down the volumes of air surrounding gloves that have just been taken off.

More seriously, I want to assure all heroes and not quite heroes that even if a real ongoing trend or process deserved to be called "global warming", it just cannot have any detectable implications for the old wisdom that the polar regions are damn cold. Global warming – independently of the subtle questions about its causes – may have added 0.15 °C per decade. In two decades, since your last visit to the frozen continent, it could have contributed 0.3 °C to the temperatures over there. But there's still about 50 °C by which Antarctica is colder than what your fingers would find comfortable! When it comes to the experiences of an individual who visits the frozen continent, global warming is at most a 0.01 sigma effect and that's surely considered a non-effect by everyone familiar with statistics. As long as you are mostly rational, the concept of "global warming" just cannot and shouldn't possibly affect what you do with your gloves! ;-)
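
If you want to see the arithmetic behind that order-of-magnitude estimate (using just the rough figures from the previous sentences):

\[ \frac{0.15\,{}^\circ{\rm C/decade}\times 2\ {\rm decades}}{50\,{}^\circ{\rm C}} \approx \frac{0.3}{50} = 0.006 \approx 0.01, \]

i.e. a percent-level shift of the temperature gap that your gloves actually care about.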

Tuesday 26 February 2013

Learning physics is futile without practicing

LHC news: CMS released two papers with 20/fb of the 8 TeV 2012 data. Remarkably enough, both the dijet paper (Figure 5) and the dilepton paper (Figure 5) show 2-sigma excesses for a possible new object of mass 1,750 GeV or so. They're much weaker signals than what is expected from the typical theories for the objects searched in these papers but the overlap of these excesses is still a bit interesting.
After a break, I answered a package of questions at the Physics Stack Exchange. It's sometimes fun and some of the questions are even interesting but there are also some omnipresent sources of frustration.

Let me mention some of them.

Pretty much every day, there is a question or two that tries to announce the discovery that quantum mechanics has been overthrown and/or may be employed to send faster-than-light signals, and so on. See e.g. user1247 yesterday. Or another question by the same user that tries to overthrow the postulates of quantum mechanics within quantum field theory "only". Or joshphysics who is convinced that observables can't be observable. And so on.




This is a topic that will probably never go away. A physics discussion forum that is open to laymen must almost inevitably become a place for street rallies where protesters protest against what they hate about modern physics – and its very foundations, the principles of quantum mechanics, are of course among the most hated aspects of modern physics.

Only very rarely does one feel that these people are starting to get the basic points, which are really simple if you're impartial. All physically meaningful questions are questions about observables. All observables are represented by linear operators on the Hilbert space. The a priori allowed values of their measurements are given by the eigenvalues. Only the probabilities of each possible outcome may be predicted, through a simple and universal formula, the Born rule – in one form or another. This is the framework for modern physics.
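
Stated in formulae, for readers who prefer them (this is just the standard textbook formulation of the principles above, nothing new): an observable \(L\) is a Hermitian linear operator with eigenvalues \(\lambda_i\), and

\[ L\,|\lambda_i\rangle = \lambda_i\,|\lambda_i\rangle, \qquad P(\lambda_i) = |\langle \lambda_i|\psi\rangle|^2 = \langle\psi|\Pi_i|\psi\rangle = {\rm Tr}\,(\rho\,\Pi_i), \]

where \(\Pi_i\) projects on the eigenspace of \(\lambda_i\) and the last form covers mixed states \(\rho\) as well. Everything that quantum mechanics predicts is some instance of this formula.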

If a question can't be formulated in this way, as a question about probabilities of observables (linear operators), it doesn't belong to science. If an answer to a question uses different rules than the quantum-evolution-based and Born-rule-backed calculations of probabilities of propositions about observables (or some legitimately derived approximate theories that ultimately boil down to quantum mechanics), it's wrong. These claims have no exceptions or loopholes, and they don't allow any excuses. Everything else is "technical details" – what the observables are, how they commute or fail to commute with each other, how they act on various states and how they evolve – and one needs lots of experience there. But the conceptual foundations may be summarized in the few principles fully captured by the few sentences above.




The foundations of quantum mechanics are almost never taught correctly, especially outside top universities. The teacher usually doesn't understand modern physics himself or herself, so even if the material is presented in some way, it's sold along with a completely wrong focus and wrong "comments surrounding the material". It leads the students to reinforce their misconceptions. It leads them to believe that the foundations of quantum mechanics are a controversial optional luxury that may be replaced by something else at any point. It leads them to believe that \(|\psi(x)|^2\) doesn't have to be interpreted as the probability density.

Perhaps it's "ugly" and it may also be the number of sex partners that a woman has had, a voltage in a vibrator, or anything else. They think that everything goes and anything goes. They don't care that the probabilistic interpretation of the objects is a directly observable experimental fact and any modification of it would instantly invalidate the whole theory. Virtually every question about quantum mechanics starts with some deeply emotional words proving the writer's discomfort with some very basic principles of quantum mechanics.

Sometimes I tell myself not to answer because it's completely unproductive – it's just beyond the people's abilities; the emotions sometimes look so deep that I decide that the question was asked by a psychiatrically ill person who had better not be told the truth because it could lead to dangerous situations. Sometimes I react differently and tell myself: It's just not possible that the folks are this breathtakingly stupid and can't understand these simple points. In the latter case, I am usually destined for a journey towards pedagogical failure. The people simply are breathtakingly stupid, prejudiced without any limitations, closed-minded, thoroughly incompatible with modern physics.

But that doesn't mean that everything is pleasing about the questions by those who shut up, who kind of correctly use the formalism of quantum mechanics, and who ask different questions. As the title of this blog entry indicates, this blog entry is about practicing: exercises, particular examples, applications of abstract physical knowledge. Needless to say, this is ultimately what physics and science are good for. It's the bulk through which physics manifests itself in human society.

Physicists have the working knowledge of all the things in the Universe – OK, I mean all the important things in the Universe – but they only have it if they can actually think: if they know not only the laws and principles of physics but also what those laws and principles operationally mean, and if they have mastered a sufficient chunk of maths needed to connect statements about physical phenomena with each other and with the mathematical expressions, structures, and propositions.

At the same moment, I would agree that too much exercising becomes useless, too. Physics – and research-level maths – is ultimately not about following some procedures and protocols infinitely many times. We want to find new ones. When we have mastered some old classes of problems, we understandably lose our interest in them. This is how physics differs from chess or recreational mathematics. We just don't want to play chess millions of times because it's ultimately almost the same thing. We don't want to solve billions of Sudokus. A person who has the curiosity of a physicist simply wants to learn new things that are qualitatively different from the things he has already learned.

So I want to say that it is indeed natural if a physicist doesn't want to spend too much time practicing the same thing. Engineers or athletes spend much more time doing the same things all the time – which may ultimately be a good idea for practical or financial reasons. A physicist wants to get as far as he can in his mastery of the Universe or its chosen part. On the other hand, and this is what the title is about, a certain amount of practicing is simply necessary even for the most exercise-hating physicists because it's needed to guarantee that the knowledge is genuine and usable.

When we study a course or a book, it doesn't mean that we must understand every letter of it. An analogous claim holds for research, too. We shouldn't get completely stuck and terminate all learning or further thinking when we encounter the first equation or question that seems confusing to us. After some attempts to lift the fog, we must live with it, remember that the confusion was there, and try to continue, probably using the assumption that the answer obtained differently (or the claim by the author of a textbook or the instructor in a course) is right.

There are also gaps that may seem technically demanding and we may sometimes uncritically accept claims by the instructor, textbook, or fellow researchers that "if you calculate this and that, you obtain this". Because of assorted sociological criteria, such a claim may look plausible and we sometimes need to save time. I would say that a good scientist should ultimately rely on almost no examples of "blind faith" of this kind. He or she should verify and/or "rediscover" everything that he or she wants to use as a starting point for further research or calculations.

However, in some cases, e.g. when you're learning a body of knowledge someone has packaged for you and you were not necessarily interested in every product in the package, it's natural to skip certain aspects, especially if the other material doesn't seem to depend on them (much). But the gaps in our knowledge that are created in this way should stay "under control". You must "almost know" that if you ever need to learn some method or verify the proof of a claim or something of the sort, you will return "here or there", spend roughly XY hours, and be done.



I would argue that these gaps must be "mostly exceptions", perhaps like the holes in a Swiss cheese that still holds together. I don't say that the critical percentage is 50% but there is some rough percentage and if the number of holes is just too large, the Swiss cheese of your knowledge decays into hundreds of Swiss cheese balls that don't hold together.

When the amount of ignorance and the number of holes grow too large, you not only fail to know many particular things but you also lose the idea about how many things you're actually ignorant about and how to ask questions that would give you a chance to fill the holes, and so on. Your physics knowledge becomes unusable.

I had this feeling when user6818 asked four questions about Polchinski's textbook on string theory. How does one derive the Green's functions on a sphere, a disk, and the projective sphere, why is this canceled, and so on? The questions are based on Chapter 6 of Volume I.

Superficially, they're legitimate, even high-brow questions. What's problematic about them is that every person – even a person who knows no physics, not to mention string theory – could ask such questions. You pick an equation in a book and ask "Why is it true?" But in physics, statements such as equations for Green's functions don't have simple, generally understandable (by everyone), self-explanatory one-sentence answers. The answers are derivations that may be compressed or need to be inflated depending on one's knowledge or experience or the lack of it, derivations that always depend on lots of other background.

When one asks why the [relatively straightforward] eqn. (6.2.17) is right – without specifying any details about his or her confusion etc. – it suggests that he or she has made no attempt to derive the equation. And it seems that he or she probably can't solve or prove even similar, simpler equations. And although I don't have any rigorous proof, I would bet that these worries are justified by the facts – in this case and many others.

To meaningfully answer such questions and to have an idea how many details the answer should discuss, one needs to know how much the person who asks something actually understands. Does he understand why the logarithm solves the Poisson equation in two dimensions? Can he use substitutions while solving differential equations similar to the known ones? Does he know how to calculate Gaussian integrals? Does he understand why the holomorphic functions have a vanishing Laplacian? Does he know that the sphere is conformally equivalent to the plane, that the disk is conformally equivalent to the half-plane, that the projective sphere may be obtained by identifying opposite points on the sphere as well? Does he know what the boundary conditions at infinity must be for the plane to conformally represent the sphere? Does he really want to see the blind mechanical calculation proving some independence of the conformal factor, or the conceptual reasons behind it? Does he understand the general concept of "solving differential equations", especially the fact that there exists no mechanical procedure that would lead to the solution of an arbitrary equation?
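
Just to give a flavor of the prerequisites hiding behind those questions (these are standard facts, written here only as samples, not as a substitute for the background): in two dimensions,

\[ \nabla^2\left(\frac{1}{2\pi}\ln|z-z'|\right) = \delta^2(z-z'), \]

which is why logarithms keep appearing in the Green's functions, and the round sphere is conformally equivalent to the plane via the stereographic projection,

\[ ds^2 = d\theta^2 + \sin^2\theta\,d\phi^2 = \frac{4\,dz\,d\bar z}{(1+|z|^2)^2}, \qquad z = e^{i\phi}\cot\frac{\theta}{2}, \]

so a Weyl rescaling maps one problem to the other. A student who can rederive such statements may meaningfully ask about eqn. (6.2.17); a student who cannot is really asking for a different book.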

There are tons of questions – the list above isn't complete. If the answer to some of the question(s) above is "No", the thing may be given a meaningful answer. But if someone just asks why (6.2.17) and three more formulae are right, it looks like the answers to all the questions above are "No". What I mean is that the person must be taught complex numbers, calculus, integrals, conformal transformations, symmetries in physics, two-dimensional Riemann manifolds and their relationships, and many other things pretty much from scratch, and in a more detailed way than Polchinski's book. (Of course, the equation (6.2.17) isn't the first equation where the ignorance about any of these topics should show up – but that is just another detail that makes the question about a seemingly random equation in the middle of the book slightly more surprising.) That's a big task, however, because Joe Polchinski spent a decade writing his perfectionist textbook on string theory, so you may need 50+ years to write the more detailed one.

But would it make sense? I don't think so. If someone really needed all the things above to be explained – and I don't claim it's the case of user6818 – he or she simply shouldn't study Chapter 6 of Polchinski's book because he doesn't have the background for it. It's useless to learn some material if you can't use it. And if you can't understand how the material is derived – and the derivation is just an application of "simpler" material you should have known before – then it shows that you probably can't apply this material (because you can't apply even the simpler one), so it's useless to learn it.

The number of questions on Physics Stack Exchange where the answer could very well be 100 times longer than the question itself is significant. (A particular user named Anirbit has asked 60+ questions that mostly fall into this category: "Explain everything in a paper to me".) I am afraid that it is a waste of time to be answering these questions – questions of the type "explain every line of a paper or chapter to me" – even though the paper or book has already been written with the assumption that it speaks for itself. A meaningful communication and explanation only occurs when the two sides are at least a little bit on the same frequency and/or if the "teacher" has some idea about the things that the "student" knows. Without such a context, it is impossible to teach physics. And if you teach physics to someone who will ask you "Why is it so?" after almost every claim he sees (and sometimes again – many times – after you have already answered it), it's like training armless boys to become construction workers. A teacher may build a house out of bricks but it won't make any impact on the skills of the student simply because he can't do it himself. So if the "pedagogical house" is useless, you had better avoid this futile pedagogical exercise.

In this respect, physics differs from literature or many other subjects – which often include natural sciences – where the structure of the background isn't hierarchical or is much less hierarchical than in physics. In other words, you don't need much and you may immediately learn some isolated insight from advanced research. You may have never read a book but you may memorize two sentences from a play by Shakespeare and some naive people will instantly think that you're a cultural human being. And so on.

But in physics, this doesn't work. String theory is arguably the tip of a pyramid of knowledge that has almost as many floors as the Empire State Building, if I count it in a fine-grained way. Memorization of an isolated insight or rule is almost worthless in physics because the meaning and power only emerge when many prerequisites are understood.

Needless to say, this is why some people hate physics – and maths – at school. If I don't count gyms and similar things, almost all other subjects at school are about memorization, a universal method that requires lots of RAM (or hard disk space) and almost no CPU or GPU, if I compare the students to computers. At most, some subjects require that the students learn how to follow a relatively mechanical procedure or two to "derive something".

Paradoxically enough, it's the same people who don't like maths and physics for their "requirements of creativity and practicing" who most frequently complain that maths and physics are mechanical, dull, narrow-minded, isolated from practice, and that they reduce people to mindless mechanical machines. When you look rationally at the situation, you notice that the truth is exactly the opposite. These critics of maths and physics are the mindless unthinking machines that only do mechanical things and they hate maths and physics exactly because they can't be mastered in this way! ;-)

But I was thinking about folks who would never open Chapter 6 of Polchinski's textbook, of course. ;-)

Let me return and say that to learn a subject such as quantum field theory or string theory, one needs to practice, rediscover, verify one's own predictions (against well-known insights if not experiments), and think about the implications a lot. Ignorance about particular things is permissible if not inevitable in science. But even ignorance has to be tamed and brought "under some control". We must have an idea how many things we probably misunderstand, how many things about them may be known to other people, how or where to find the answers if necessary, how much time it may take to find the answers or verify some results, and – perhaps most importantly – we must roughly know to what extent the things we're ignorant about may affect the things we think we know (where the boundaries of our ignorance are).

The holes in the Swiss cheese should never break our knowledge into a large collection of disconnected marbles. If that happens, physics ceases to be physics. It ceases to be the lively mechanism to find and incorporate all important insights about everything in the Universe.

And that's the memo.

Monday 25 February 2013

Mauritia: microcontinents must have been around for eons

And pairs of opposing large continents may have been more typical than a single unified one

Tons of articles including one at NPR have been written about a new finding.

Lava sands with zircon xenocrysts found on the beaches of Mauritius (an island east of Madagascar) support the idea of a microcontinent dubbed Mauritia that existed between the continents (...) of Madagascar and India for tens of millions of years sometime around 70 million years ago. I don't want to be excessively accurate because I don't think that their reconstructed layout may be trusted this accurately.



Mauritia is supposed to be just a tiny sliver inserted between India and Madagascar in the upper left corner of the reconstruction. At that time, the liberals in San Francisco belonged to the African union with Congo and were Hispanics. The Baltic states hadn't been occupied by Stalin yet and they had the Amazon forest in their yard. ;-)

The papers by Torsvik et al. and Niocaill et al. have appeared in Nature Geoscience; they're linked to in this Nature review. Although the precise arguments leading to the dating and other claims aren't comprehensible to me, it still seems like too much hype relative to the importance of the finding.




When we first hear about the continental drift, we are usually told about the supercontinent named Pangaea that was around between 300 million and 200 million BC. That universal continent was surrounded by Panthalassa, the universal global sea, the story says.

But was it this simple?




The first thing to notice is that 200-300 million years is just 4-6 percent of the age of the Earth. So the era of Pangaea is a relatively recent phenomenon. Why would all the continents have been unified into one this recently even though they're split again today? A key part of the answer is that Pangaea wasn't the first supercontinent; on the contrary, it was the most recent one (and the first one to be historically debated). The plates have been rearranging and splitting and merging for a long time, and as early as billions of years ago, other supercontinents existed.

They perhaps included Vaalbara, Ur, Kenorland, Rodinia (Russian for Fatherland, roughly speaking). These supercontinents incorporated most of the landmass when they were around. But there were also supercontinents that contained about 1/2 of the landmass when they were around – these were the Cold War periods of the supercontinent cycle. ;-) Gondwana and Laurasia – 510-180 million years BC or so – are the most famous example. Gondwana was composed of components including South America and Africa that fit together so neatly. Some other hypothetical continents in the past have been submerged. And there could have been ordinary-size continents whose names coincide with countries and other regions – Kazakhstania, North China, Siberia, India, and others.

Many of the details are unknown and many of the known details may be wrong. That's equally true about the future projections of supercontinents. Will there be Amasia (China will merge with the U.S., not only when it comes to their currency union haha), Novopangaea, or Pangaea Ultima? These are highly inequivalent scenarios of how the contemporary continents may merge in the future (hundreds of millions of years in the future).

The idea of supercontinents that covered "almost everything" is very tempting. In some sense, it is almost as tempting as the unification of forces (and other concepts) in physics. But when we return to relatively recent eras, it may be misleading and I would guess that it probably is. There's really no reason why 95% of the landmass should have been connected at any moment less than 500 million years ago – when the Earth was already at 90 percent of its current age.

These days, we have continents such as Australia and large islands such as Greenland. And the rest is divided into several mostly disconnected parts, too. I guess that at most moments in the past, the decomposition was comparable, so there always existed separate continents and islands that occupied a similar percentage of the global landmass.

Let me mention that the difference between continents and islands goes beyond the continents' being larger. There should also be an intrinsic geological difference and indeed, there are at least two major ones. First, continents should be made of low-density rocks so that they "float", while islands are just extensions of the ocean floor that happen to reach above the sea level in some cases and should be composed of heavier rocks. Needless to say, I think it's preposterous to imagine that there is always a clear separation between the two concepts. Second, landmasses that sit on their own tectonic plate (e.g. Australia) had better be called continents.

So I find it "more likely than not" that even during the most unified moments of Rodinia or Pangaea, there used to be disconnected continents that were larger than Greenland, for example, and perhaps many smaller continents existed at various moments, too.

The earliest eras of the Earth could tell a different story, however. Yes, I do think that the mountains used to be taller and steeper; the Earth is getting rounder after every earthquake as some potential energy of the rocks is converted to heat. And yes, I think that there was a chance that the continents and islands were less fragmented than they are today. However, for some reason, I find it a bit more likely that there were two major landmasses – nearly opposing each other – when the Earth was very, very young.

What's the reason behind this idiosyncratic claim of mine? When we approximate the early solid Earth by a randomly perturbed ellipsoid, its longest semiaxis is likely to point to the positions of the two major supercontinents – on the opposite sides of the globe. Because rocks are generically heavier than water, I find the idea of water on one side and rocks on the other side to be highly imbalanced. Much as with tides, it seems more sensible to assume that the rocks stick out of the water level – an equipotential surface – at two opposing places at the same time.

To a large extent, I would make the same guess about most "highly unified" moments in the geological history, both in the past and in the future. It really does seem to me that the people who try to reconstruct the details of the continental drift don't fully incorporate the change of the equipotential surfaces – gravity – caused by the accumulation of the landmasses. If they did so, they would arguably realize that the sea level near excessively large continents inevitably goes up (so the mountains on too large continents get lowered relative to the surrounding sea level) while the sea level goes down on the opposite side of the globe, which makes it more likely that some continent or island will emerge on the opposite side of the globe.

Do you agree with that? The main loophole I may imagine is that the original solid Earth was highly non-uniform so the actual center of mass could have been shifted away from the "apparent geometric center" by a significant distance, making it likely that the continent would only appear on one side. I haven't tried to quantify how strong this effect could have been, relatively speaking.

These days, the Earth is kind of balanced. The Eastern Hemisphere contains the bulk of Eurasia and Africa but the Western Hemisphere boasts the Americas. The Northern Hemisphere contains significantly more landmass than the Southern one but even this difference isn't "overwhelming". Moreover, the average thickness of ice in Antarctica is about 2 km, which helps to add at least some balance.

A pro-firewall paper

The black hole firewall saga has continued with several new papers. Since the last blog entry about the topic, three Japanese authors proposed something that they call a self-consistent model of the black hole evaporation, probably without any firewalls. Because its starting point is semiclassical gravity, the paper can't be self-consistent, however.



Rodolfo Gambini and Jorge Pullin propose that loop quantum gravity "solves" the firewall problem by producing some new degrees of freedom. They extend the LQG algebra to a Lie algebra. I guess other LQG proponents won't like such a heretical modification but one must realize that in LQG, one may add, modify, or erase any degrees of freedom and any terms in the constraints and equations of motion because they're completely ill-defined and arbitrary, and such changes don't affect the quality of the theory, because of the GIGO principle (garbage in, garbage out).

These adjectives must be considered on top of the fact that regardless of the choices, LQG is inconsistent and amazingly dumb, too. At any rate, an attempt to find new degrees of freedom in a theory is very modern – and I would say stringy. Kudos to the authors for that.

Today, there's a new pro-firewall paper.




Steven G. Avery and Borun D. Chowdhury explicitly try to disprove some recent anti-firewall papers, especially the excellent Papadodimas-Raju paper (on firewall considerations and doubled operators in the context of AdS/CFT) as well as the Harlow-Hayden claim that speed limitations on quantum computers hold and are essential to save us from firewall paradoxes.
Firewalls in AdS/CFT
I was trying to read the paper but it just doesn't click. They're trying to offer lots of ambitious claims but I can't see any evidence backing these claims so far. Can you help me?




The basic claim of these two authors is that AMPS is right and firewalls have to exist because the AdS/CFT dual of a thermal state in the CFT is a firewall. That's great and understandable. However, I was trying to find out why they think so and I just can't see anything rational in that paper that would explain their opinion.

These authors must be thanked for noticing that a basic criticism of the firewall meme is that \({\mathcal A}={\mathcal C}\), approximately: the degrees of freedom in the radiation and in the black hole interior aren't "two wives" that would violate the monogamy rule for quantum entanglement because they're – partly or entirely – the same woman! However, Avery and Chowdhury don't like this observation – which has been at the heart of the black hole complementarity from the beginning.
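
For readers who want the "monogamy rule" as a formula rather than a metaphor (quoting the qubit version due to Coffman, Kundu, and Wootters; the black hole discussion of course involves vastly larger Hilbert spaces, but the moral is the same):

\[ C^2_{AB} + C^2_{AC} \le C^2_{A(BC)}, \]

where \(C\) is the concurrence, a measure of pairwise entanglement. A system that is maximally entangled with \(B\) therefore cannot be entangled with \(C\) at all – unless, as complementarity insists, \(B\) and \(C\) are not really two independent systems to begin with.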

Nevertheless, they seem to misunderstand or overlook all the other important insights that have been unmasked in the context of the AMPS research and, equally importantly, all the arguments meant to support their conclusion that there has to be a firewall at the horizon of the bulk dual of a thermal state seem vague, dull, and emotional to me. I just don't see any real arguments. Instead, what I see are numerous repetitions of the word "bizarre". So they argue against the resolutions in the following way:
Taking the next logical step, we let an arbitrary system play the role of the early radiation. In other words, we imagine coupling a source/sink to the CFT, allowing them to equilibrate and thereby become entangled, and then decouple them. Next, we couple a source to the CFT to create an infalling observer. The \({\mathcal A}={\mathcal C}\) argument in this case would mean that the degrees of freedom of the source/sink that purifies the CFT are available to the infalling observer, allowing her free infall. Since the systems are decoupled, this seems to be a bizarre state of affairs given that we are talking about arbitrary (decoupled) systems giving universal free infall. We thus conclude that the dual to a thermal state in the CFT is a firewall!
Well, the individual parts of the black hole spacetime never quite decouple and never become quite independent, so the toy model that they study is self-evidently not equivalent to the case of a real black hole – one that is formed, that absorbs infalling observers, and that later evaporates. Whatever they derive about this system can't be trusted to be a valid conclusion about a full-fledged, genuine black hole.

To assure you that the word "bizarre" appears thrice in the paper, here are the remaining two occurrences:
We simply do not have the other CFT's degrees of freedom that are necessary for free infall. The equivalent of Papadodimas-Raju and Harlow-Hayden argument would be that the degrees of freedom of \({\mathcal H}_S\) (which is the equivalent of \({\mathcal H}_A\) for the evaporating branes) nevertheless come into play. However, given that we have decoupled the source/sink from the CFT this seems rather bizarre. Furthermore, the CFT could have been thermalized by an arbitrary system which may not be described by a CFT at all. It seems rather bizarre that an observer falling into the CFT system would still be able to access the degrees of freedom of HS universally, irrespective of the properties of the latter system. [Said differently...]
And so on. One question is whether firewalls are forced upon us by a valid argument. I think that the answer is No because the black hole interior may always be viewed as a "dead end extrapolation" of the degrees of freedom outside the black hole and whatever the observers measure inside a black hole will never get out so these measurements simply can't lead to any contradictions, whatever their results are.

But even if I imagined myself to be agnostic about the existence of firewalls, I think that I would still find the logic of the text above incomprehensible – a polite word for "fallacious". They study a toy model in which they manually replace the CFT by something entirely different and then they complain that the resulting setup looks bizarre. But everything they call "bizarre" was created by themselves, so why do they complain about it?

In the case of AdS/CFT black holes, all the dynamics anywhere in the spacetime is encoded in a CFT. If someone claims that string theory i.e. quantum gravity in an AdS space reconciles all the requirements nicely without any firewalls, he is only making a statement about the way in which the degrees of freedom in a CFT may be picked, evolved, and interpreted. Such a defender of the consistency of quantum gravity without firewalls clearly makes no statement about non-quantum-gravity theories that have something else instead of a CFT. In fact, he will probably agree that the consistency of quantum gravity is such a delicate and sensitive feature that if you modify almost anything about it, the whole structure will become inconsistent. So if you find a bizarre feature of such "mutated" theories, it's your personal problem, surely not a problem of quantum gravity or defenders of its consistency (without firewalls)!

Also, these authors try to demonize nonlocality of any kind. It seems obvious – and I think that all sensible experts as of 2013 agree – that some nonlocality is present when black holes evaporate. This is needed to avoid Hawking's semiclassical conclusion that the information simply can't get out so the evolution of the initial star to the final Hawking radiation has to be non-unitary.

In reality, the nonlocality encountered in these situations is tiny and exponentially tiny nonlocal effects are enough to reconcile all the principles. However, these folks – and I think it's not just Avery-Chowdhury but also the authors of AMPS and probably others – seem to work in some "yes/no" dogmatic way. When some nonlocality is needed, they say "everything has gone awry" and they immediately make huge claims.

What they forget is, among other things, that the existence of a firewall requires a huge violation of locality, too. In fact, the violation of locality and causality needed to produce a firewall at the place of the event horizon is much larger than the nonlocality envisioned by Raju-Papadodimas, by me, and by many others. The claim that the nonlocality imposed upon us is tiny is the very main point of the Raju-Papadodimas paper and many others! The firewall proponents seem to use perfect locality of some sort as their main motivation or argument (that's also why Avery and Chowdhury think that the "completely decoupled heat bath" is a valid model of a black hole; in the real world, the interior's non-decoupling from the exterior and from the radiation is a key principle and the essence of the black hole complementarity expressed in different words). But then they derive, using their assumptions, that the locality is actually violated brutally (by the existence of the firewalls), and they don't seem to care that their conclusions are inconsistent with their assumptions – which means that their "whole theoretical framework" is inconsistent gibberish.

Again, the firewall defenders seem to (correctly) conclude that the black holes can't preserve all the quantum-information principles with a perfect locality – so they immediately make the jump and conclude that the nonlocality must be so huge that it doesn't allow you to enter the black hole interior at all (at least not if you want to stay alive). But they actually never prove anything of the sort. In fact, they never really define what a "firewall" is and they never quantify how strong its effects are actually supposed to be. They're entering some bizarre "fundamentalist", Yes/No discourse. I am just not getting it. When it comes to firewalls, they don't seem to be thinking as physicists at all.

Let me copy the rest of a paragraph I have already posted:
[...of the latter system.] Said differently the evolution of a perturbation created with support on the CFT beyond the horizon depends not only on how the CFT is entangled with some other system but also on the Hamiltonian of the combined system. We thus conclude that for generic System 2 the infalling observer hits a firewall. This is shown in Figure 6.
We learn that "We thus conclude [something]". It's nice that they "conclude" something except that "something" doesn't follow from the previous sentences and it isn't even well-defined. In principle, the evolution of any degrees of freedom in a black hole spacetime may depend on any other degrees of freedom – there is room for some nonlocality – except that we must always ask how strong the nonlocality is, how it can be parameterized, whether it may be operationally measured, what variables may parametrically suppress it, and so on. Apologies for this analogy but their approach to the "firewalls are real" claim is analogous to the climate alarmists' approach to the "global warming is real" meme. The content of the sentence isn't well-defined. It apparently doesn't even have to be well-defined and whatever its meaning is, the claim doesn't have to be rationally justified, anyway. It's just not science I can understand – in my understanding, science is composed of propositions about things that are in principle observable by a well-defined protocol and links between such propositions justified by flawless logic.

Their convergence towards the sentence "We conclude that there are firewalls" looks pretty much isomorphic to this well-known cartoon:



"Step one. Step two: here a miracle happens. Now, we conclude that there are firewalls." Very nice but could you please be more explicit in Step two? ;-)

I have my doubts about certain aspects of the Papadodimas-Raju claims and constructions as well but I feel that it's due to some localized misunderstandings of mine – and perhaps less likely, localized mistakes by them – and these problems could be "perturbatively fixed" (which may or may not change the major conclusions). However, when I read the paper by Avery-Chowdhury, I just don't recognize it as rational thinking of the type I know. There's no place for me to start. I understand what they want to conclude but I can't find any calculations or analyses of anything that could possibly be related to these things.

Saturday 23 February 2013

Mauna Loa carbon dioxide: a fit

I wanted to find a nice function that gives a satisfactory description of the carbon dioxide concentration as a function of time.



Mauna Loa, one of the five volcanoes underlying Hawaii, a mid-Pacific state of the U.S., is the most standardized place where the concentration is measured. See this NOAA page on Mauna Loa weekly data and the raw data I have used.




If you open the page with the raw data, you will get over 2,000 weekly concentrations of carbon dioxide in ppmv (parts per million of volume – or equivalently, parts per million of the number of molecules) between May 1974 and February 2013. The page usefully offers you the decimal year. For example, February 10th, 2013 is written as \[

2013.1110 = 2013+\frac{40.5}{365}

because February 10th is the 41st day of a year with 365 days and its offset is the average of 40 (the number of days before it) and 41 (its ordinal number). Think about it, it's natural: the extra one-half labels the noon, the middle of the day. The next step is to ignore the historical data and the entries –999.99 for the concentrations.
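For concreteness, here is a minimal Python sketch of this convention (NOAA's own handling of leap years etc. may differ; this just reproduces the arithmetic above):

```python
from datetime import date

def decimal_year(d, days_in_year=365):
    """Noon-centered decimal year as described above (leap years ignored)."""
    ordinal = (d - date(d.year, 1, 1)).days + 1   # February 10th -> 41st day of the year
    offset = (ordinal - 1 + ordinal) / 2.0        # average of 40 and 41 -> 40.5
    return d.year + offset / days_in_year

print(round(decimal_year(date(2013, 2, 10)), 4))  # 2013.111
```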




I did various things but the most important result I want to show you is a nonlinear fit for the concentration as a function of \(y\), the year that includes the fractional part remembering the date. Note that in the past, I offered you my "exponentially increasing" formula for the carbon dioxide concentration as a function of the year, ignoring the date – the seasonal variation:\[

c = 280 + 22.3 \exp\left(\frac{{\rm year}-1920}{57}\right)

\] However, we want to include the seasonally oscillating part of the graph. The following nonlinear fit is self-explanatory; all the "complicated" real numbers in it (280 isn't one of them) were optimized by the NonlinearModelFit command in Mathematica.\[

\eq{
{\rm conc}(x) &= 280 + \exp(4.359 + 0.02088\,x)\\
&- 0.8263 \cos[2\pi x] + 0.6033 \cos[4\pi x] + 0.0255 \cos[6\pi x]\\
&- 0.0208 \cos[8\pi x] + 0.0166 \cos[10\pi x] - 0.0135 \cos[12\pi x]\\
&+ 2.8359 \sin[2\pi x] - 0.5413 \sin[4\pi x] - 0.0957 \sin[6\pi x]\\
&+ 0.0430 \sin[8\pi x] + 0.0218 \sin[10\pi x] - 0.0002 \sin[12\pi x],\\
\\
x &= y - 1994.
}

\] Sorry for the strange brackets and other imperfections of the \(\rm \LaTeX\). You see that the base concentration was chosen to be 280 ppm and the increase was prescribed to be exponential. In the exponent, the slope is\[

0.0208774 = 1/47.899

\] so the \(e\)-folding time seems to be shorter than those 57 years that are optimal for the interpolation from the beginning of the Industrial Revolution. You could say that according to the 1974-2013 data, it seems that the time in which the CO2 excess above 280 ppm gets multiplied by \(e=2.718\dots\) is shorter than 50 years. Roughly speaking, the same thing holds for the annual emissions. Here is the 1974-2014 graph.
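For convenience, here is the same fit translated into plain Python (the actual fit was done with Mathematica's NonlinearModelFit; the coefficients below are simply copied from the displayed formula):

```python
import math

# The fitted Mauna Loa formula above, with x = y - 1994 and y the decimal year.
COS = [-0.8263, 0.6033, 0.0255, -0.0208, 0.0166, -0.0135]   # 1st..6th harmonic cosines
SIN = [ 2.8359, -0.5413, -0.0957, 0.0430, 0.0218, -0.0002]  # 1st..6th harmonic sines

def conc(y):
    """CO2 concentration in ppmv as a function of the decimal year y."""
    x = y - 1994.0
    c = 280.0 + math.exp(4.359 + 0.02088 * x)        # exponential growth above 280 ppm
    for n in range(1, 7):                            # seasonal Fourier series
        c += COS[n - 1] * math.cos(2.0 * math.pi * n * x)
        c += SIN[n - 1] * math.sin(2.0 * math.pi * n * x)
    return c

print(round(conc(2013.1110), 1))   # roughly 397 ppm, within half a ppm of the measured 396.74
```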



But what I find interesting is the dependence on the date or seasons, i.e. the fractional part of the year \(y\). My nonlinear fit was a general Fourier expansion up to the 6th harmonic – 12 coefficients in total (six for cosines, six for sines). For your convenience, here are two cycles of the cosine/sine part of the fit:



Apologies that the last, negative half-wave isn't filled black. I don't want to spend hours on similar things. The shape of the curve didn't substantially change when I added the 6th harmonic cosines and sines. But you see that it dramatically deviates from a simple shifted sine. It's a rather complicated curve. Incidentally, I have checked that I was visually unable to see any difference between the seasonal curve (fit) extracted from the first 20 years of the Mauna Loa dataset and from the last 20 years.

A naive person could think that the Northern and Southern Hemispheres should cancel and there shouldn't be a difference between springs and autumns. However, that's wrong because what's primarily responsible for the seasonal variations are life processes on land and the land masses are mostly on the Northern Hemisphere (Eurasia and North America are large). So the graph pretty much emulates what the Northern Hemisphere wants to do to the CO2 while our friends in Australia and Antarctica are just negligible parrots, penguins, and kangaroos who are emulating what we're doing half a year later. ;-)

During winter, the plants – the players that are capable of absorbing CO2 – are losing their vitality and activity so the CO2 increases up to the maximum near 3.12 ppm (plus the long-term running average) for \(x\in 0.36+\ZZ\) – something like May 10th. That's where the trend is reversed: Northern Hemisphere plants start to blossom and absorb more CO2 than the annual average. That's why the CO2 drops down to the minimum around –3.58 ppm (plus the long-term running average) for \(x\in 0.74+\ZZ\), something like September 27th. Then the seasonal CO2 anomaly starts to go up again, and so on.

What I also find interesting is the slight "hole" near \(x\in 0.15+\ZZ\). The graph really refuses to be smooth at that point. Also, you may notice that \(0.74-0.36=0.38\) is substantially less than \(0.5\) which means that the decrease of CO2 during the Northern summers is faster than the increase of CO2 in the rest of the year. In other words, when plants start to boom and consume CO2, they may do so more quickly and efficiently than when they're failing to boom. ;-)

The latest reading is 396.74 ppm from February 10th, 2013. In 2013, this is bound to increase by 3.6 ppm or so by May 10th or so. In other words, the maximum of Mauna Loa CO2 for 2013 will slightly exceed 400 ppm for the first time around mid May – about 400.3 ppm will be the annual maximum. I guess that we will hear about it again. If the alarmists haven't been preparing this story for May 2013 yet, one of them who happens to be a TRF reader will surely steal the idea from your humble correspondent!

The sine-and-cosine part of the fit (see the last graph with two cycles) vanishes around January 8th and July 20th, which are the dates for which the actual measured concentrations may be interpreted as the "long-term running means" with the seasonal variations removed. On July 22nd, 2012, the reading was 393.98 ppm. On January 6th, 2013, it was 395.53 ppm. The latter is close to what we have "now" when the seasonal variations are suppressed.

Let me mention that between May and September, the seasonal variations contribute a drop of \(3.12+3.58=6.70\) ppm of CO2. If you could make plants thrive in the winter as well, you could easily subtract something like 13 ppm of CO2 from the atmosphere every year, well above the 2 ppm by which we are increasing the concentration every year (it's 1/2 of the 4 ppm we are adding; the other half is already being absorbed by the enhanced consumption of CO2 due to the elevated concentrations).



Did you know that as recently as around 6500 BC, Britain was connected to Europe in this way? The British Euroskeptics were weaker than today. Hunters were working hard in the land just in between Britain and Denmark. On Wednesday night, I didn't want to believe those claims – I was imagining that the author of the story was envisioning some huge continental drift in just thousands of years ;-) – but when you think about it, it's completely natural that there had to be a land, Doggerland, because big chunks of the North Sea are extremely shallow, around 30 meters, and the sea level jumped by more than 100 meters in the last 20,000 years as the continental ice sheets melted.

During the next ice age, sometime around 60,000 AD, Doggerland could become land again. It would be fun to know what kind of people (or creatures) will be living there and in what way they will be communicating. Alternatively, we may rebuild Doggerland by landfills and other constructions much earlier, perhaps in 2020 AD.

Wednesday 20 February 2013

Unstable Universes: guest blog

Guest blog by David Berenstein, Assoc. Prof. in Santa Barbara

It’s a fine day for the Universe to die, and to be new again! Well, maybe not, but the Internet is abuzz with a reincarnation of the unstable universe story. (You can also see it here, or here; the whole thing is trending on Google.) In technical terms, this is known as tunneling between vacua. And if you have followed the news about the Landscape of vacua in string theory, this should be old news (that we may live in an unstable Universe – we don’t know). For some reason, this wheel gets reinvented again and again with different names. All you need is one paper, or conference, or talk to make it sound exciting, and then it’s Coming attraction: the end of the Universe… a couple of billion years in the future.




The basic idea is very similar to superheated water and the formation of vapor bubbles in the hot water. What you have to imagine is that you are in a situation where you have a first order phase transition between two phases. Call them phase A and phase B for lack of a better word (superheated water and water vapor), and you have to assume that the energy density in phase A is larger than the energy density in phase B, and that you happened to get a big chunk of material in phase A. This can be done in some microwave ovens and you can have water explosions if you don’t watch out.




Now let us assume that someone happened to nucleate a small (spherical) bubble of phase B inside phase A, and that you want to estimate the energy of the new configuration. You can make the approximation that the wall separating the two phases is thin for simplicity, and that there is an associated wall (surface) tension \(\sigma\) to account for any energy that you need to use to transition between the phases. The energy difference (or difference between free energies) of the configuration with the bubble and the one without the bubble is\[

\Delta E_{tot} = (\rho_B-\rho_A)V + \sigma\Sigma

\] where \(\rho_{A,B}\) are the energy densities of phase A and phase B, \(V\) is the volume of region B, and \(\Sigma\) is the surface area between the two phases.

If \(\Delta E_{tot}\gt 0\), the surface term stores more energy than the volume term releases. In the limit where we shrink the bubble to zero size, we get no energy difference. For big volumes, the volume term wins over the area term, and we get a net lowering of the energy, so the system would not have enough energy in it to restore the region filled with phase B back to phase A. In between there is a Goldilocks bubble that has the exact same energy as the initial configuration.
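To make the competition between the two terms explicit (a standard step spelled out here for concreteness, not part of the original guest text): for a spherical bubble of radius \(R\),\[

\Delta E_{tot}(R) = -(\rho_A-\rho_B)\,\frac{4}{3}\pi R^3 + 4\pi\sigma R^2,

\] which is maximized – the top of the energy barrier – at the critical radius \(R_c = 2\sigma/(\rho_A-\rho_B)\), where the barrier height equals \(\frac{16\pi\sigma^3}{3(\rho_A-\rho_B)^2}\), and which vanishes at the slightly larger Goldilocks radius \(R_0 = 3\sigma/(\rho_A-\rho_B)\) mentioned above.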



Microwave-oven superheated water explodes (boils instantly) when impurities such as sugar are added. Let's hope there is no cosmic sugar, otherwise it could speed up the Armageddon (tunneling).

So if we look carefully, there is an energy barrier between the configuration with no bubble and a Goldilocks bubble that is large enough to give no net change in energy. If the bubbles are too small, they tend to shrink, and if the bubbles are big enough they start to grow even bigger.

There are two standard ways to get past such an energy barrier. In the first way, we use thermal fluctuations. In the second one (the more fun one, since it can happen even at zero temperature), we use quantum tunneling to get from no bubble, to bubble. Once we have the bubble it expands.

Now, you might ask, what does this have to do with the Universe dying?

Well, imagine the whole Universe is filled with phase A, but there is a phase B lurking around with less energy density. If a bubble of phase B happens to nucleate, then such a bubble will expand (usually it will accelerate very quickly to reach the maximum speed in the universe: the speed of light) and get bigger as time goes by, eating everything in its way (including us). The Universe filled with phase A gets eaten up by a universe with phase B. We call that the end of Universe A.

You need to add a little bit more information to make this story somewhat consistent with (classical) gravity, but not too much. This was done by Coleman and De Luccia, way back in 1980. You can find some information about this history here. Incidentally, this has been used to describe how inflating universes might be nucleated from nothing, and people who study the Landscape of string vacua have been trying to understand how this tunneling between vacua might seed the Universe we see in some form or another from a process where these tunneling events explore all possibilities.



Guth's "old inflation" involved tunneling.

You can reincarnate that into today’s version of “The end is near, but not too near”. We know the end is not too near because if it were, it would have already happened. I’m going to skip this statistical estimate: all you have to understand is that the expected time that it would take to statistically nucleate that bubble somewhere has to be at least the age of the currently known universe (give or take). I think the only reason this got any traction is that the Higgs potential in just the Standard Model – with no dark matter, with nothing more – in all its possible incarnations is involved in it somehow.

Next week: see baby Universe being born! Isn’t it cute? That’s the last thing you’ll ever see: Now you die!

Fine print: Ab initio calculations of the “vacuum energies” and “tunneling rates” between various phases are not model-independent. It could be that the lifetime of the current Universe is in the trillions or quadrillions of years if a few details are changed. And all of these details depend on the physics at energy scales much higher than those of the Standard Model, about which we don’t know much at all. The main reason these numbers can change so much is that a tunneling rate is calculated by taking the exponential of a large negative number. Order-one changes in the quantity we exponentiate lead to huge changes in the estimates of lifetimes.
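A toy numerical illustration of that last point (the numbers below are made up, not an actual vacuum-decay computation): the rate scales like \(e^{-B}\) for some large exponent \(B\), so an order-one change in \(B\) shifts the implied lifetime by dozens or hundreds of orders of magnitude.

```python
import math

# Exponential sensitivity of tunneling rates: exp(-B) for a few hypothetical exponents B.
for B in (400.0, 600.0, 800.0):
    orders = B / math.log(10.0)          # exp(-B) = 10**(-orders)
    print(f"exp(-{B:.0f}) ~ 10^(-{orders:.0f})")
```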

If you like TRF guest blogs, don't overlook the guest category.

Ludwig Boltzmann: a birthday

Off-topic: Yuri Milner, Facebook's Zuckerberg, and Google's Brin launched a Life Sciences counterpart of the Milner Prize, with the same prize money. Because it's about life sciences, the chairman of the foundation is the chairman of Apple.
Ludwig Boltzmann was born on February 20th, 1844, in Vienna, the capital of the Austrian Empire. He hanged himself 62 years later, on September 5th, 1906, near Trieste, (then) also in the Austrian Empire, where he was on vacation with his wife Henriette von Aigentler and a daughter. They had 3 daughters and 2 sons; Boltzmann probably suffered from undiagnosed bipolar disorder.

I consider Boltzmann to be not only the #1 person behind classical statistical physics but also the latest "forefather" of quantum mechanics. His name appears in something like 100 TRF blog entries.




Fine. His grandfather was a clock manufacturer while his father Ludwig Georg Boltzmann (who died when his son-physicist was 15) was an IRS official. He got some good teachers such as Loschmidt and Stefan (yes, they share the Stefan-Boltzmann law) and was exposed to the contributions of 19th century giants such as Maxwell rather early on. He collaborated with Kirchhoff and Helmholtz, among others. In turn, he was later a teacher and adviser to Lise Meitner, Paul Ehrenfest, and many others.

Already in his early twenties, he wrote a dissertation on the kinetic theory of gases, the main subject he revolutionized. Graz, the second largest city of Austria proper (an important one for Slovenes), was his most successful workplace.




All his major contributions to physics are linked to statistical mechanics – the microscopic "explanation" of the laws of thermodynamics. They include the Boltzmann [transport] equation for a probability distribution on the phase space\[

\frac{\partial f}{\partial t} + \frac{\mathbf{p}}{m}\cdot\nabla f + \mathbf{F}\cdot\frac{\partial f}{\partial \mathbf{p}} = \left(\frac{\partial f}{\partial t} \right)_\mathrm{collisions}

\] but also the Stefan-Boltzmann law for the total radiated energy (scaling as the fourth power of the absolute temperature; \(4\) is indeed generalized to the dimension of the spacetime if you change it)\[

j^{\star} = \sigma T^{4}.

\] and the Boltzmann distribution saying that Nature exponentially suppresses the likelihood that things jump to higher energy levels – the colder the temperature is, the stronger the suppression becomes:\[

{N_i \over N} = {g_i e^{-E_i/(k_BT)} \over Z(T)}.

\] Most famously, his tomb offers the visitors his statistical interpretation of the entropy:



In this equation, \(S=k\cdot \log W\), and in many other equations, we see Boltzmann's constant \[

k=k_B= 1.38\times 10^{-23}\,{\rm J/K}.

\] This is the ultimate constant to convert between kelvins (temperature) and joules (energy per degree of freedom such as an atomic one etc.). In other words, it's the conversion factor between statistical physics and thermodynamics; mature physicists set it equal to one, \(k=1\), much like in the case of \(c=\hbar=1\). The numerical value is small because people had only been familiar with the "thermodynamic limit" in which the number of atoms is very large, effectively infinite. In this \(N\to\infty\) limit, the thermal energy per atom becomes comparable to \(kT\) which is numerically about as small as \(k\) itself. A large number of atoms (or photons) form a "continuum" and the energy is smooth.
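A quick numerical illustration of the distribution and of \(k_B\) as a conversion factor (my own example, with the degeneracies \(g_i\) set to one so that \(Z\) cancels): the population ratio of two levels split by 1 eV at room temperature.

```python
import math

k_B = 1.380649e-23      # J/K
eV  = 1.602176634e-19   # J
T   = 300.0             # kelvins

print(f"k_B*T at {T:.0f} K = {k_B*T/eV:.4f} eV")      # ~0.0259 eV
delta_E = 1.0 * eV      # energy gap between the two levels
ratio = math.exp(-delta_E / (k_B*T))                  # Boltzmann suppression factor
print(f"N_upper/N_lower ~ {ratio:.1e}")               # ~1.6e-17
```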

Let me mention that \(W\) in Boltzmann's tomb trademark equation looks simple but it stands for a rather complicated word, Wahrscheinlichkeit. That simply means "probability" although a more correct explanation is "frequency of occurrence of a macrostate: how many microstates correspond to it" (the natural probability of one particular microstate is the inverse of this number, so its logarithm is the same thing as the logarithm of the number with a minus sign).

You may also wonder why the entropy is denoted \(S\). Well, I can tell you something. The symbol as well as the word "entropy" were introduced by Clausius in 1865. The word "entropy" was deliberately chosen to be similar to "energy". The word "ἐνέργεια" i.e. "energeia" means "activity" or "operation" in Greek; similarly, "τροπη" i.e. "trope" is a transformation. The symbol \(S\) wasn't explained but given the fact that the front page of a major related article by Clausius mentioned Sadi Carnot as the ultimate guru in the theory of heat (he was in the first wave of proponents of entropy), it's likely that \(S\) actually stands for "Sadi".

Ludwig Boltzmann mastered lots of the combinatorial exercises we often learn in the context of classical statistical physics (factorials to calculate the number of arrangements, and so on). But he spent many years on efforts to prove the second law of thermodynamics. A particular proof of this sort was the proof of the H-theorem in 1872.

Controversies about the provability of the second law

Even though Boltzmann's proof of the H-theorem is obviously right and is a template to prove the second law of thermodynamics in any microscopic theory or any formalism (including quantum mechanics), it has been attacked by irrational criticisms from the very beginning. Poor Boltzmann spent a lot of energy and created lots of entropy defending his important insight and I am sure that the crackpots who criticized him contributed to his decision to commit suicide.

The so-called Loschmidt irreversibility paradox is named after Boltzmann's former teacher, Johann Joseph Loschmidt (born in Carlsbad, Czech lands), and it is a deep misunderstanding of the origin of the second law of thermodynamics (and the arrow of time). The basic logic behind this "paradox" is that the microscopic laws are time-reversal-symmetric (or at least CPT-symmetric, if you discuss a generic Lorentz-invariant quantum field theory; the impact of the CPT symmetry is almost identical). So it shouldn't be possible to derive time-reversal-asymmetric conclusions such as the second law of thermodynamics (entropy is increasing with time but not decreasing with time).

Well, if you state it in this way, it looks tautologically impossible to prove the second law. However, what this argument completely misses is the fact that the second law of thermodynamics is a statement about statistical or probabilistic quantities such as the entropy. And to derive such statements, we actually need more than just the "microscopic dynamical laws" of physics; we also need the probability calculus. The probability calculus applied to statements about events in time is intrinsically time-reversal-asymmetric. And this asymmetry, the "logical arrow of time", is imprinted onto time-reversal asymmetries in other contexts.

For example, Boltzmann's H-theorem proves that the thermodynamic arrow of time (the direction of time in which the entropy increases) is inevitably and provably aligned with the logical arrow of time.

Technically, the critics were attacking the assumption of molecular chaos and were claiming that it was the key assumption that made Boltzmann's proof vacuous or invalid or whatever they would say. But this is a complete misunderstanding of the assumption. Molecular chaos is just a technical assumption about the velocities of the gas molecules – namely that they're uncorrelated in the initial state – which allows one to calculate certain things analytically.

But even if we introduced arbitrary correlations or other features of the initial probability distribution, it would still be guaranteed – up to negligible, exponentially supertiny probabilities – that the entropy is actually going to increase with time! If you want the entropy to decrease with time, it is not enough to start with an initial state that contains correlations between velocities. You need to start with some totally unnatural correlations between the positions and velocities that are exponentially unlikely and that happen to evolve in a way that reduces entropy for some time. The only way to "calculate" what the correlations required to start a decline of the entropy are is to actually evolve a desired low-entropy final state backwards in time. But this can't occur naturally.

The very fact that the entropy increases in the real world has nothing whatsoever to do with any details of the molecular chaos assumption. The actual reason why the second law – a maximally time-reversal-asymmetric fact about Nature – holds is that it is a claim of probabilistic character. And probability calculus has an inevitable, intrinsic, logical arrow of time. When you calculate the probabilities for a transition between macrostates \(A\to B\), you need to sum the probabilities over the possible final microstates \(B_j\), but you need to average (not sum) over the possible initial microstates \(A_i\). Think about the origin of this summing and averaging. They are unavoidable consequences of "pure logic". For the final (mutually exclusive) microstates, the probabilities are simply summed; for the initial states, the fixed "total prior probability" must be divided among many microstates.

Averaging and summing are different things. As explained in dozens of TRF posts, this difference is reflected by an extra factor of \(1/N_A\sim \exp(-S_A/k)\) and guarantees that\[

\frac{P(A\to B)}{P(B^*\to A^*)} \sim \exp[(S_B-S_A)/k].

\] The asterisks represent the time-reversal- or CPT-transformed states. (The signs of velocities/momenta etc. are reversed.)

Because the entropy difference \(S_B-S_A\) is typically incomparably larger than \(k\), Boltzmann's constant, and because the probabilities can't exceed one, it's clear that the right hand side above is either zero or infinity in the thermodynamic limit and only one of the probabilities from the numerator or denominator (the probability of the process where the entropy increases) may be nonzero. This is the actual reason why the second law of thermodynamics holds.
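To spell out the counting behind the displayed ratio (just a sketch in the notation already used, with \(N_A\) and \(N_B\) the numbers of microstates in the two macrostates and \(S=k\log N\)): we average over the initial microstates and sum over the final ones,\[

P(A\to B) = \frac{1}{N_A}\sum_{i=1}^{N_A}\sum_{j=1}^{N_B} P(A_i\to B_j),\qquad
P(B^*\to A^*) = \frac{1}{N_B}\sum_{j=1}^{N_B}\sum_{i=1}^{N_A} P(B_j^*\to A_i^*).

\] The time-reversal (or CPT) symmetry of the microscopic laws guarantees \(P(A_i\to B_j)=P(B_j^*\to A_i^*)\) term by term, so the double sums cancel in the ratio and\[

\frac{P(A\to B)}{P(B^*\to A^*)} = \frac{N_B}{N_A} = \exp[(S_B-S_A)/k].

\] The asymmetry comes purely from the averaging-versus-summing logic, not from any asymmetry of the dynamics.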

Glimpses of the quantum, Bohrian thinking about the world

Why was it so hard for people to understand these things? And why is it still so hard for some physicists – such as Brian Greene or Sean Carroll – even today, more than 100 years after these discoveries were made? Well, I think that the reason is the same as the reason why certain physicists (including the two I have mentioned) can't understand the foundations of quantum mechanics. In fact, they face almost the same problem. Let me explain why.

Niels Bohr is the author of a quote that's been mentioned on this blog many times:
There is no quantum world. There is only an abstract physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature...

As quoted in "The philosophy of Niels Bohr" by Aage Petersen, in the Bulletin of the Atomic Scientists Vol. 19, No. 7 (September 1963); The Genius of Science: A Portrait Gallery (2000) by Abraham Pais, p. 24, and Niels Bohr: Reflections on Subject and Object (2001) by Paul McEvoy, p. 291
I have mentioned that this is the key psychological obstacle for many people in the context of quantum mechanics. They're permanently looking for a classical model. There is an objective reality, they believe, described by the values of some mathematical objects (functions): the mathematical functions and the objective reality are isomorphic to each other and observations are only passive reflections of this underlying reality.

In quantum mechanics, things are different. The fundamental things are propositions that we can make about observables (properties of physical systems) at various moments. The laws of quantum mechanics relate the truth value (and, more generally, probability) of these propositions directly and there is no way to reduce them to a classical model or objective reality in between. Although all of us laughed when we were kids and we were told about philosophers who question the existence of objective reality, it's nevertheless true that at the fundamental level, objective reality doesn't exist. It is just an emergent, approximate concept.

But what I haven't emphasized sufficiently often is that Bohr's quote actually applies to classical statistical physics as well. Many people misunderstand the proposition-based character of physics in this context which leads them to invent lots of irrational criticisms against classical statistical physics, too. What's going on?

The critics are still imagining that Nature is found in a particular, deterministically evolving microstate, and they try to evaluate the second law of thermodynamics from this viewpoint. In classical physics, it is tolerable to imagine that there is a particular, deterministically evolving microstate behind all the information about Nature; in quantum physics, it isn't tolerable, as Bohr's quote above pointed out.

However, even in classical statistical physics, the existence of such a particular, deterministically evolving microstate is completely irrelevant for the validity of the second law of thermodynamics. Why? Because the second law of thermodynamics isn't a statement about a particular microstate in the phase space (or Hilbert space) at all! It is an intrinsically statistical statement about a collection of such microstates or about a probability distribution and its evolution.

Let me give you an example. The second law says, for example, that...
...if you place a hot bowl of soup on a cold table and measure the soup-table temperature difference 20 minutes later, it's almost guaranteed that you will get a smaller number than you obtained at the beginning.
Let's analyze it a little bit. The first thing to notice is that the sentence above is a proposition, not an object. It is a probabilistic proposition and quantitatively speaking, it is true because we may show that for the proposition to be wrong, the entropy would have to decrease but the probability of such a process is exponentially tiny, something like \(p\sim \exp(-10^{26})\). The smallness of this number is what we mean by "almost guaranteed".

Fine. How is it possible that such a time-reversal-asymmetric proposition follows from the time-reversal-symmetric microscopic dynamical laws of physics? To see the answer, we must realize that the proposition deals with macroscopic objects such as a hot bowl of soup and a table. It's important to notice that these phrases don't represent any particular microstates – precise arrangements of atoms. It's very important to acknowledge that these phrases represent macrostates – statistical mixtures of microstates that look macroscopically (almost) indistinguishable. That's true for the soup in the initial state as well as the soup in the final state.

It is a technical detail whether these statistical mixtures are "uniform" (the same probability for all of the microstates) or not; that's just the difference between microcanonical and canonical or grand canonical ensembles. These mixtures don't have to be uniform. What matters is that there are many microstates that have comparably large probabilities to be realizations of the concepts such as a hot bowl of soup. That's why the proposition "soup will cool down" above is a proposition about a transition between an initial state and a final state. And it is a proposition that is appropriately "statistically averaged or summed" over the microstates.

As I have emphasized above (and in dozens of older TRF blog entries), the right way to statistically treat such combined propositions about many microstates or macrostates is to sum over the possible final microstates, but average over the possible initial microstates (with weights identified with some "prior probabilities" that depend on other subjective choices and knowledge, but you won't lose much if you assume that all initial microstates in a set are equally likely). When you sum-and-average these transition probabilities for microstates of the soup+table, you will get a result that is totally time-reversal-asymmetric. The entropy increases because the summing/averaging asymmetry favors a larger number of "fellow microstates" for the final state and a lower number of "fellow microstates" for the initial state.

It's the mathematical logic, pure probability calculus applied to propositions about things that occur at various moments of time, that is the source of the time-reversal asymmetry. One doesn't need any time-reversal asymmetry of the microscopic laws. Indeed, these laws are time-reversal-symmetric (or at least CPT-invariant in the case of quantum field theories but the CPT-invariance plays the same role).

There is nothing paradoxical about the validity of the claim "soup will cool down". Every sane person knows that it's true. And Nature doesn't need any time-reversal-asymmetric terms in the microscopic equations of motion for the elementary particles to cool down the damn soup! It just cools down because of basic statistical considerations.

Things would be different if we made a statement about a particular microstate of the hot bowl of soup. Would a particular microstate of soup (in contact with a cold table) evolve into a microstate that looks like hotter soup or colder soup? Now, this question isn't completely well-defined. You would have to tell me what microstate you are actually asking about. And by this comment, I really mean that you would have to tell me the \(10^{26}\) positions and velocities of elementary particles in the soup with amazing precision that I need to make the prediction.

No one ever does that in the real world and it isn't really needed because we know that, with insanely unlikely exceptions, whatever the initial state of the soup is, the soup will just cool down – after a second, after a minute, it will always be cooling down. A macroscopic decrease of the entropy is virtually impossible. It will never happen anywhere in the Universe during its lifetime (except for possibly super long timescales such as the Poincaré recurrence time).

What about the unlikely exceptions? Yes, a tiny fraction of the states, \(1/\exp[(S_B-S_A)/k]\) of them, will evolve in a way that decreases the entropy. But these exceptional states can't be isolated by any natural condition on their velocities that would only look at what their values are "now" (in the initial state). The only way to define these special states is to say that these are the states that will just happen to evolve into low-entropy final states.

Indeed, if you define the initial state of the soup in this way (it is a microstate that evolves into a lower-entropy final state by the microscopic equations of motion), the right statement about its evolution will be that it will evolve into a lower-entropy final state. But this proposition will be an uninteresting tautology. What the actual second law of thermodynamics is concerned with is something entirely different: realizable initial states of hot soup and the phrase "soup" that is simply defined as a statistical mixture of these microstates which forces you to use statistical and probabilistic methods to evaluate the probabilities and truth values of propositions!

The critics are also entirely wrong that the same comments would apply to the final state. It is not true that the final microstate of the soup is "generic". Instead, among the equally high-entropy states, it is an extremely special microstate because it has evolved from a lower-entropy initial state and there are just "few" of these low-entropy initial microstates. We are told that it has evolved; it is a part of the homework exercise we were supposed to solve!

When we discuss the evolution of a bowl of soup on the table, it would be totally incorrect to think that "bowl of soup" in the final state denotes an equal statistical mixture of all conceivable similar high-entropy microstates of the soup and the table. Indeed, the very formulation of the problem says that the bowl of soup was sitting on the table so the final soup did evolve from an initial state, and it must therefore be a special microstate.

The key difference between the initial state and the final state is that it is legitimate to assume that all the allowed microstates in the initial state are comparably likely; but it is not legitimate to assume that all the macroscopically similar final microstates are equally likely. The latter claim about the final state is illegitimate simply because we aren't allowed to choose the probabilities of the final microstates; they are – by the very definition of the adjective "final" or the noun "future" – determined from the probability distributions in the initial state and the properties of the initial state in general. The future evolves from the past, not vice versa!

On the other hand, the initial state evolved from some states at even earlier instants of time and the formulation of the problem says nothing about those. That's why it's allowed to organize our knowledge about the initial state as a statistical mixture of microstates of a certain kind – where all the microstates are comparably represented. That's right because the only thing that we know about the initial state is that it is a hot bowl of soup etc.; this state doesn't have any special "micro" properties. The final microstate does have some special "micro" properties (correlations) because – as the description of the very problem says – this final state evolved from an initial state that was also a soup. We can't revert this statement and say that the initial state evolved from the final state because – by definition of the words "initial" and "final" – it just hasn't.

As you can see, the critics reach wrong conclusions because they're sloppy about what we know and what we do not know. But it's a part of their philosophy to be sloppy about "what we know" because they think that knowledge is irrelevant spiritual subjectivist solipsist stuff that has nothing to do with physics. So they behave as if they knew the exact microstate. But that's a complete fallacy. They do not know the exact microstate and if they assume that they do, and that the initial state is very special, they inevitably reach wrong conclusions that are easily falsified by observations. Statistical claims about the soup or any objects in thermodynamics are all about our knowledge and ignorance. It's very important to distinguish what we know and what we do not know and what we partially (probabilistically) know and what the probabilities are.

Classical statistical physics is about the careful derivations of true (or extremely likely) claims about systems with many degrees of freedom out of some other true (or extremely likely) assumptions that we were told to be valid. It is all about propositions. Quantum mechanics has upgraded this principle to a new level because it became impossible – even in principle – to assume that there is a particular "objective reality" (microstate at each moment) at all. Nevertheless, the feature – that physics is about making right propositions, not about mindlessly visualizing a "model of the precise thing" – was already present in classical statistical physics because even classical statistical physics tells us that interesting statements about Nature (or soup) are statements encoding partial and probabilistic knowledge of some features of Nature (or soup) and we should be very interested in how these statements are related to each other.

So the deluded folks who helped to drive Ludwig Boltzmann to suicide were direct predecessors of the contemporary anti-quantum zealots in the same sense in which Ludwig Boltzmann himself was a forefather of modern physics, a Niels Bohr prototype.

And that's the memo.