Monday 31 December 2012

Prediction isn't the right method to learn about the past

Happy New Year 2013 = 33 * 61!

The last day of the year is a natural moment for a blog entry about time. At various moments, I wanted to write about the things that the year 2012 brought us.

The most important event in science was the discovery of the \(126\GeV\) Higgs boson (something that made me $500 richer but that's of course the least important consequence of the discovery) but those of us who were following the events and thinking about them rationally have known about the \(126\GeV\) Higgs boson since December 2011.

Lots of other generic popular science sources recall the landing of Curiosity and other things. But let's discuss something else. Something related to time.




Cara Santa Maria of The Huffington Post (I thought that Santa Maria was a ship, not a car) posted an article about the arrow of time and embedded the following video interview with Sean Carroll.



Clearly, he hasn't learned or understood anything at all over those years. Maybe it is difficult to get a man to understand something when his job depends on not understanding it. ;-) Once again, we hear that the hottest thing in cosmology is the fact that the early Universe had a low entropy (in reality, it really follows from a defining property of the entropy which has been known from the first moment when entropy was introduced in the 19th century).

The picture with the most concentrated wrongness appears around 2:24 in the video above:



Starting from the dot at the "present", Carroll proposes to predict the future and to "predict the past" [sic]. In both cases, the entropy increases relative to the entropy of the present state.

A very similar picture appears in Brian Greene's book The Fabric of the Cosmos. Brian's picture is even worse because he suggests that the graph of the entropy is smooth, like \(S=(t-t_0)^2\), so its derivative vanishes at \(t=t_0\). It surely has no reason to vanish. Moreover, Brian omits the helpful part of the graph "actual past".

Now, look at the picture again. You see that Carroll "predicts the past" but his "prediction" for the entropy completely and severely disagrees with the "actual past" (however he determined that the entropy was "actually" lower in the past, he wasn't able to derive this elementary fact, because his derivation led to the wrong result, the "predicted past"; he must have some above-the-science method of finding the right answers without science even when his scientific methods produce wrong predictions).



Prague clearly resembled a military front again last night.

In science, when your prediction disagrees with the facts, you must abandon your theory. Instead, Sean Carroll just doesn't care. He isn't thinking as a scientist at all. The disagreement between his predictive framework and the empirical fact means nothing for him; he just continues to use and promote his wrong predictive framework, nevertheless.

It's easy to see why his "prediction" of the past is wrong. The reason is that he is using the same method – prediction – that we use to predict the future. He thinks about the past in the same way as if it were the future. However, the very term
"prediction of the past"
is a logical oxymoron. It is exactly as inconsistent a sequence of words as
"sweeten your tea by adding lemon".
You just can't make your tea any sweeter by adding lemon! Instead, you need sugar, stupid. In the same way, it is wrong to use the particular method of "prediction" when you want to say/guess/reconstruct/determine something about the past. The method of "prediction" is, by definition, only good for learning something about the moment \(t_2\) out of the data about the physical system at time \(t_1\) when \(t_2\gt t_1\): you may only predict a later moment (a moment in the future, if we talk about predictions that are being made now) out of an earlier one, not vice versa!

All successfully verified predictions in science – where we use the usual methodology of predictions – satisfy the property that the predicted moment occurs later than the moment(s) at which some facts are known and inserted as input to the problem. If you use the methodology in the opposite direction, it just doesn't work! This method of determining the past is as wrong as an attempt to sweeten your tea with lemon. The wrong graph of the entropy in the past in the picture above is the easiest – and a rather universal – way to see that the methodology doesn't work for "predictions of the past".

Instead, if you want to say something valid about the past, you need to use a different methodology: retrodiction. But retrodictions obey completely different rules than predictions. Predictions produce objective values of probabilities of future events out of known facts about the past; in this sense, predictions "emulate" what Nature Herself is doing when She actually decides what to do with the world at a later moment out of the state at an earlier moment, when She is evolving the world. On the other hand, retrodictions can never produce any objective probabilities at all. The reason is that retrodictions are a form of Bayesian inference.

Bayesian inference is a method to update our opinions about the probability of a hypothesis once we see some new evidence. Now, the state (or a statement about some properties) of the physical system in the past is an example of a "hypothesis" and the data collected now (at a later moment) are an example of the "evidence".

What's important is that the Bayesian inference is a "reverse process" or a solution to an "inverse problem". The straightforward calculation starts from a hypothesis (an initial state is a part of a hypothesis about evolution) and this hypothesis predicts objective probabilities for the later moment, for the future, if you wish. These probabilities are objectively calculable because the future literally evolves out of the earlier moment (the past).

But it is not guaranteed that you may revert this evolution – or this reasoning. And indeed, in general, you can't. In fact, in statistical physics, you can't. And in quantum physics, you can't do it, either. The reason is that whenever you discuss the fate of any facts or measurements that may only be predicted statistically – and it is true both in quantum mechanics and in statistical physics (even in classical statistical physics) – things are simply irreversible.

If you start with a hot tea on the table, you may predict when the tea-desk temperature difference drops below one degree Celsius. However, if you start with a tea that is as cold as the desk, you can't say when it was 60 °C hot. This problem simply has no unique solution because the evolution isn't one-to-one, it isn't reversible. Whatever the moment when the boiling tea is poured into the cup, it will ultimately end up as a cold tea.
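To see how lopsided the two questions are, here is a minimal numerical sketch of the tea example – all parameters are invented for illustration, it's just Newton's law of cooling – showing that the forward question has a clean answer while the backward one does not:

```python
import numpy as np

# Toy Newton's-cooling model: dT/dt = -k * (T - T_desk).
# All numbers are made-up illustration values, not measurements.
T_desk = 20.0   # desk/room temperature in deg C (assumed)
k = 0.05        # cooling constant in 1/min (assumed)

def temperature(T0, t):
    """Temperature of the tea after t minutes, starting from T0 at t = 0."""
    return T_desk + (T0 - T_desk) * np.exp(-k * t)

# Forward prediction (well-posed): when does a 90 deg C tea get within 1 deg C of the desk?
T0 = 90.0
t_cold = np.log((T0 - T_desk) / 1.0) / k
print(f"The tea is within 1 deg C of the desk after about {t_cold:.0f} minutes.")

# "Prediction of the past" attempt (ill-posed): the tea is now indistinguishable
# from the desk with a thermometer that only resolves 0.1 deg C. Wildly different
# pasts are all compatible with the same present macrostate:
dT_now = 0.0001   # the current tea-desk difference, far below the resolution
for T0_guess in (60.0, 80.0, 100.0):
    t_ago = np.log((T0_guess - T_desk) / dT_now) / k
    print(f"It could have been {T0_guess} deg C roughly {t_ago:.0f} minutes ago.")
# The inverse problem has no unique solution without extra (prior) information.
```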

People such as Sean Carroll or Brian Greene correctly notice that the microscopic laws of Nature are time-reversal-invariant (more precisely, CPT-invariant if we want to include subtle asymmetries of the weak nuclear force) but they're overinterpreting or misinterpreting this fact. This symmetry doesn't mean that every statement about the future and past may be simply reverted upside down. It only means that the microscopic evolution of particular microstates – pure states – to particular other microstates – pure states – may be reverted.

But no probabilistic statements may actually be reverted in this naive way. They can't be reverted for the same reason why \(A\Rightarrow B\) is inequivalent to the logical proposition \(B\Rightarrow A\). The laws of Nature imply facts of the type \({\rm Past}\Rightarrow{\rm Future}\) but these facts can't be translated to \({\rm Future}\Rightarrow{\rm Past}\) because you would have to check all other conceivable initial states in the past and prove that all of them imply something about the future (i.e. evolve to states in the future that still obey a certain special condition) – which is virtually never the case. The past and the future play asymmetric roles in mathematical logic because of the \(A\)-\(B\) asymmetry of the logical proposition \(A\Rightarrow B\), the implication.

To deal with the microstates only – for which the time-reversal symmetry holds – means to deal with equivalences \(A\Leftrightarrow B\) only. But this template doesn't allow us to make any realistic statements about physics because the pure states "equivalent" to some states in the past (the future states that evolve from them) are complicated probabilistic superpositions or mixtures that can't be measured. Whenever we make some measurement, we need to talk about states that aren't equivalent to some natural states/information at an earlier moment, which is why we need the statements of the type \(A\Rightarrow B\) almost all the time and these implications simply violate the \(A\)-\(B\) symmetry.

In particular, if you fail to specify the precise coordinates and velocities of all atoms in your tea, or if you're talking about a large/nonzero entropy of your tea at all, then you are clearly not talking about a particular microstate. You are only talking about some ensembles of operationally indistinguishable microstates (which is why the entropy is nonzero) or, equivalently, about partial, probably macroscopic properties of your tea. And statements of this sort – for example all statements about the entropy of the tea or the tea-desk temperature difference – simply refuse to be time-reversal-invariant! Lots of friction forces, viscosity, diffusion, and other first-time-derivative terms breaking the time reversal symmetry inevitably emerge in the effective laws controlling these quantities and propositions. All the laws that govern the macroscopic quantities average and/or sum over the microstates and the right way to do so inevitably breaks the past-future symmetry "maximally". For example (and it is the most important example), the entropy-decreasing processes are exponentially less likely than their time-reversed partners that increase the entropy.

As I have emphasized many times, the asymmetry arises because the calculated probabilities must be averaged over the initial microstates but summed over the final microstates. Averaging and summing isn't quite the same thing and this difference is what favors the higher-entropy final states.
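To make the averaging-vs-summing asymmetry tangible, here is a toy sketch of mine – the macrostate sizes and the random permutation are arbitrary choices, not a model of any real system – showing that even a strictly reversible microdynamics produces asymmetric macroscopic transition probabilities once you average over the initial microstates but sum over the final ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 12 microstates split into a low-entropy macrostate L (2 microstates)
# and a high-entropy macrostate H (10 microstates). Sizes are illustrative only.
n = 12
L = range(0, 2)     # microstates 0, 1
H = range(2, 12)    # microstates 2..11

# A perfectly reversible microdynamics: one fixed random permutation of microstates.
perm = rng.permutation(n)

def macro_transition_prob(src, dst):
    """P(dst | src): AVERAGE over the initial microstates, SUM over the final ones."""
    dst = set(dst)
    hits = sum(1 for i in src if perm[i] in dst)
    return hits / len(src)   # the 1/len(src) is the averaging over the past

print("P(L -> H) =", macro_transition_prob(L, H))
print("P(H -> L) =", macro_transition_prob(H, L))
# Averaged over many random permutations, P(L -> H) tends to |H|/n = 5/6 while
# P(H -> L) tends to |L|/n = 1/6: the whole asymmetry comes from the 1/len(src)
# factor, i.e. from averaging over initial microstates but summing over final ones.
```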

There is one more consequence I have emphasized less often. The averaging (over initial state) requires "weights". If you have a finite number \(N\) of microstates, you may assign the weights \(p_i=1/N\) to each of them. However, it's not necessarily the choice you want to make or believe. There may exist evidence that the actual probabilities of initial microstates \(p_i\) – the prior probabilities – are not equal to each other. The only thing that will hold is\[

\sum_i p_i = 1.

\] The possible initial microstates differ, at least in principle. You may accumulate evidence \(E\) – it means a logical proposition you know to be true because you just observed something that proves it – which will force you to change your beliefs about the probabilities of possible initial states according to Bayes' theorem:\[

P(H_i|E) = \frac{P(H_i)\cdot P(E|H_i)}{P(E)}

\] The vertical line means "given". So the probability of the \(i\)-th hypothesis (the hypothesis that the initial state was the \(i\)-th state) given the evidence (which means "after the evidence was taken into account") is equal to the prior probability \(P(H_i)\) of the initial state (the probability believed before the evidence was taken into account) multiplied by the probability that the just observed evidence \(E\) occurs according to the hypothesis \(H_i\) and divided by the normalization factor \(P(E)\), the "marginal likelihood", which must be chosen so that the total probability of all mutually exclusive hypotheses remains equal to one:\[

\sum_i P(H_i|E) = \sum_i \frac{P(H_i)\cdot P(E|H_i)}{P(E)} = 1.

\] Note that \(P(H_i|E)\) and \(P(E|H_i)\) aren't the same thing (another potential critical mistake that the people believing in a naive "time reversal symmetry" are probably making all the time as well) but they're proportional to each other. The hypothesis (initial microstate) for which the observed evidence is more likely becomes more likely by itself; the initial states that imply that the evidence (known to be true) cannot occur at all are excluded.
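Here is a minimal sketch of such a Bayesian retrodiction with made-up numbers – two hypotheses about the state of the tea an hour ago and one piece of evidence observed now – just to show where the priors enter:

```python
# Bayesian retrodiction sketch; every probability below is invented for illustration.
# Hypotheses about the state of the tea an hour ago:
priors = {
    "was boiling hot": 0.5,
    "was already cold": 0.5,
}
# Likelihood of the observed evidence E = "the tea is lukewarm now" under each hypothesis:
likelihoods = {
    "was boiling hot": 0.60,   # a hot tea plausibly cools down to lukewarm in an hour
    "was already cold": 0.05,  # a cold tea almost never warms itself up (entropy!)
}

# Bayes' theorem: P(H_i | E) = P(H_i) * P(E | H_i) / P(E)
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
p_evidence = sum(unnormalized.values())           # the marginal likelihood P(E)
posteriors = {h: u / p_evidence for h, u in unnormalized.items()}

for h, p in posteriors.items():
    print(f"P({h} | lukewarm now) = {p:.3f}")
# The answer depends on the priors: a different observer starting with
# priors = {"was boiling hot": 0.1, "was already cold": 0.9} retrodicts a
# different past from the same evidence – unlike a forward prediction, whose
# probabilities follow objectively from the laws of physics.
```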

A particular observer has collected certain kinds of evidence \(E_j\) and he has some subjective knowledge which determines \(P(H_i|E_{\rm all})\). It's important that these probabilities of the hypotheses are subjective, they depend on the evidence that a particular observer has accumulated and labeled trustworthy and legitimate. They become prior probabilities when a new piece of evidence emerges. And indeed, one of the most notorious properties of the prior probabilities is that they are totally subjective and there's no way for everyone to agree about the "right priors". There aren't any objective "right priors".



Except for the Czechoslovak communist malls, Priors, which had to be believed to be objectively right. However, Prior is an acronym for "Přijdeš rychle i odejdeš rychle" (You quickly arrive as well as quickly depart) which quantified the product selection.

That's why the retrodicted probabilities of initial states \(p_i=P(H_i)\) always depend on some subjective choices. What we think about the past inevitably depends on other things we have learned about the past. This is a totally new property of retrodictions that doesn't exist for predictions. Predictions may be probabilistic (and in quantum mechanics and statistical physics, they are inevitably "just" probabilistic) but the predicted probabilities are objectively calculable for certain input data. The formulae that objectively determine these probabilities are known as the laws of physics. But the retrodicted probabilities of the past are not only probabilistic; their values inevitably depend on the subjective knowledge, too!

Of course, when the past is determined by the correct method – the method of retrodictions which is a form of Bayesian inference – we will find out that the lower-entropy states are exponentially favored. We won't be able to become certain about any property of the Universe in the past but some most universal facts such as the increasing entropy will of course follow from this Bayesian inference. In particular, the correctly "retrodicted past entropy" will more or less coincide with the "actual past" curve.

I think that even the laymen implicitly know how to reconstruct the past. They know that it's a "reverse problem" of a sort and they secretly use the Bayes theorem even if they don't know the Bayes formula and other pieces of mathematics. They are aware of the fact that the tea-desk temperature difference was higher in the past exactly because this difference is decreasing with time. More generally, they know that the entropy was lower in the past exactly because the entropy is increasing, was increasing, and will be increasing with time. They know that determining the past by the same logic by which we predict or expect the future is wrong, stupid, and it contradicts common sense.

Too bad that Sean Carroll hasn't been able to get this basic piece of common sense yet, after a decade of futile attempts to understand the basics of statistical physics.

And that's the memo.

Saturday 29 December 2012

Richard Dawkins vs Peter Higgs

The reverse fundamentalist vs the peaceful atheist

Two days ago, the Daily Mail, the Guardian, and other mostly British outlets amplified an amusing yet potentially serious battle between two famous scientists, Richard Dawkins and Peter Higgs:
Battle of the professors: Richard Dawkins branded a fundamentalist by expert behind the 'God particle' (The Daily Mail)

Peter Higgs criticises Richard Dawkins over anti-religious 'fundamentalism' (The Guardian)

Google News (other sources)
As you surely know, Dawkins is a proud militant atheist. In fact, he is a self-described Darwin's rottweiler. Last week, he concluded that it was worse to educate a child in a Catholic family than to let it be sexually abused by priests. ;-)



Peter Higgs has decided that the discovery of "his" boson has made him powerful enough so that his criticism may matter and in an interview with El Mundo (Spain, video), he criticized Dawkins as a "fundamentalist" for his "embarrassing" attacks on religion – or, if you wish, attacks on one of the individuals after whom the Higgs boson is also sometimes named, namely Mr God. ;-)

Incidentally, I think that many people's hateful reactions to the innocent term "God particle" reflect their anti-religious fundamentalism, too.




What do I think about those matters? It's mostly true that when it comes to similar major questions, I haven't really changed my opinions significantly since I was a teenager. Many TRF readers have noticed the robust consistency of the views and principles presented on this blog between October 2004 when it was created and December 2012 when this blog entry is being written.

However, while I haven't changed my mind about the truth, I may have changed my mind about the question "who deserves to be wrestled with and how much".

As an undergrad, I was kind of an active scientific skeptic – I mean an activist against paranormal phenomena. Needless to say, I still totally agree with the people who are doing this job (the Sisyphus movement in Czechia, for example) but throughout the years, I decided that this fight is sort of futile because the reasons behind the people's belief in similar stupidities are either biological or comparably "hard-wired and unfixable". Using plain English, the widespread belief in the paranormal phenomena ultimately reflects people's stupidity.

This is not a perfect explanation of what's going on: some smart people may be obsessed by something which means that they ultimately have different reasons than stupidity – some ethical urge to prove something to themselves or to be more spiritual or otherwise "better" – but you may view this deviation from the zeroth approximation "stupidity hypothesis" to be just pure noise because there are also many stupid people who believe the right things even though they wouldn't have a chance to rediscover them themselves – they were just brainwashed to believe true things by smarter folks.

To a large extent, I think that these two groups are guaranteed to be equally large in a free enough society. That pretty much means that it makes no sense to try to "dramatically" change the degree of people's belief in science and similar things. One may improve the world by the small, incremental, everyday work of ants – to use the words of Thomas Garrigue Masaryk, the first president of Czechoslovakia – but the "big percentages" reflecting nations' understanding of things ultimately boil down to biology or other causes we can't change too much, at least not too quickly.

While I would have good relationships with Christians at my university, I was also a sort of "unmasked opponent" of some religious attitudes. This of course became an important issue when I fell in love with a girl who undoubtedly belonged among the top 3 most fanatical believers in fundamentalist Christianity that you could have found in the Czech Republic. ;-)

As recently as in 2000, I couldn't resist attending talks by people like Eugenie Scott – the 2000 talk I attended took place in Santa Cruz, California, if I remember well – and I was a huge fan of such organizations and even of Scott herself. (One of the reasons why my excitement was decreasing in the following years was that I gradually realized that what she was saying was kind of trivial – millions of people surely understood such things – which is why I realized she wasn't "special" or "at the top of science" in any sense whatsoever, despite her being a good speaker etc.) If you appreciate that I hadn't changed my opinions about politics much either, it should be easy for you to determine that I just couldn't possibly have any significant trouble with the organized political Left in the U.S. up to around 2000. In fact, you could say that I hadn't had any such trouble until 2004 – no problems for the first 7 years I spent in the U.S.

America seemed like a country of the free to me – much like my homeland since 1989. My ex-adviser Tom Banks would sometimes show me a petition to sign – like a petition attacking John Ashcroft for absolutely no good reason. As soon as I explained to him that I had no sympathy for such ideologically driven harassment of the people by the leftists, and clarified some background linked to my country's history that made this attitude of mine unchangeable, he understood it and he always respected my politics. This was pretty much true for everyone whom I had noticed and who has mattered.

Eva Silverstein, who was a postdoc at Rutgers in the late 1990s, would loudly show that she was offended when I told Tolik Morozov, for example, that the average female brain had fewer neural cells than the average male brain – but I just never felt oppressed by similar politically correct crackpots hiding their heads in the sand in the name of an insane ideology (I mean Eva Silverstein in this case). All these troubles started in 2004 – probably because members of the Harvard faculty are the first ones who are "seriously monitored" for whether they obey the politically correct speech codes and thought codes. Only in 2004 did I start to notice that the atmosphere in the U.S. Academia is pretty much as totalitarian as the atmosphere in the Stalinist Soviet bloc, if not more so; unlike most of the folks at the schools in the socialist bloc, the U.S. leftists actually seemed to believe all this incredible stinky left-wing garbage!

But let me return to their relationship to religion.

I think that some left-wingers' anti-religious activism is often dishonest, severely overstepping the actual insights that may be backed by science, and the targets of their criticism are cherry-picked with a political goal in mind. Even though I agree with a vast majority of e.g. Sean Carroll's "right answers to the religious questions", I think that his (and other people's) focus on the religious targets isn't a sign of his passion for the truth but a symptom of a political bias.

Christians are arguably wrong about many things but so are many non-Christians – not only Muslims and believers in other religious orthodoxies but also atheists and members of various political groups. I do think that evolution is one of the most important pillars of the scientific explanation of Nature. On the other hand, it's not the only one and I am absolutely convinced that tons of (if not most of) left-wing atheists often and regularly display their misunderstanding of things that are perhaps "less fundamental" in the structure of pure science but that are far less abstract and more important for the actual decisions we have to make in our actual lives.

Whether Jesus Christ could have been born of a virgin is an amusing academic question and science surely chooses one of the answers to be sensible and the opposite answer to be silly. It is bizarre if a scientist chooses the latter answer – but as long as such idiosyncrasies are "kept in check" (e.g. by being described as eternally rare exceptions to the laws of physics), so that they don't prevent one from understanding lots of important things, a scientist-believer may still be a good or great scientist in pretty much all of science. On the other hand, if a scientist decides to believe that groups of people – defined by their sex, race, ethnicity, and other characteristics – have the same statistical distribution of various quantities such as IQ, it is a comparably universalistic stupidity that is moreover far less abstract than Virgin Mary's virginity. It affects people's interpretations of real events that take place today – and not 2000 years ago – and leads them to right or wrong political and other decisions.

I find it utterly hypocritical if someone spends his activist life constantly attacking Christians because of their disagreement with some solid insights of science but he or she doesn't spend a second criticizing the feminist or other politically correct crackpots whose contradictions with science are at least equally striking. And I don't have to talk about the politically correct crackpots. I may talk about people like Muslims. Their opposition to some rudimentary insights of science is arguably far more dramatic than the Christians' opposition. Nevertheless, they are almost never criticized by the typical organized left-wing activists in the Western Academia. (Let me mention that Steven Weinberg's integrity in particular should be applauded because he's an atheist who surely doesn't try to idealize the Muslims – and others.) As far as I can say, this proves the lack of honesty of most of these left-wing activists. They're not driven by the passion for the truth; they are driven by political goals that are ultimately shaped by their predetermined ideas – ideas, completely independent of the insights of science, about how the society should work. They only use science as an occasional convenient hostage and tool if I have to avoid the term "whore".

We could discuss specific examples of demagogy that believers sometimes offer; and specific examples of demagogy and spin that "anti-fundamentalists" such as Richard Dawkins offer. I think that every person who impartially observes these debates must have encountered many such examples on both sides so it doesn't make much sense to randomly pick examples.

Instead, my point is that I agree with Peter Higgs that people like Richard Dawkins are fundamentalists in a similar sense as the believers themselves – despite the fact that they are arguably right much more often than the believers (a comparison that may change as time goes by, however). The general character of answers to the "big questions" is always predetermined – and this comment applies to both of these opposing groups. Every statement that is positively correlated with the vague concept of God has to be supported by the obedient believers; and it has to be spat upon by the politically correct anti-believers.

Whether or not the second attitude seems to be more successful in the incorporation of the scientific insights of the last 20 or 100 or 500 years, both of these approaches are equally fundamentalist – and both of them are intrinsically unscientific. Science isn't defined by its goal to show that every idea positively correlated with the vague concept of God is wrong much like science should never have been defined by its consistency with God. Science is simply independent of these prejudices – both of them and many others. Science impartially evaluates the empirical data and the right conclusions aren't and can't be determined a priori.

We could argue that many patently wrong opinions about physics – including the anti-quantum zeal – are linked to the "anti-fundamentalism" of the leftists. People like Mephisto can't ever understand quantum mechanics because they don't want to. They don't want to because the right understanding of quantum mechanics isn't convenient for an overall package of talking points they eternally want to use in order to show their alleged superiority over the believers and others. They have just decided that the world must fundamentally be a reflection of an "objective reality" in the classical sense and if they allowed themselves to learn something else, their whole system of beliefs – the "value of their lives" – would be shattered. The only problem is that many of these beliefs and talking points are just demonstrably wrong. They're victims of various delusions in the same sense as many believers.

I don't want to overgeneralize this observation – and Mephisto's situation and the situation of dozens of others I know – because as far as I can say, many right-wingers and Christianity-oriented folks fail to understand quantum mechanics properly, too. ;-) But what I want to generalize is the statement that science only allows us to support certain claims but not others; no conclusions are quite clear from the beginning; and there's no reason why all the future scientific insights should still belong to a preconceived intellectual straitjacket, whether this straitjacket looks like a religious one or an anti-religious one.

There are people among the believers and there are people among the anti-believers who just fail to understand this simple point (about the inadmissibility of dogmas in science) which is why they're fundamentally analogous to each other and why Peter Higgs' criticism of Richard Dawkins is justified whether or not the percentage of correct statements presented by Richard Dawkins is above 50%.

And that's the memo.

Wednesday 26 December 2012

Amplitudes, permutations, Grassmannians

Back in July, I mentioned some new highly intriguing results by a group of physicists and mathematicians including Nima Arkani-Hamed and others:
Permutations join twistor minirevolution
That report echoed a talk at Strings 2012. Finally, the paper is out.



One of the 263 figures in the paper.

On the 154 pages of their new article,
Scattering Amplitudes and the Positive Grassmannian (PDF),
Nima Arkani-Hamed, Jacob L. Bourjaily, Freddy Cachazo, Alexander B. Goncharov, Alexander Postnikov, and Jaroslav Trnka expose the power of their new formalism based on the positive Grassmannians (a Grassmannian in this sense is the space of \(k\)-dimensional planes in an \(n\)-dimensional space; the positivity condition means that all the minors, i.e. the sign-corrected subdeterminants, have the positive sign).
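If you want to play with the positivity condition yourself, here is a quick numerical check as I understand it from the definition above – my own illustration, not code from the paper: a \(k\times n\) matrix represents a point of the positive Grassmannian if all of its ordered \(k\times k\) minors (the Plücker coordinates) are positive.

```python
import numpy as np
from itertools import combinations

def is_positive_point(M, tol=1e-12):
    """Check whether the k x n matrix M represents a point of the positive
    Grassmannian: all ordered k x k minors (Plücker coordinates) must be positive."""
    k, n = M.shape
    minors = [np.linalg.det(M[:, list(cols)]) for cols in combinations(range(n), k)]
    return all(m > tol for m in minors), minors

# A standard type of example: a 2 x 4 "Vandermonde-like" matrix built from
# increasing parameters t1 < t2 < t3 < t4 has all 2 x 2 minors equal to t_j - t_i > 0.
t = np.array([1.0, 2.0, 3.0, 5.0])      # illustrative parameters
M = np.vstack([np.ones_like(t), t])     # a point of G(2, 4)
ok, minors = is_positive_point(M)
print("positive point of G(2,4)?", ok)
print("Plücker coordinates:", np.round(minors, 3))
```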




The physical building blocks are UV and IR divergences and they are composed into magnificent structures by new Grassmannian-valued variables whose treatment may be described in terms of polytopes, permutations, and otherwise. Conformal theories are mostly analyzed in twistor variables because it is "wise" to do so. However, I feel that twistors have been downgraded to secondary technical tools in this research.

Their methods apply to 2-dimensional integrable systems, the 3-dimensional ABJM theory of the membrane minirevolution, the 4-dimensional maximally supersymmetric gauge theory, and others. They make the full Yangian (or related) symmetries manifest while obscuring locality. So far, I don't quite understand whether the appearance of these very different theories in one paper is just a symptom of some superficial mathematical similarity in the tools that can be used – this is what mathematicians often market as unification but I would never buy it – or whether the unifying picture is more profound.

It's a long paper and the length itself will probably discourage many people from reading it, especially people who resemble your humble correspondent by the expectation that a deeper understanding of a system should ultimately make it possible to write more concise explanations of theories, but a quick reading makes it self-evident that it's a paper that badly deserves to be read. So please try to look at it.

Update: I decided to read the paper rather systematically. It's beautifully written, many things that were only vaguely clear to me or that I had to "guess" are clearly articulated. I will probably postpone the evaluation of my potential far-reaching corollaries and interpretations (that I've spent quite some time thinking about) to a future date, after I am confident that I understand the known stuff properly.

Tuesday 25 December 2012

Christmas rumor: \(105\GeV\) dimuon excess at 5 sigma

Update: The best physicist on the territory of Argentina right now, Paul Frampton, wrote me that the signal could perhaps be a sign of a dilepton from the 331 models which enhance the electroweak \(SU(2)_L\) to an \(SU(3)_L\). The lower limit on the mass of such states also seems to be \(1\TeV\) but weaker coupling constants could perhaps work. Check e.g. this 2000 paper for a quick review of the particle content of the 331 models or, even more relevantly, this 1992 paper discussing dileptons in 331 models and highlighting Paul Frampton's own pioneering contributions (thanks, Joseph S.!).

Also, the 331 gauge group may be embedded into \(E_6\) GUT which offers additional possible explanations of like-sign dileptons, for example a leptophobic \(Z'\) boson.
I hope that the TRF readers are enjoying their Christmas, their Saturnalia, their Hanukkah, or at least their first days after the winter solstice (except for TRF readers in Islamic countries where all such pagan holidays are banned: those readers are wished to survive instead).



We have gotten used to the news from the LHC that meticulously and precisely confirms every small feature of every graph of every final state that may possibly occur when a proton pair collides. Because of this "habit" of ours, the following rumor may sound shocking, stunning, unbelievable.

Well, it's a rumor – and one posted at a highly unreliable place – so it should remain unbelievable for some time and at least to some extent. But it's interesting enough so that I can't miss it because if the rumor is true, it's an amazing Christmas gift from the LHC.

Phil Gibbs claims that he has been browsing through some really stinky garbage at a notorious crackpots' discussion forum led by an immoral sourball when...




...when he saw a rumor by the user named "crossing symmetry" who wants to remain anonymous, who claims to have an ATLAS friend, and who says that he was told that the ATLAS detector has seen a very strong, 5.02-sigma signal (probably a local significance) in the graph of the number of like-sign dimuon events localized near the invariant mass\[

m(\mu^\pm\mu^\pm)=105\GeV.

\] It is supposed to be based on 14 events.
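If you want to see what the invariant mass of such a like-sign pair means operationally, here is a small sketch with invented muon kinematics – nothing to do with the actual ATLAS events, just the standard formula \(m^2 = (E_1+E_2)^2 - |\vec p_1 + \vec p_2|^2\):

```python
import numpy as np

MUON_MASS = 0.105658  # GeV

def four_momentum(pt, eta, phi, mass=MUON_MASS):
    """Return (E, px, py, pz) in GeV for a muon with transverse momentum pt,
    pseudorapidity eta, and azimuthal angle phi (all values below are invented)."""
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz = pt * np.sinh(eta)
    E = np.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return np.array([E, px, py, pz])

def invariant_mass(p1, p2):
    """m^2 = (E1 + E2)^2 - |p1 + p2|^2 for the muon pair."""
    p = p1 + p2
    return np.sqrt(p[0]**2 - np.dot(p[1:], p[1:]))

# Two same-sign muons with made-up kinematics, just to exercise the formula:
mu1 = four_momentum(pt=60.0, eta=0.3, phi=0.1)
mu2 = four_momentum(pt=48.0, eta=-0.5, phi=2.4)
print(f"m(mu mu) = {invariant_mass(mu1, mu2):.1f} GeV")
```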

Note that the total electric charge of these pairs of muons or antimuons is equal to \(Q=+2\) or \(Q=-2\) which makes the suggested intermediate particle rather unusual. Moreover, it shouldn't be an "ordinary" doubly charged Higgs boson that would belong to an electroweak triplet (well, I wouldn't call any doubly charged Higgs boson ordinary so the quotes were needed) because the lower limit on those beasts' mass has already been set to \(300\)-\(400\GeV\).



The combination of the strange charge with the strange apparent lepton number of the new particle, also \(L=\pm 2\), strengthens my belief that this rumor is a Christmas chimera but I would be lying if I told you that I am not trying to search for models that would incorporate such an animal. ;-)



The rumor passes the basic test. Those 14 events are supposed to arise from \(13/{\rm fb}\) of the \(8\TeV\) 2012 data. One may look at \(4.7/{\rm fb}\) of the \(7\TeV\) 2011 data which was evaluated in a recent October 2012 preprint and one sees some highly suggestive excesses near \(70\GeV\) and \(100\GeV\) in Figure 1b copied above. It seems totally plausible to me that the excesses above have grown to the 5-sigma excess quoted in the rumor. Note that these excesses would be independent so one could vaguely combine the 2-sigma excess in the picture above (2011) with the 5-sigma excess from the rumor (2012) to get roughly a 6-sigma excess (assuming the masses agree well enough) which would arguably be enough for 5 sigma even after the look-elsewhere taxation.
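For the record, the simplest back-of-the-envelope way to combine two independent excesses of the same would-be signal is to add the Gaussian significances in quadrature – a crude rule of thumb that assumes both measurements see the same central value, not the collaborations' actual combination procedure:

```python
import numpy as np

# Rough combination of two independent local excesses of the same hypothetical
# signal, treating the quoted significances as independent Gaussian z-scores.
z_2011 = 2.0    # the eyeballed ~2-sigma excess near 100 GeV in the 2011 data
z_2012 = 5.02   # the rumored 2012 ATLAS local significance

z_combined = np.sqrt(z_2011**2 + z_2012**2)   # quadrature (same central value assumed)
print(f"combined local significance ~ {z_combined:.1f} sigma")
# Roughly 5.4 sigma with these inputs; an honest number would require the full
# combined likelihood and the look-elsewhere correction mentioned above.
```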

In fact, you may see significant excesses in a similar region reported already on Figure 1 of this January 2012 paper based on \(1.6/{\rm fb}\) of the 2011 data only, too.

OK, now, let me mention that the like-sign dimuon events have been searched for as evidence for supersymmetric models. See, for example, the 2004 Fermilab thesis by Yurkewicz. It was not written by Katie Yurkewicz if you happen to know her but it was dedicated to her by an Adam with the same last name. ;-)

In this thesis, you may learn about something that Phil Gibbs has already mentioned. Like-sign dimuon events are usually looked at in situations when this lepton pair may be a subset of three (or more) final-state leptons – multileptons – and the condition that two leptons have the same charge is a convenient constraint that eliminates the "way too ordinary" events that boil down to opposite-sign leptons.

However, I must tell you that in the most conventional SUSY models, one doesn't expect a resonance – a sharply determined value of "the mass" – because the new particles are being created in pairs and the invariant mass of the pair is inevitably continuous (but bounded by an inequality to lie above a threshold).

If you want to see a cute nostalgic paper on this issue, look at this 1990 paper on the production of gluinos at the Superconducting Supercollider (murdered 3-4 years later). We learn from the abstract that "The like-sign dimuon signature displays the Majorana property of gluinos." You surely want more than just an abstract. Here you have a 2002 thesis by Chadd Smith. You may learn about a similar dilepton search in which gluino pairs are produced. The (like-sign) correlation between the charges of the resulting muons arising from the two gluinos boils down to the Majorana spinor properties of the gluinos (if they're Majorana).

You may also look for the 2010 diploma thesis by Jason Mansour who tried to use the same signature to search for the associated production of charginos and neutralinos at the D0 experiment.

It would be too brutal to think that \(105\GeV\) could be the combined mass of the gluino pair. But for some other gauginos, this super-low mass could be at least conceivable. The resonant character of the excess could be due to some fast enough decrease of the signal above the threshold but let me admit that yes, I am just offering you some wishful thinking not supported by solid calculations at this point. Phenomenologists should be able to clarify these issues quickly.

Sunday 23 December 2012

Obama gives medal to Drell, Gates, Mazur

I hate honors but this list is kind of interesting. Barack Obama gave the National Medal of Science to 12 scientists and the National Medal of Technology and Innovation to 11 technologists and innovators:
Obama names 23 scientists and innovators as medal winners (Cosmic Log)
The scientists include Sidney Drell, an accomplished violinist, hadron collisions specialist, and arms control expert at SLAC who is also the father of Persis Drell, the current SLAC director.




The generation that is not quite the youngest one knows Sidney Drell's name from one more context: we know him as a co-author of the Bjorken-Drell textbook on quantum field theory, Relativistic Quantum Mechanics (1964) and Relativistic Quantum Fields (1965). It was the ultimate mainstream standard for such a textbook before it was eventually surpassed in that role by Peskin and Schroeder.

It was hard reading for me when I was a high school kid. But it was arguably the first book from which I understood that the existence of photons – light quanta – follows from the "ordinary" quantization procedure applied to the electromagnetic field. I wonder how hard these things are for generic beginners. They have looked self-evident to me for quite some time but unlike other sources of difficulties people have, I can understand that this one requires some concentration.

Another winner is Sylvester James Gates, a top supergravitist and string theorist. Incidentally, if you have never tried to watch his 24 lectures Superstring Theory: the DNA of Reality (2006), you should give it a try. They look cute:

(Video excerpts no longer available for free. Search for "DNA of Reality Gates" at amazon.com.)

Finally, the list of the science medal winners includes Barry Mazur, Harvard mathematician whom I know very well from numerous dinners in the Society of Fellows. He is a nice and sensitive man who sees many emotional things in number theory, geometry, and their relationships. ;-)

Friday 21 December 2012

SciAm, firewalls, and deterioration of the physics community

Jennifer Ouellette wrote a nice piece on black hole firewalls for the Simons Foundation and for Scientific American:
Black Hole Firewalls Confound Theoretical Physicists (via Synch).
Well, more precisely, it's nice and informative if you assume that her task was to uncritically promote the views of Joe Polchinski, Leonard Susskind, Raphael Bousso, and a few others. From a more objective viewpoint, the article's main message is wrong and the text misinterprets the state of the research, too.



Somewhat but not entirely typical Czech skeptical and blasphemous attitude to Christmas. Xindl X: Christmas Eve arrived when I guzzled at home. He feels like being in shackles, much like his Christmas tree. He also has a tip and hanging balls. No reason to celebrate another lost year, he wants to return to the Saturnalia again. By the New Year, he switched from guzzling to light drugs.

Over the last decade or so, my great respect for some of the most famous names in high-energy physics was diminishing and this trend has become undeniable by now. It seems to me that my previous worries about the apparent deterioration of meritocracy within the field have turned out to be a tangible reality.




Ouellette explains what a black hole is, that nothing should happen at the event horizon, that it Hawking radiates, how the Hawking radiation may be thought of, and that various arguments leading to information loss and other undesirable things have been identified as incorrect once the black hole complementarity was appreciated.

However, when it comes to the AMPS thought experiment, it just uncritically parrots the wrong statements by Polchinski et al.:
The interior (A) and the near exterior (B) have to be almost maximally entangled for the space near the horizon to feel empty; the near exterior (B) is almost maximally entangled with some qubits inside the Hawking radiation (C) because of the Hawking radiation's ability to entangle the infalling and outgoing qubits. Because of the monogamy of entanglement (at most one maximal entanglement may involve (B) at the same time), some assumptions have to be invalid. Unitarity should be preserved, which means that the A-B entanglement has to be sacrificed and the space near the horizon isn't empty: it contains a firewall that burns the infalling observer.
That may sound good but, as repeatedly explained on this blog, this argument is wrong for a simple reason. The degrees of freedom in (A) and those in (C) aren't independent and non-overlapping. It is the very point of the black hole complementarity that the degrees of freedom in (A) are a scrambled subset of those in (C). The degrees of freedom in (A) are just another way to pick observable, coarse-grained degrees of freedom and "consistent histories" within the same Hilbert space. So the entanglement of (B) with "both" (A) and (C) isn't contradictory in any sense: it's the entanglement with the same degrees of freedom described twice.

Among the 25 papers that currently cite the original firewall paper by AMPS, this point is understood by a majority of the papers. A majority of them do explain that AMPS is wrong and they add various things and more detailed descriptions of this point – and it is mostly in those details that they differ from one another.

But despite these papers' being not only right but belonging to a majority, and despite Scientific American's celebration of "majorities" in other disciplines of science, you won't learn about the very existence of this majority at all. You won't hear about a single argument explaining why the AMPS reasoning is invalid. Even if you disagreed that e.g. Raju and Papadodimas understand the black hole information issues more correctly than Polchinski does, you should still agree that Joe's pretense that these Harvard-trained physicists' 72-page-long work (and other papers) doesn't exist at all is disrespectful, to say the least. In fact, you're actively told by Ouellette that no such argument exists! And you surely don't learn about any work that is more important and more valuable than AMPS – work that answers questions AMPS couldn't even ask because they already made a grave mistake in the first steps. More seriously, I think it's not really Jennifer's fault.



The most played video of 2012, Psy: Gangnam Style by a South Korean rapper, has collected more than one billion views. The version above contains some more familiar and friendly characters than the unknown South Korean non-scientist.

It seems clear to me that this imbalanced perspective was incorporated into the article by the main "informers" among the scientists who communicated with Jennifer. This conclusion of mine partly boils down to the amazing self-glorification of Joe Polchinski in particular. So we're learning that if there's a mistake, the mistake is not obvious, AMPS is a "mighty fine paradox" that is "destined to join the ranks of classic thought experiments in physics" and it's the "most exciting thing that happened since [Bousso] entered physics". Holy cow. The mistake is obvious. AMPS simply assume that complementarity can't hold by insisting on separate parts of the wave function that are responsible for observations inside and outside. That's a wrong assumption, so it's not shocking that various corollaries such as the "firewall" at the horizon are wrong, too. This assumed denial of complementarity is as wrong as the assumption that simultaneity has to be absolute – an assumption made by those who "debunk" Einstein's relativity; the error is in step 1 and means that they just didn't understand the original insights.

I am grateful to remember some of the times when the progress in theoretical physics was so intense and the process of discovery of manifestly right new insights was so effective that wrong papers would simply disappear and be ignored. Individual great contributors were celebrated but the number of "manhours" invested into verification and extension of important results was so high that it wasn't really possible for a group of people – not even the most famous ones – to deliberately spread wrong results. But gradually, over the following years, some physicists eager to make "new revolutions" entered the mode in which they became happy to publish revolutionary claims even if they had to know that they were incorrect or at least they required a great amount of skewed self-brainwashing to be believed.

Moreover, and this is even more worrisome, many famous physicists apparently decided that they don't want the real progress in physics to continue and they just want to use their relative fame to convince their environment that whatever stupidity they are writing papers about right now is the most important thing happening in the field. And they seemingly decided they wanted to be the last golden generation. I feel that many of those folks don't want the next generation to emerge and thrive at all. The remarkable omission – well, downright denial – of some of the great work in Ouellette's article forces me to adopt this conclusion.

And the omitted work isn't great just by some subjective evaluation of the content. It's also written by some of the greatest minds of their generation that have been shaped at the greatest universities, that have joined faculties of other universities in some cases by now, and that have proved (usually already as students) their ability to understand everything that Polchinski and others have ever written and their ability to find comparable new insights and go beyond them, too. But when you look at magazines such as Scientific American, it seems clear that there's pretty much no one left in the physics establishment – and in the journalistic and P.R. environment surrounding the physics community – who is interested in genuine progress anymore.

What I no longer see in the physics community is the passion for the truth, at least among the folks who are most visible in the media. I feel that the most competent folks working on similar research are social-engineered from above and from outside to languish and remain invisible. It seems to me that the achieved physicists are gradually switching to the production of random, cheap, and wrong ideas of the type that "everyone may understand" and their unquestionable defense and promotion.

Fifteen years ago, 't Hooft would start to write lots of preposterous papers about hydrodynamic models replacing quantum mechanics and all this amazing junk. At that time, I didn't care because I saw a thriving community that didn't have to respond to those things because it was busy with genuine advances. This thriving community had leaders – some of the very names described negatively in this blog entry surely belonged to it.

But whether or not it is a viable strategy for the "real deal" physicists to ignore the bad apples is a subtle question that depends on the balance of influence at various places. I never liked the anthropic hype but its influence on the community seemed tolerable because there existed natural authorities that were defending the kind of research I would consider valuable who could have been heard, and so on.

Sometime around 2006 when hardcore crackpots such as Lee Smolin and Peter Woit "charmed" the media with their uninterrupted stream of shameful and hostile lies, I decided that the balance had been almost irreversibly destroyed in favor of the bad apples. At that time, I was scared to see that the only place in which most of the top physicists had the courage to even mention the basic fact that Lee Smolin is a crackpot was a closed room somewhere at the KITP in Santa Barbara. Journalist George Johnson was stunned because it was the first time when he learned that Lee Smolin was a crank at all; this elementary fact had to be classified. The likes of Smolin have literally hijacked the environment of science journalism and the part of public and the scientific public that is getting information from it. It was a highly unhealthy development.

It didn't end with the general spitting on the greatest advances in theoretical physics of the last 30 years. Many people, including those whom you would surely not count as Smolin's soulmates, also began to invent crackpot theories – or at least not too justified theories whose main point is to pompously reject some important principles of physics, whether or not they have sufficient evidence to do so. So Petr Hořava came up with his non-relativistic "Lifshitz" models of gravity. That was still OK. Erik Verlinde's "gravity as an entropic force" was of course much worse.

AMPS isn't as bad or as obviously wrong as "gravity as an entropic force" but it's still wrong and what's worse about it is that it is pushed by some of the names that are more famous than Erik Verlinde's name. None of those bad apples would really destroy an otherwise healthy research community but the main problem I see is that the bad apples can no longer be efficiently wrestled with. Or it's not happening. It doesn't look like anyone cares at all. Instead, it seems to me that people are defending their subjective and increasingly non-quantitative (and often downright wrong) ideas and these people's connectedness to the journalists and other folks outside the research community itself and the related populism – instead of the scientific evaluation by those who actually understand the things as experts – have become the key determinants of success.

Yuri Milner showed it's possible to change the balance in the good direction, too. Still, it's questionable how much it helps the current research. The average breakthroughs celebrated by his Fundamental Physics Prize are about 20 years old. Of course, one could say that it's because those advances that took place 20 or 30 years ago are more important than most advances in the last 5 or 10 years. Maybe, the Milner Prize will inevitably exhaust the great contributors from the 1980s or 1990s and it will have to switch to very recent research. When it happens, I hope it will still be decided by similar people and according to similar criteria.

Even if one concludes that the advances done several decades ago are more important than those found in the latest 5 or 10 years, and I am inclined to agree, it's no excuse for physicists to abandon meritocracy and to transform their fields into a dumping ground of garbage. Instead, it's their duty to meticulously continue to produce new research and filter the research offered by others so that the best seeds are found and allowed to thrive, whether these seeds are larger or smaller than the seeds found several decades ago.

Incidentally, George Musser presented Polchinski's point of view on his SciAm blog equally uncritically a week ago, too. Too bad that no one is trying to find out what some other famous folks – e.g. Witten – have to say about those matters.

Thursday 20 December 2012

Subir Sachdev on AdS/CMT

Yes, it's the date of another major failed end of the world ;-)

My ex-colleague and fellow superhero, condensed matter physicist Subir Sachdev wrote a neat article for a mostly bad magazine called Scientific American,
Strange and Stringy
It does a good job in explaining one skill of string theory from a viewpoint of someone who was definitely not trained as a string theorist. In condensed matter physics, there are various phases of matter displaying numerous kinds of behavior (and critical behavior) and things can get very complicated. However, under somewhat general circumstances, when the complexity becomes really extreme, there is another, alternative description of the situation that becomes easy, at least once you know its formalism.




So like any proper solid state physicist, Sachdev doesn't care about the right theories at the Planck scale – or any other scale that is more fundamental than the characteristic scale of the phenomena in the common materials, for that matter. But even in this seemingly mundane realm that looks as non-fundamental as Radiohead, one may apply relationships that were found by physicists who analyzed a consistent description of quantum gravitational phenomena – who studied string theory in its familiar regime.

Sachdev makes some provoking points – e.g. that he had to explain basics of condensed matter physics to some string theorists using the same caricatures as he's using, approximately speaking, for the kids in the kindergarten. ;-)

I have some doubts whether the dual AdS-like description of condensed matter physics and related problems may ever be considered an "ultimate reliable description". But string theory surely gives us new perspectives from which one may look at various physical situations and Subir Sachdev is among those who are exploiting these new opportunities very often.

Incidentally, Backreaction reviews some at least superficially clever attempts by Cliff Burgess and others to solve the cosmological constant problem within the large extra dimensions scenario. A cosmic string – or, more generally, a co-dimension-2 brane – induces a deficit angle in the surrounding spacetime but leaves the cosmic string essentially undisturbed. Such co-dimension-2 branes on a sphere may conspire to produce a rugby ball geometry which adjusts itself so that the effective 4D curvature seems to be (nearly?) zero.
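For orientation – this is just the textbook four-dimensional statement, not anything specific to the six-dimensional supergravity models above – a straight cosmic string with tension (mass per unit length) \(\mu\) leaves the nearby geometry locally flat but removes a wedge of deficit angle\[

\delta = 8\pi G \mu

\] from the plane transverse to the string (in \(c=1\) units), while the string's own worldsheet stays undisturbed; the codimension-2 branes of the rugby-ball compactifications are supposed to play the analogous role in six dimensions.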



Off-topic: For her 60th anniversary of reign, Queen Elizabeth received a chunk of Antarctica and a new rendition of the royal anthem (the video above).

Czechia opposes harsh anti-smoking EU policies

European health commissioner Tonio Borg (Malta) – who has been in this job just for one month – is behind the latest insanely harsh EU proposals to fight against the smokers. See e.g. The Guardian.



Horror-like pictures – such as the Australian graphics above – could become mandatory across the Old Continent. Flavored cigarettes – with menthol, vanilla, strawberries etc. – would also be banned, much like slim (and "natural" and "organic") cigarettes and much like packages with fewer than 20 cigarettes. That's no detail; for example, slim and menthol cigarettes alone make up 38% of the cigarettes sold in Poland.




Also, it would be illegal to write "lite" on light brands of cigarettes. In general, the firms' logos would have to become almost invisible, too. Tobacco companies generally say it won't help, will destroy the brands, and will energize the black market.

I haven't smoked for 35 years and even before that, it was just half a breath after which I coughed ;-). But I still can't overlook that the justifications of these brutal policies are cherry-picked, indefensible, and just downright irrational. For example, Borg "justified" the flavor ban as follows:
If it's tobacco, it should look like tobacco and taste like tobacco.
The only problem is that menthol cigarettes aren't just tobacco. They're tobacco with menthol, a filter, and other things. Because the assumption of the proposition above isn't satisfied, the proposition is irrelevant for the real world. Well, if you look carefully, you will see that Borg's proposition is, to the extent that it is right, just a tautology. The truly valid underlying comment would be
If it looks like tobacco and tastes like tobacco (and a few other conditions hold), then it is tobacco.
You may consider this sentence to be an operational definition of tobacco or, in Bill Clinton's jargon, you may view this sentence as a definition of is, too. (You just have to add "If it smokes like tobacco and if it blows like tobacco" among the conditions, too.) Try to look at a different, analogous situation – a proposed law forcing the EU commissioners to look like what they are:
If it is an asshole, it should look, smell, and taste like an asshole.
The problem with the relevance of this proposition is that people such as Tonio Borg are not just the A-words. They are also trying to be politicians at the same moment. They're some bound states of various properties and entities, a compromise in between them. In fact, if we were lucky, the EU commissioners could resemble politicians more than the A-word in the future.

Quite generally, the philosophy behind all these bans is the desire to ban everything related to tobacco with no exceptions. That's why the policy tries to make it illegal to "bind" cigarettes with other things such as flavors, brands, types, selection, and so on. And that's why it wants to allow negative pictures only.

But the exclusively negative pictures completely misrepresent the actual balance of changes that the cigarettes bring to the smokers. Smokers usually smoke for very good reasons. They feel more free, relaxed, elegant, in charge of themselves. Most of the smokers just love the taste and flavor. To some extent, smoking even brings them some satisfaction as a form of a protest.

There are several motivations of this kind and be sure that the potential new smokers will learn about them whether or not the society hypocritically tries to turn these "positive words on smoking" into taboos. Such attempts to reduce the availability of this information to the youth are utterly naive.

Almost everyone knows that smoking probably reduces the life expectancy by several years. No one knows the exact figure. It's not enough to calculate the average age-at-death of smokers and non-smokers in a country because the actual difference between these two averages may be due to a completely different reason (a more genetic or socioeconomic one) that just happens to be systematically correlated with the smoking rate (note that correlation isn't causation; instead, both correlated quantities are likely to be effects of some common cause, a "third player").

If you order nations according to their life expectancy and their smoking rate, you will find out that there's no significant correlation between the percentage of smokers and the average life expectancy.
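If you want to run this check yourself, here is a minimal sketch; the file name and column names are hypothetical placeholders and I am not endorsing any particular dataset:

```python
import csv
import math

# A minimal sketch for checking the claim yourself. The file name and the
# column names below are hypothetical; plug in whatever country-level
# dataset of smoking rates and life expectancies you trust.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rates, lifespans = [], []
with open("smoking_vs_life_expectancy.csv") as f:        # hypothetical file
    for row in csv.DictReader(f):
        rates.append(float(row["smoking_rate"]))          # percent of adults
        lifespans.append(float(row["life_expectancy"]))   # years

print(f"Pearson r = {pearson(rates, lifespans):+.2f}")
# |r| close to zero supports the "no significant correlation" claim; and even
# a sizable |r| wouldn't establish causation by itself (the "third player").
```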

But even if everyone knew that smoking subtracted 5 or 10 years from the average lifetime – and almost everyone believes it, anyway – most smokers would smoke nevertheless. Their cost-and-benefit analysis simply has some arguments on the opposite side as well and those often win.

I find the cigarette smoke somewhat annoying – and I also find it a bit annoying when it gets absorbed by clothes and survives. But it hasn't been a catastrophe for quite some time. There are other comparable annoyances and many greater annoyances. As a non-smoker, I actually find the horrifying graphics showing the diseases etc. to be more annoying than the cigarette smoke itself. One just walks past a newsstand, glances at some magazines, and has to look at some of these vomit-inducing images. Why? What have I done to anyone?

Almost all smokers will decide to smoke, anyway. In addition to that, we will have some little kids scared by gloomy images at the tobacco shops, something they shouldn't have seen and, I think, something they have the right not to see.

Czech psychologists mostly believe that the logic behind the bans is crackpottery. For example, child psychiatrist Petr Pöthe thinks that the horrifying graphics won't repel teenagers. Their effect may actually be the opposite one:
Kids are not behaving as stupidly as adults often want to believe. Kids are thinking and they often have far more information at their disposal than the adults do. They are deciding according to various role models and attitudes. Something that is dangerous is, on the contrary, attractive for them. Health or their appearance in 15 years is something they don't care about at all.
This description wouldn't ever apply to me but I am confident that it's the right analysis of those typical teenagers who naturally become smokers. Dr Eva Králíková, a physician fighting tobacco addiction, disagrees and thinks that the graphic images may work.

Petr Pöthe above ultimately turns out to be a potential anti-smoking radical:
I doubt that we could measure in the lab that the kid is deciding according to the picture. Life works differently.

Primarily, we should struggle to be authentic. If we're saying it's a poison, it shouldn't be sold at all. It's wrong to sell something, get money from it, and to say that it's a poison at the same moment. This is an inconsistency that reduces the adult authorities' credibility among teenagers who may end up believing nothing that the adults say.
Dr Eva Králíková adds that a ban could be fine but it's impossible to ban something that a big portion of the population is addicted to. She outlines a decades-long struggle to eliminate smoking from the society.

I don't see much evidence that this is a realistic goal or a sensible expectation about a foreseeable future. And even if this goal could be achieved, I am afraid that it would be just one symptom of a highly manipulative, non-free society that may be under construction today. Sorry, I prefer a freer society with dozens of percent of smokers.



The prettiest public Christmas tree is one that decorates... of course, the royal town of Pilsen, Czech iDNES readers decided in a poll. Congratulations to us and Merry Christmas to you! :-)

Wednesday 19 December 2012

Hawking, Nurse, Rees, lords demand Alan Turing's pardon

The Atlantic and others reveal that Stephen Hawking along with Paul Nurse, Martin Rees, and many lords I don't know wrote a petition to David Cameron in which they urge the leader to formally rehabilitate a British hero and top computer scientist, Alan Turing, who was born 100 years ago.

Even though this man invented the concept of a Turing machine, a cornerstone of computer science, and led the team of Enigma codebreakers during the war, he wasn't immune to accusations of active homosexuality, especially because they were manifestly true.




Turing was sentenced to doses of estrogen that brought him impotency combined with breasts. He wasn't sure which of those things was more frustrating but whatever the answer was, he followed the example of his fairy-tale hero, Ms Snow White, and poisoned himself with an apple of an unhealthy color that later became the logo of Steve Jobs' company – or so the plausible legend says.

Of course, I think that harassment of great men for their homosexuality is wrong. But I tend to agree that this pardon is mostly an example of a futile attempt to rewrite history. What he did was illegal at that time and Turing, much like 100,000 other convicted men, did have to face a punishment. The Britons may be ashamed of that today but they can't change their history. Most of the convicted gays are dead by now and the country can't credibly compensate them in most cases – most of the historical wrongdoing, if you classify it in this way, is here in the spacetime to stay.



Also, I feel uncomfortable with the universal "we are more civilized than our ancestors" meme. If there is a disagreement between our generations and the generations of our faraway ancestors, it's a very tough question how to determine who is right. If you organize the vote today, you get one result. But if you had organized the poll 70 years ago, you would have gotten a different result. Our ancestors could officially declare that their descendants would have had been going to be (or whatever is the right tense) perverse sickos if they learned about the opinions that would prevail in their future. They could even adopt policies that would prevent the current sicko generation from being born. Acausality is a mess. ;-)

I am inclined to consider most of the evolution to be a sign of "positive progress" but I am very far from thinking that every single habit we have today and every single value or principle we cherish today is better than its inequivalent counterpart in the past. Many things are deteriorating. The identity of these things may be subjective to a certain extent but the general myth that "everything is better today" is a symptom of a "presentist chauvinism", nothing else.

The disagreement between the generations is a dispute that may only be fairly solved "from the viewpoint of the eternity" or, using a more physical jargon, from the "spacetime point of view", due to its intertemporal character. The polls relying on today's political balances violate this "eternalist neutrality", so they're inherently unfair, regardless of your opinions about this particular ethical question. In other words, our generation should remain modest and realize that we may only "control" the world at the present time (plus or minus some error margin dictated by the inertia). We can't rewrite history; and we can't order our distant descendants in the future to protect all of our current ethical values. Our relevance and rights are limited and presentist rather than unlimited and eternalist!

Tuesday 18 December 2012

Energy from man-made tornadoes

Peter Thiel, a favorite venture capitalist of mine, just paid $300,000 to Louis Michaud, a Canadian inventor in the picture below who plans to build artificial tornadoes – the so-called [atmospheric] vortex engines: Wikipedia, Michaud's web – that may supply us with lots of energy.



This idea surely sounds provoking at first – way too close to a description of a perpetual motion machine – but I am already in a different stage in which I tend to think that this most elementary criticism is unjustified. However, it is still not clear how ambitious a change in the energy sector is being promised here.




The basic underlying mechanism is said to be the same as for a solar chimney. In the troposphere – the atmospheric layer between the surface and the tropopause, 10 km or so higher – the temperature generally decreases with the altitude. This gradient isn't far from the "adiabatic lapse rate". What is it?

The warm air heated from the Earth's surface at the bottom wants to go up because at the same pressure, it has a lower density than the cooler air. As it goes up, it expands, its pressure decreases, and so does its temperature. The calculation of these changes gives the dry adiabatic lapse rate \(g/c_p\), about 9.8 °C per kilometer; condensation of moisture releases latent heat and reduces the observed tropospheric average to roughly 6.5 °C per kilometer of height. Note that it's possible, albeit a bit counterintuitive, that the atmosphere may sustain a quasi-equilibrium with non-uniform temperatures. There's no paradox here, however: these non-uniformities are powered by the constantly added heat from the Sun and, indirectly, from the Earth's surface.
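If you want to check the numbers, here is a minimal sketch of mine – a back-of-the-envelope script using standard textbook values for \(g\) and \(c_p\):

```python
# A minimal sketch (my own back-of-the-envelope numbers, not Michaud's):
# the dry adiabatic lapse rate follows from dT/dz = -g/c_p for an
# adiabatically rising parcel of dry air.
g = 9.81        # gravitational acceleration [m/s^2]
c_p = 1005.0    # specific heat of dry air at constant pressure [J/(kg*K)]

gamma_dry = g / c_p * 1000.0   # convert from K/m to K/km
print(f"dry adiabatic lapse rate ~ {gamma_dry:.1f} K per km")   # ~9.8 K/km

# The observed tropospheric average is closer to 6.5 K/km because
# condensing moisture releases latent heat and slows the cooling.
T_surface = 288.0   # a typical surface temperature [K]
gamma_env = 6.5     # average environmental lapse rate [K/km]
for z in (0, 5, 10):
    print(f"z = {z:2d} km: T ~ {T_surface - gamma_env * z:.1f} K")
```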

This behavior stops above the tropopause, in the stratosphere. The stratosphere is stratified – "separated" into horizontal layers that keep their altitude and don't mix. It's possible because the air temperature is actually increasing with the altitude in the stratosphere – because it's being heated by the Sun – so the cooler (and therefore denser, heavier) air at the bottom has no reason to go up.

Back to the troposphere. The mixing of the air that enforces the lapse rate also brings some circulation which is responsible for many kinds of weather phenomena and winds – and tornadoes are actually the most typical ones. The processes and gradients occurring in a tornado are pretty complicated, however, and your humble correspondent isn't able to evaluate all these things.

It's conceivable that the artificial tornadoes could replace chimneys and increase their efficiency by dozens of percent. When it comes to this "evolutionary" improvement, I am willing to bet that the existing chimneys are far from optimal, so some tornado-inspired improvement is likely to exist, whatever it is. However, it's plausible that one could do better (and it seems like the folks actually claim that this is the ambition here): the modestly elevated temperatures on the surface could be enough to play the role of the "hot burning coal" that powers the flow in a conventional power plant, so if one could extract the mechanical energy from the man-made tornado, it could be energy obtained "almost for free".
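To see why it can't be literally "for free", here is a hedged back-of-the-envelope estimate of mine – treating the vortex as a heat engine running between the warm surface and the cold upper troposphere; the temperatures and the absorbed solar flux are my illustrative assumptions, not Michaud's numbers:

```python
# A hedged Carnot estimate (my assumptions, not the inventor's): the vortex
# engine is bounded by the Carnot efficiency between the warm surface and
# the cold air near the tropopause.
T_hot = 288.0    # typical surface temperature [K]
T_cold = 220.0   # typical temperature near the tropopause [K]

eta_carnot = 1.0 - T_cold / T_hot
print(f"Carnot bound on the efficiency: {eta_carnot:.1%}")   # roughly 24 percent

# If the heat source is ultimately the absorbed sunlight, the extractable
# work per unit area is bounded by eta * (absorbed solar flux):
absorbed_solar = 240.0   # global-mean absorbed solar flux [W/m^2]
print(f"work bound per unit area: {eta_carnot * absorbed_solar:.0f} W/m^2")
```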

It would be nice if someone told me whether this is physically possible at all. And if it is, whether there's an upper limit on the number of tornadoes or the energy per unit time that could be extracted from this truly attractive hypothetical source. ;-) For example, the restriction may arise because the man-made tornado would lower the lapse rate and would stop working when the lapse rate decreased beneath a certain threshold.

Thanks.

I have asked the same question at Physics Stack Exchange and there's already an interesting answer over there.

Anti-string delusions and political correctness

A science babe working for the Huffington Post left-wing website recorded a video interview with Mark Jackson who is now in Paris:



The title matches the first sentence of the interview and it is annoying.




Well, the title is:
'Superstring Theory Is A Bit Controversial,' Theoretical Physicist Mark Jackson Explains (VIDEO)
The interview has attracted over 500 comments.

While most of the interview is an okay and highly conventional layperson's introduction to string theory and cosmology (CMB), I would love to know who had the idea that the interview could – or had to? – start in this idiotic and totally dishonest way. Needless to say, there is absolutely nothing controversial about string theory – except among hopeless cranks and anti-science activists in general.

But someone must think that it's desirable if not required to include similar indefensible trash talk in reports about physics. I find the situation analogous to reports about the richest nations, the white race, or the wealthiest corporations – all things that are successful to the extent that they induce jealousy. When PC sources such as left-wing websites talk about them, they are obliged to repeat some negatively sounding myths about them, too.

In all incarnations, this political correctness is obnoxious. Sometimes I even feel it has become politically incorrect to state the self-evident fact that the critics of string theory are imbeciles.



Incidentally, one of the hottest stories (EN) in the Czech newspapers talks about Sheldon Cooper's success in saving his own life and the lives of his friends by singing a carol (CZ I, II) about St Wenceslaus, the Czech patron, in the latest episode of TBBT, one which topped Thursday ratings with 16.7 million viewers. The journalists superficially focus on Sheldon's and Leonard's inability to pronounce the name Václav (Vuht-slaff, not Vaklaff).

The Czech readers also learn about a previous Czech-related comment by Sheldon: a decade ago, diarrhea came as quickly as the Nazi army occupied Czechoslovakia. They refer to the "Pancake Batter Anomaly" episode: ;-)
Penny: Studying abroad?
Sheldon Cooper: No, visiting professor. Anyway, the local cuisine was a little more sausage-based than I'm used to, and the result was an internal Blitzkrieg, with my lower intestine playing the part of Czechoslovakia.
Sheldon may mispronounce Czech names but he knows more about not just the Czech lands but also about Saturnalia and Christmas than the other characters combined. :-)

Monday 17 December 2012

Victor Hess, Joseph Henry: anniversaries

Victor Franz Hess (24 June 1883 – 17 December 1964) was born to a royal forester in Waldstein Castle, Styria (South Austria proper). He attended various local schools and was interested in radiation.

His key work was done between 1911 and 1913. Hess wanted to show that, as expected, the radiation we detect decreases with the altitude – because it originates in the Earth and gets absorbed by the atmosphere. So he took risky balloon trips to 5 kilometers and found out that, on the contrary, the radiation was getting stronger. Well, "cosmic rays", as Robert Millikan called them in 1925 when he verified Hess's claims. Their discovery brought Hess the 1936 Nobel prize in physics.

He married a Jewish woman, had to flee the Third Reich because of that, and became a U.S. citizen. After his wife died, he married her last nurse.




Joseph Henry (December 17, 1797 – May 13, 1878) was born to Scottish immigrants in Albany, New York. His dad died soon, Joseph lived with his grandmom. He became an apprentice watchmaker and silversmith.

Instead of going into medicine, he decided to build roads. His poverty played various roles in all his academic advances. He was good, however, so they named him a professor of mathematics. He soon became interested in geomagnetism and magnets in general and built the strongest electromagnets of his time. He invented the ideal arrangement of coils that could be used by telegraphs.

Henry studied lots of phenomena in which electricity and magnetism overlapped. For example, he built an early ancestor of the DC motor and discovered self-inductance (the voltage induced in a coil by a change of the current flowing through it). Also, he discovered mutual inductance independently of Michael Faraday (Faraday published it first). Add an early doorbell and the electric relay to the list.

Many scientists remained unappreciated during their lifetimes. That wasn't Henry's case. He was admired a lot – if not too much – and also became the first secretary of the Smithsonian Institution. His self-control, patience, gentle humor, and background not attracting jealousy helped. The SI unit of inductance is named after him, much like laboratories and a house at Princeton University. He has also made contributions to the determination of the temperatures at various places of the Sun – and to aeronautics.

Henry was an early insider whom Alexander Graham Bell made familiar with the telephone and he also tried to advise Bell – and offered some compliments at official events.

Off-topic: Jo Nova informed me that your humble correspondent was quoted in The Australian (related to the climate politics). See also her article on Lord Monckton.

Exorcising Maxwell's daemons

And the lowest allowed power consumption of PCs

In our discussions about information and heat, James Gallagher said some of the usual wrong things about irreversibility – for example, he believes that the proof of the H-theorem is invalid because of the molecular chaos assumption (this assumption is a pure technicality allowing explicit calculations but the overall conclusion, the increasing entropy, is independent of any such Ansatz!).
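Just to make the qualitative point vivid, here is a toy sketch of mine – emphatically not the H-theorem proof and not James' simulation – in which an exactly reversible microscopic evolution still produces a growing coarse-grained entropy:

```python
import numpy as np

# A toy sketch (my own illustration): N free particles bounce elastically in
# a 1D box of length L. They start crowded in the left half with random
# velocities. Even though the microscopic dynamics is exactly reversible,
# the coarse-grained entropy computed from the occupation of spatial cells
# grows towards its maximum ln(n_cells).
rng = np.random.default_rng(0)
N, L, n_cells, dt = 100_000, 1.0, 20, 0.01

x = rng.uniform(0.0, 0.5 * L, N)   # all particles start in the left half
v = rng.normal(0.0, 1.0, N)        # random velocities

def coarse_entropy(x):
    """Shannon entropy (in nats) of the coarse-grained cell occupation."""
    counts, _ = np.histogram(x, bins=n_cells, range=(0.0, L))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for step in range(401):
    if step % 100 == 0:
        print(f"t = {step * dt:5.2f}   S = {coarse_entropy(x):.3f}"
              f"   (max = {np.log(n_cells):.3f})")
    x += v * dt
    x = np.abs(x)                  # reflect off the wall at x = 0
    over = x > L
    x[over] = 2 * L - x[over]      # reflect off the wall at x = L
```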

However, he has also made a statement about an algorithm to reduce the entropy with the help of his PC:
I mean I can simulate deterministic dynamical systems on my computer and reverse all the dynamics at any time - which MUST then result in a decreasing entropy if the previous system had increasing entropy.
I assure you, James, that your method doesn't work. What you suggested has been known as Maxwell's daemon and the 20th century analyses have made it clear that no such proposed device may actually reduce the total entropy.




This blog has discussed Maxwell's daemon many times. See, for example,
Feynman on the arrow of time

Arrow of time understood for 100 years

Maxwell's daemon cannot do useful work
Recall that in his 1964 Messenger Lectures at Cornell, Feynman showed that Maxwell's daemon couldn't work because the ratchet and pawl – the "wheels with teeth" – that were meant to undo the balance of a physical system ultimately worked so that they transmitted energy/heat from the warmer body to the cooler one, as expected, so the entropy goes up whether you like it or not.

Because James' comment shows that people, including frequent visitors of physics blogs, still haven't noticed that such devices cannot work, I decided to write one more blog entry of the sort and add some interesting related ideas that haven't been written on TRF yet.



Maxwell's daemon is a hypothetical agent or device that operates inside a thermodynamic system – in this case a vessel divided into two parts – and it does something intelligent in order to encourage processes that "naturally" occur in the opposite direction only.

For example, the daemon may open the door whenever a faster, hotter molecule is coming from the left, so that it gets to the right, and it may similarly encourage slower, cooler molecules to be concentrated in the left part. Or it may just try to concentrate all the gas molecules in one part. Or sort them in another way.

Whatever details we choose, the point is that the daemon is reducing the entropy of the gas (or another object). It is effectively able to increase temperature differences between the two parts of the vessel even though in Nature, temperatures tend to get more uniform as time goes by. Or the daemon may do something else that doesn't seem to happen naturally.

If such a daemon were possible, the advantage would be clear. We could construct the perpetual motion machine of the second kind because we could increase the temperature differences between the two parts of the vessel and use a part of the difference to do mechanical work. Well, in the case of the daemon that just spontaneously concentrates all the molecules in one part, we would seemingly construct the perpetual motion machine of the first kind because the pressure difference could be used to do mechanical work "immediately". However, if you look carefully, it would still be the perpetual motion machine of the second kind because the temperature of the gas would go down as you would be extracting work out of it.

Now, can the daemon exist? You may say that Maxwell's daemon is a metaphor for the government. So the people who are deluded and believe that the government may "social-engineer" things that work more effectively than the free market, may also believe that Maxwell's daemon that violates the second law may be produced and launched. Well, it cannot, those of us who understand the basic laws of thermodynamics and economics know.

But for a while, Maxwell's daemon may have been viewed as another "giant" who may perhaps "beat" the laws of thermodynamics. It wasn't quite clear who would win: the laws of thermodynamics, or Maxwell's proposed cleverness? Note that Maxwell designed the thought experiment as an example of a new effect that becomes possible when we replace the approximate laws of thermodynamics by their precise microscopic realization in terms of statistical physics. Such improvements of the foundations of physics often lead to new possibilities, so Maxwell's daemon could have been possible, couldn't it?

Leo Szilárd published the first article saying "No, the daemon won't work" in 1929. The paper said many nice things that were partly right. Equally importantly, the connection between the entropy and information appeared very clearly in that paper. He was able to say that doing something with the information of 1 bit was changing some entropy (or some part of it) by\[

S = k\ln 2.

\] The entropy equals Boltzmann's constant multiplied by the natural logarithm of two (one bit equals \(\ln 2\approx 0.69\) nats). Also, he would figure out that some operations doing something with one bit created or moved energy or heat \(\Delta E = \pm kT\ln 2\) somewhere. I am being deliberately vague here because at this vague level, Szilárd was right.

(Did you know that Szilárd had mastered the method of getting grants by blackmail – threatening that he would otherwise publish a paper on how to build your own nuclear bomb? He was a proponent of bribing politicians to improve the world, too.)

The exact details about the moment when the entropy increases and guarantees that the total entropy can't go down were slightly wrong in his paper. He essentially believed that the entropy associated with the daemon (=the expenses of the government) had to increase when the information about the molecules was being accumulated.

These days, it seems that a different accounting of the entropy increase is much more convincing. The more precise explanation what's going on emerged as Landauer's principle in the 1961 paper by Rolf Landauer of IBM. See also a 1981 paper by Charles Bennett or newer lectures by John Preskill for some modern comments.

Landauer realized that the entropy measures the information about the microscopic arrangement that has been lost, that has become inaccessible. And this part of the information is actually not increasing when a computer accumulates the information about the molecules in the vessel. Instead, it is being lost when the computer erases its memory which it needs to do at some point before it accumulates new data – at least assuming that the computer's memory is finite.

(If the computer has an insanely high or "infinite" memory, one may assign its state an entropy in such a way that the second law will continue to hold even though the memory never has to be erased. At that moment, the validity of the second law may look vacuous or convention-dependent but that only occurs if we assume some unphysical assumptions.)

It's funny to look at the computers' minimal power consumption dictated by Landauer's principle. If your fast computer (well, one you may have in a few years) needs to erase one trillion bits per second, \(kT\ln 2\) per bit will tell you that the minimal consumption of such a computer at room temperature is 2.85 nanowatts. Clearly, the actual microprocessors and memory chips still consume many orders of magnitude more energy than that – which is why it makes no practical sense to try to construct "reversible computers" that don't erase things and that could circumvent the Landauer bound. The experts say that such "reversible computation" is possible in principle. I have some modest doubts about it but I won't clarify them.
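If you want to verify the arithmetic, here's a short sketch under the stated assumptions (one trillion erased bits per second is my illustrative figure):

```python
import math

# Checking the numbers above – a sketch under the stated assumptions:
# Landauer's bound is k*T*ln(2) joules per erased bit.
k = 1.380649e-23        # Boltzmann constant [J/K]
T = 298.0               # room temperature [K]
rate = 1.0e12           # assumed erasure rate [bits per second]

energy_per_bit = k * T * math.log(2.0)
print(f"Landauer energy per erased bit: {energy_per_bit:.2e} J")      # ~2.85e-21 J
print(f"minimal power at {T:.0f} K: {energy_per_bit * rate * 1e9:.2f} nW")

# The bound scales linearly with T, which is why cooling helps in principle:
for T_cold in (77.0, 4.0):
    p_nw = k * T_cold * math.log(2.0) * rate * 1e9
    print(f"minimal power at {T_cold:>4.0f} K: {p_nw:.2f} nW")
```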

While the Landauer limit is much smaller a power consumption than what we seem to see in the real world, I feel that it may be necessary for the power consumption to be much higher than the Landauer limit if the computers are supposed to work flawlessly. For the bits of information inside the computer to behave as classical bits, I think it is necessary to copy them many times – in the sense of decoherence. Only when the information is copied many times, the classical-bit approximation becomes OK. For this reason, I would guess that the minimal consumption of reliable enough classical computers will always be greater than the Landauer minimum by at least an order of magnitude.

Nevertheless, we still have lots of room to reduce the power consumption. And one additional simple way to reduce \(kT\ln 2\) is to reduce the temperature \(T\). Highly cooled computers could consume less energy if the Landauer bound ever became relevant. Of course, this discussion isn't useful for your PC at home because the cooling systems needed to get this low would probably consume much more energy than your PC, so you wouldn't gain much.

In his belief that it's easy to "reverse any evolution" with the help of a computer and therefore to reduce entropy, James Gallagher makes a mistake isomorphic to the one made by the proponents of Keynesianism, socialism, communism, and related crackpot theories in economics. They overlook the expenses of the government and the inefficiency that the government itself brings to the system. Well, that's a pretty serious mistake.

Analogously, James thinks that the computer operates "for free" and doesn't create any entropy. But the point here is that the computer is a physical object, much like the government bureaucrats are people who still behave to maximize their utility function. When one does a proper analysis that includes the computer or the government offices, it becomes completely clear that Maxwell's daemon and the government simply cannot work to transform their utopia into reality.

James was thinking about a device that probably measures the positions and velocities – or the quantum state? – of all the molecules in the vessel and does a "global calculation" before it adjusts the motion of all the molecules. This may look more ambitious than or different from Maxwell's daemon but if it is looked at rationally, it's easy to figure out that it's just another realization of Maxwell's daemon, and a highly inefficient one. To reconstruct the state of a kilogram of a classical gas, one would need something like \(10^{26}\) bits of information – well, a big multiple of that because the velocities would have to be known exponentially accurately. But to reserve room in your PC's memory for this huge inflow of data, you would have to erase a huge number of bits and the entropy would go up dramatically as a result.
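A crude order-of-magnitude sketch of mine – with an absurdly generous "one bit per phase-space coordinate" – already shows how brutal the erasure cost is:

```python
import math

# A rough order-of-magnitude check of my own (the "1 bit per coordinate"
# floor is obviously a gross underestimate of the needed precision):
k = 1.380649e-23          # Boltzmann constant [J/K]
T = 300.0                 # room temperature [K]
N_A = 6.022e23            # Avogadro's number
molar_mass_air = 0.029    # kg per mole of air

molecules = (1.0 / molar_mass_air) * N_A        # ~2e25 molecules per kilogram
bits = molecules * 6                            # 6 phase-space coordinates, 1 bit each
print(f"molecules per kg of air: {molecules:.1e}")
print(f"bits to record (crude floor): {bits:.1e}")   # ~1e26, as quoted above

# Erasing the old memory to make room costs at least k*ln(2) of entropy per bit:
dS = bits * k * math.log(2.0)
print(f"entropy of erasure: >= {dS:.0f} J/K, i.e. heat >= {dS * T:.1e} J at 300 K")
```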

Some people still keep on suggesting that Maxwell's daemon could work. A notable example is this 1998 paper by John Earman and John Norton in which they claim that the debunking of Maxwell's daemon is based on circular reasoning. I find such texts deeply confusing. If you read the paper, they seem to deny the tight relationship between the entropy and information in general. This seems utterly indefensible to me.

In their "circularity" argumentation, they apparently criticize the "Maxwell's daemon won't work" arguments for their making assumptions about the behavior of the entropy of the daemon. Well, because Maxwell – and others – failed to present a precise model of the daemon's inner workings, the people who analyze this gadget must make some assumptions. In the particular models one may construct, it may be seen that the erasure of information does increase the entropy.

But even though it's impossible to "describe all the details" about the processes in a gadget – a general Maxwell's daemon – that no one has clearly defined (it should be a task for the proponents of the daemon, shouldn't it?), it's still true that we have proofs that the entropy is increasing that don't seem to depend on the existence or non-existence of computers, memory chips, and other devices at all. This simply means that these proofs have to apply to computers, too. A microprocessor is just another physical object. A memory chip is a physical object, too. And the fact that the entropy can't decrease in the thermodynamic limit may be proved quite generally, so it must apply to computers, too.

At least, without some convincing example showing a loophole, an example that hasn't been debunked yet, it seems silly to me not to consider the second law to be a general fact. Earman and Norton may try to criticize people with my opinion as people defending dogmas but it's a dogma analogous to \(2+2=4\) – it's demonstrably true although one may still be forced to work hard to debunk increasingly sophisticated attempts to prove that \(2+2\neq 4\).

To imagine that some "clever engines" may stand "above" the laws of physics is just a ludicrous religious belief, an elementary fallacy. It's a fallacy similar to the one that the planned economies and their governments may deny the general laws of economics. They cannot. People and even politicians are physical objects, too. Of course, if one envisions a divine daemon that doesn't have to be subject to the laws of physics, many things become possible. But in my opinion, the dreaming about such divinities doesn't belong to natural science. And because Earman's and Norton's argumentation is so analogous to the claims "science will never be able to prove the non-existence of supernatural beings and scientists are just bigots who have to assume the non-existence of supernatural beings and adjust their rules to defend their belief", I think that this argumentation isn't scientific, either.

Note that I have used this thermodynamics-economics analogy many times in this text. I actually believe that thermodynamics is the discipline of physics that is most analogous to economics. Some left-wing types who hate the free markets love to imagine that economics isn't a science in any sense. Well, it is a science and many of its statistical assertions, especially various inequalities, are closely analogous to various inequalities one may establish in thermodynamics.