Saturday 30 June 2012

U.K.: energy smart meters will monitor your sex habits

Marc Morano has pointed out the following interesting article in the Guardian:
Energy smart meters are a threat to privacy, says watchdog
By 2019, British citizens will be obliged to install new, "smart" devices to measure their energy consumption. The detailed consumption patterns will be evaluated locally and sent to an energy-saving official dedicated to your street at the utility's headquarters.



Note that the device allows you to keep the home temperature either at 16 °C or at 20 °C.

She or he will tell you whether you should take a shower or a bath and how much energy you save by having sex in the morning rather than in the evening, in your bedroom or in your living room. Based on the graphs from individual light bulbs and other devices in your apartment, she or he will recommend a new diet and a new partner to you, too.




The detailed data on each inhabitant of a house will be discussed at the daily meetings of the Green Party. The names of the most outrageous violators will be published in the media. So this is an answer to the question: What if the climate alarm is a big hoax and what if we create a "better world" for nothing? ;-)

More seriously, I am occasionally shocked by the kind of policies that are still being pursued somewhere. Where did this stuff come from? Who has approved it? We often think that we have already won, that the climate alarmists have been shown to be crooks and removed from the political process, and that it's only a matter of time before we arrest some of the most notorious alarmists who have actually been mining money out of this huge scam.

But we also frequently encounter some of their brainchildren that haven't been killed yet. Someone should do detailed research into all these time bombs that are waiting for us, all these insane policies recommended by unhinged Big Brother environmentalists that have already been initiated, that are waiting somewhere to be realized, and that will be realized unless we become more cautious.

It is no one else's business to decide whether you should have a shower or a bath, not to mention more intimate questions. Whether one consumes more energy than the other is just an issue to be considered by the person who is paying for the energy. And be sure that even without smart meters, people generally know that baths are more expensive than showers. (I personally know the consumption of all the electric devices in my apartment.) But unless we want to merge our countries with North Korea, such elementary things can't be outlawed.

The title of this blog entry was meant to be an eye-catcher. However, it is true that if someone is able to evaluate the detailed graphs of your electricity consumption in your apartment – perhaps with some more detailed descriptions – he or she will know a lot about what's happening in it. I do agree with the German consumers mentioned in the Guardian article that it's creepy. And there are people out there who are vastly more sensitive about privacy than I am.

Klaus vs Dallara on European citizenship

Four days ago, the Center for Economics and Politics (CEP), a Czech libertarian think tank, organized a seminar called The Eurozone Sovereign Debt Crisis: A Way Forward.

The host was none other than CEP's founder, Czech President Václav Klaus (who recently corrected some soccer newsmen – very aptly – and who was critical of the opening movie of the International Film Festival in Carlsbad that has just begun), and the other main star was Charles Dallara, a banker who has held very important positions in the administrations of Reagan and Bush Sr.



A 46-minute video from the event.

I have previously attended such CEP seminars, both as a regular guy in the audience and a panelist.

You may view the friendly but sharp disagreements between Dallara and Klaus as a rather apt characterization of the politically correct wing of the U.S. Republican Party.




Dallara's comments in the first 10 minutes are somewhat boring. I couldn't listen to all of it. But it was clearly all about the messianic complex at the European level – the need to send money to every government in Europe that may need it. Europe is one organism, a Gaia, and it must overcome all of its non-uniformities and act as a whole. Blah blah blah.

So you will surely prefer to jump to 10:00-17:00 and especially 37:45-43:40 where you can get a glimpse of Klaus the debater. He chastised Dallara and pointed out that Dallara could easily serve as a member of the European Commission – and believe me, that's a rather tough insult – because he is proposing all the things that are unacceptable for us – including a pancontinental collectivism and a European citizenship that doesn't really exist. We're talking about taxpayer money, and the taxpayers really, but really, don't want to send the money to Spain! Not even one oyro (Klaus emphasizes it's a German-run currency). ;-)

Klaus mentioned that he felt no solidarity with Spain or Finland or any other country in Europe – they're analogous to Malaysia in this respect. In fact, Klaus feels more compassion for Malaysia, especially because it was hurt by some International Monetary Fund policies a few years ago. ;-) Of course, I disagree with Klaus' suggestion that all the Greek troubles may be blamed on the EU – Greece is actually a self-sufficient European cradle of the culture of entitlement and many related things that are so wrong about contemporary Europe – but that's where my disagreements stop.

There are many people in the GOP and analogous parties in other Western countries who genuinely want to do good things. And a part of this disagreement boils down to the Americans' opinion that Europe is analogous to (equally unified as) America. However, at some point, the American Republicans should also think twice about whether they have abandoned all the basic values that used to define conservative parties such as the GOP. You know, I am primarily talking about personal freedom, about the protection of the citizens against robbery and despotic acts by an ever bigger government.

Via Czech Parliamentary Letters.

Stop squark rumors and R-parity violation

At the beginning of this year, Murray Gell-Mann as well as the rest of us were exposed to gossip about stop squarks seen at the LHC. In the four months that followed, no discovery was announced, some searches that could have produced the discovery were published and they came up empty, and no new comprehensible leaks were added.

However, I've been watching various phenomenology papers for suggestions that some people could know more about these things than others; and for possible proposed models for which the assumed validity of the gossip was a key pillar although the authors obviously didn't admit it. (I don't claim that the authors of any particular paper mentioned below know some detailed gossip!)

It's been getting increasingly likely from my perspective that if the gossip is true, the processes in which the stop squarks appeared were probably processes requiring R-parity-violating supersymmetry.




Just to remind you, the R-parity is a quantity that may be equal either to \(+1\) or \(-1\). In fact, it is equal to \(+1\) for all the known particles of the Standard Model and \(-1\) for all of their superpartners. One may calculate the R-parity by an explicit formula\[

P_R = (-1)^{2J+3B+L}.

\] Well, any odd coefficient such as \(\pm 3\) in front of \(L\) would do the same job and it could be more natural in particular models. You may check that the quantity above equals \(+1\) (the exponent is even) for all known particles: the fermions have an odd \(2J\) but they also have an odd \(3B+L\) because they're either quarks carrying a baryon number or leptons carrying a lepton number. And odd plus odd equals even. In the same way, all particles with an even \(2J\), namely bosons, carry vanishing lepton and baryon numbers so all the terms in the exponent are even.

The superpartner of a given particle has the same \(B,L\) but its \(2J\) differs by one so it is obvious that all superpartners of the Standard Model particles have a negative (odd) R-parity.
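Since the formula is so elementary, you may verify the previous two paragraphs mechanically. Here is a minimal Python sketch of mine (the particle list is just an illustrative sample, not anything from a paper):

```python
# A minimal sketch (my own illustration): evaluate P_R = (-1)^(2J+3B+L)
# for a few particles and their superpartners.
from fractions import Fraction

def r_parity(J, B, L):
    """R-parity computed from the spin J, baryon number B and lepton number L."""
    exponent = 2 * J + 3 * B + L
    assert exponent == int(exponent), "2J + 3B + L should be an integer"
    return (-1) ** int(exponent)

particles = {
    "electron":  (Fraction(1, 2), 0, 1),
    "up quark":  (Fraction(1, 2), Fraction(1, 3), 0),
    "photon":    (1, 0, 0),
    "Higgs":     (0, 0, 0),
    "selectron": (0, 0, 1),               # spin shifted by 1/2, same B, L
    "stop":      (0, Fraction(1, 3), 0),
    "gluino":    (Fraction(1, 2), 0, 0),
}

for name, (J, B, L) in particles.items():
    print(f"{name:10s} P_R = {r_parity(J, B, L):+d}")
# The Standard Model particles come out +1, the superpartners come out -1.
```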
Related, Frank Wilczek: The Nobel-winning co-father of QCD just wrote a newly updated guest blog for PBS where he explains the Higgs phenomena, says that God deserves the full credit or blame for them, and explains why he will be heartbroken if the LHC doesn't discover supersymmetry (Nature would turn out to have a very different taste than Frank Wilczek haha), among other things. ;-)
It has often been assumed that the R-parity is conserved. If it were so, the lightest particle with \(P_R=-1\), i.e. the lightest superpartner (LSP), would be stable because the conservation laws for energy and for R-parity prevent it from decaying at all: there aren't any lighter particles with the same value of \(P_R\) and the same value is required. The light LSP could be a gravitino or a neutralino (either closer to a bino or to a neutral wino).

Such a particle (LSP) could therefore constitute the bulk of the dark matter! For the neutralino, the R-parity conservation is rather critical if it wants to be employed as a dark matter particle.

However, the R-parity is likely to be broken, at least a little bit. After all, the numbers \(B,L\) or their combinations aren't exactly conserved quantum numbers. Evaporating black holes (and probably lighter objects as well) must be able to change their values (the conservation laws can't be exact because there would have to be new long-range forces analogous to electromagnetism if the symmetries were exact but there aren't any because they would destroy the tests of the equivalence principle). So the R-parity, being an exponential of these charges, is rather likely to be non-conserved, too. Now, the question is whether the R-parity violation is strong or weak.

Again, people typically assume – or derive from a deeper starting point (but I don't understand any of these derivations myself) – that the R-parity violation must be so weak that it can't be seen by the present accelerators. For example, that's a proposition behind the models by Gordon Kane, Bobby Acharya, and others. The conservation of R-parity makes it easier to preserve the longevity of the proton. While the baryon and lepton number conservation laws are allowed to be violated, the R-parity still bans some "intermediate steps" involving virtual R-parity-odd particles and makes the decay harder.

However, if one allows R-parity to be strongly violated, one had better only allow R-parity-violating terms that also violate the baryon number but preserve the lepton number; or only terms that also violate the lepton number but preserve the baryon number. If both the lepton number and the baryon number were violated, the proton decay would almost certainly be rapid, in flagrant contradiction with the observations.

It's my impression that the baryon-number-violating R-parity-violating (BNV RPV) models have been gaining the upper hand in recent months. A cool feature is that the stop squarks' virtual activity could explain not only the stop squark gossip but also the Tevatron forward-backward asymmetry: a squark-like particle in a \(t\)-channel has always been my favorite explanation for the only recent "new physics" claim by the Tevatron that wasn't self-evidently nonsensical.

Now, let me scare you with six recent 2012 papers about related issues:
April: Arnold et al.
April: Perez
May: Dreiner et al.
May: Allanach et al.
June: Dupuis et al.
June: Brust et al.
Left-wing readers will appreciate that the recent 3 months were treated in a nice, egalitarian way; all the other months were put down in a gas chamber.

In particular, Allanach et al. and Dupuis et al. analyzed models with intermediate stop and sbottom squarks in the \(t\)-channel and they concluded that these effects could explain the large Tevatron forward-backward top-antitop asymmetry while preserving peace with the negative results of all other searches! Dupuis et al. was published after Allanach et al. but they claim to be better because they also consider the atomic parity violation constraint.

Brust et al. love the RPV BNV SUSY models – one of the few viable incarnations of weak-scale SUSY preserving naturalness, if we use their words. Note that R-parity violation explains the negative result in all the searches that rely on missing energy: there's not much missing energy in these decays because the LSP (missing energy) doesn't have to be emitted (the only reason why it has to be emitted otherwise is to conserve R-parity).

Also, several TRF articles such as this one discuss some 2-sigma bumps supporting the idea that the R-parity-violating supersymmetry exists in Nature.

One more reason leads me to believe that the gossip makes more sense with the R-parity violation. It was rather strong and specific – it almost looked like the folks could measure the mass of the stop although we weren't told what it was. This is only possible if the energies can be measured rather accurately i.e. if a big part of the stop energy isn't wasted for the LSP. But that's only possible if the stop squark violates the R-parity during its decay. So it's plausible that they just observe e.g. 2-top or 4-top events without missing energy in which the tops together with some other well-defined jets may be added up to sharply determine the stop mass.
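To make the last point concrete, here is a toy sketch of mine (not actual LHC analysis code, and the four-momenta below are invented numbers): in a fully visible decay, the stop mass would just be the invariant mass of the sum of the decay products' four-momenta.

```python
# A toy sketch (my own illustration): if the stop decays to fully visible
# objects, its mass is the invariant mass m^2 = (sum E)^2 - |sum p|^2
# of the decay products (c = 1 units, energies and momenta in GeV).
import math

def invariant_mass(four_momenta):
    """four_momenta: iterable of (E, px, py, pz) tuples."""
    E, px, py, pz = (sum(p[i] for p in four_momenta) for i in range(4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# a hypothetical "top + extra jet" system from one stop candidate:
decay_products = [(240.1, 60.0, -40.0, 150.0),   # reconstructed top quark
                  (166.0, -55.0, 45.0, 150.0)]   # additional light jet
print(f"candidate stop mass: {invariant_mass(decay_products):.1f} GeV")
```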

At any rate, the basic model that e.g. Allanach et al. use to explain the forward-backward asymmetry is very simple. They rely on one R-parity-violating, baryon-number-violating term in the superpotential only,\[

W = \frac{\lambda''_{313}}{2}\bar T_R \bar D_R \bar B_R

\] This term is able to induce various detailed interactions between the quarks and their superpartners. However, the only ones that matter for the forward-backward asymmetry are two identical vertices in the following story:
A right-handed down-quark moves to the right (inside a proton), its antiparticle moves to the left (in the Tevatron antiproton). Using the cubic vertex above, the down-quark changes to a right-handed top-quark moving roughly in the same direction – this transformation is accompanied by the emission of a right-handed sbottom antisquark (yes, it violates both \(P_R\) as well as \(B\), by one). This sbottom antisquark is absorbed by the right-handed down anti-quark for it to morph into a right-handed top anti-quark. So the top-quark tends to fly in the same direction as the down-quark (and therefore the proton); and similarly for the antiquarks (and antiproton).
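Just to unpack which component vertices the superpotential term provides (this is my schematic rewriting; the color contraction \(\epsilon^{abc}\), the spinor contractions and the order-one factors are suppressed, so don't trust the precise coefficients): the single term above generates scalar-quark-quark couplings of the form\[

\LL_{\rm int} \sim \lambda''_{313}\,\left( \tilde t_R^*\, d_R\, b_R + \tilde d_R^*\, t_R\, b_R + \tilde b_R^*\, t_R\, d_R \right) + {\rm h.c.},

\] and the story above uses the last one, the \(\tilde b_R\,t_R\,d_R\) coupling, twice.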
So it's just the exchange of a sbottom squark in the \(t\)-channel. What I am puzzled by is why the down-quark changes to a top-quark. Shouldn't the vertex change it to a top-antiquark? Just look at it carefully. This would produce the negative sign of the asymmetry! I just sent a question and/or correction to Ben Allanach.

Such a mistake would surely make the model less appealing (it sort of looks like Dupuis et al. don't have this bug and they claim to achieve similar results) but from a broader perspective, these models have many appealing features. However, if they were true, theorists would have to return to the drawing board when it comes to the explanation of the dark matter. All the hints in the direct searches and the gamma-ray lines would have to go away. At most, long-lived gravitinos would remain somewhat viable dark matter candidates.

Friday 29 June 2012

On the importance of conformal field theories

Scale-invariant theories may look like an overly special, measure-zero subset of all quantum field theories. But in the scheme of the world, they play a very important role which is why you should dedicate more than a measure-zero fraction of your thinking life to them.

In this text, I will review some of their basic properties, virtues, and applications.




When we look at Nature naively and superficially, its laws look scale-invariant. One may study a hamburger of a given weight. However, it seems that you may also triple the hamburger's weight and the physical phenomena will be mathematically isomorphic. Just some quantities have to be tripled.

(Sean Carroll has tried to revive the old idea of David Gross to sell particles to corporations in order to get funding for science; search for David Rockefeller quark in that paper. The Higgs boson could be sold to McDonald's, yielding a McDonald's boson. However, anti-corporate activist Sean Carroll has failed to notice that McDonald's actually deserves to own the particle for free because much like the God particle, McDonald's is what gives its consumers their mass.)

The scale invariance seems to work

For example, if you triple the radii of all planets, the masses will increase 27-fold if you keep the density fixed. To keep the apparent sizes constant, you must also triple the distances. The forces between these planets will increase by the factor of \(27\times 27 / 3^2 = 81\), accelerations by the factor of \(27/3^2=3\). If you realize that the centrifugal acceleration is \(r\omega^2\) and \(r\) was tripled, you may actually get the same angular frequency \(\omega\).
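If you want to see these factors fall out mechanically, here is a throwaway numerical check of mine (nothing beyond the arithmetic of the previous paragraph):

```python
# A throwaway check (my own) of the scaling argument above: triple the radii
# and distances at fixed density and watch the factors come out.
k = 3.0                       # linear scale factor
mass     = k**3               # M ~ rho * R^3             -> 27
force    = mass**2 / k**2     # F ~ M1 * M2 / d^2         -> 27*27/9 = 81
accel    = force / mass       # a = F / M                 -> 3
omega_sq = accel / k          # a = d * omega^2, d -> 3d  -> omega^2 unchanged
print(mass, force, accel, omega_sq)   # 27.0 81.0 3.0 1.0
```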

With different assumptions than a constant density and the prevailing gravitational force, you might be forced to scale \(\omega\) and times in a different way, and so forth.

But it's broken

However, when you look at the world more carefully and you uncover its microscopic and fundamental building blocks, and maybe even earlier than that, you will notice that no such scale invariance actually exists. A 27-times-heavier planet contains 27-times-more atoms; this integer is different so the two planets are definitely not mathematically isomorphic.

And atoms can't be expanded. Atoms of a given kind always have the same mass. They have the same radius (in the ground state). As the Universe expands, the atoms' size remains the same so the number of atoms that may "fit" into some region between galaxies is genuinely increasing. The atoms emit spectral lines with the same frequency and wavelength. After all, that's why we may define one second as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom. Even well-behaved tiny animated GIFs blink once in a second and no healthy computer should ever change the frequency. ;-)



And even Gulliver is a myth. More precisely – and the experts in literature may correct me – Gulliver is a normal man but the Lilliputians are a myth. ;-) You can't just scale the size of organisms. Such a change of size has consequences. Too small or too large copies of a mammal would have bones too weak to support the body, they would face too much air resistance, they couldn't effectively maintain a body temperature different from the environment's temperature, and so on, and so on.



LUMOback tells you when you slouch and when you're lazy. And these folks are able to use the LUMO trademark to collect tens of thousands of dollars for their project. ;-) Via Christian E.

The period of some atomic radiation is constant and may be measured very accurately which is why it's a convenient benchmark powering the atomic clocks – the cornerstone of our definition of a unit of time. However, it's not necessarily the most fundamental quantity with the units of time. In particle physics, the de Broglie wave associated with an important particle at rest yields a "somewhat" more fundamental frequency than the caesium atom. In quantum gravity, the Planck time may be the most natural unit of time.

Scale invariance in classical and quantum field theories

How do we find out that some laws of physics are scale-invariant? Well, there won't be any preferred length scale in the phenomena predicted by a theory if the fundamental equations don't have a preferred length scale. They must be free of explicit finite parameters with units of length; but they must also be free of parameters whose powers could be multiplied to obtain a finite result with the units of length.

For example, the Lagrangian density of Maxwell's electromagnetism is\[

\LL_{\rm Maxwell} = -\frac 14 F_{\mu\nu}F^{\mu\nu}.

\] The sign is determined by some physical constraints: the energy must be bounded from below. The factor of \(1/4\) is a convention. It is actually more convenient than if the factor were \(1\) but the conceptual difference is really small. What's important is that there are no dimensionful parameters. If you wrote electromagnetism with such parameters, e.g. with \(\epsilon_0\) and \(c\), you could get rid of them by a choice of units and rescaling of the fundamental fields. And whenever it's possible to get rid of the parameters in this way, the theory is scale-invariant.

When we deal with a scale-invariant theory, it doesn't mean that all objects are dimensionless. Quite on the contrary: most of the quantities are dimensionful. The scale invariance is actually needed if you want to be able to assign the units to quantities in a fully consistent way. When you have a theory with a characteristic length scale, i.e. a non-scale-invariant theory, you may express all distances in the units of the fundamental length unit (e.g. Planck length). All distances effectively become dimensionless and the dimensional analysis tells you nothing.

However, in a scale-invariant theory, you may assign the lengths and spatial coordinates the units of length, e.g. one meter. The partial derivatives \(\partial/\partial x\) will have the units of the inverse length. Because \(S\) must be dimensionless in a quantum theory – \(iS\) appears in the exponent in Feynman's path integral – it follows that the Lagrangian density has to have units of \({\rm length}^{-4}\). That's because \(S=\int\dd^4 x \,\LL\). I have implicitly switched to a quantum field theory now and set \(\hbar=c=1\) which still prevents us from making lengths or times (or, inversely, momenta and energy) dimensionless.

In the electromagnetic case, the units of \(\LL\) are \({\rm length}^{-4}={\rm mass}^4\) which means that \(F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu\) has to have the units of \({\rm mass}^2\). And because \(\partial_\mu\) has the units of \({\rm mass}\) i.e. \({\rm length}^{-1}\), we see that the same thing holds for \(A_\mu\). In this way, all degrees of freedom may be assigned unambiguous units of \({\rm mass}^\Delta\) where \(\Delta\) is the so-called dimension of the degree of freedom (a particularly widespread concept for quantum fields).
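This bookkeeping is easy to automate. The following tiny sketch of mine just applies the rule that the Lagrangian density carries \({\rm mass}^d\) to a kinetic term, which reproduces the classical dimensions quoted above:

```python
# A tiny sketch (my own): classical mass dimension of a field from its
# kinetic term in d spacetime dimensions. The action is dimensionless, so
# the Lagrangian density carries mass^d; each derivative carries mass^1.
def classical_dimension(n_derivatives, n_fields, d=4):
    """Dimension of one field appearing n_fields times in a kinetic term
    that contains n_derivatives derivatives."""
    return (d - n_derivatives) / n_fields

print(classical_dimension(2, 2))        # A_mu via F^2 or phi via (d phi)^2 -> 1.0
print(classical_dimension(1, 2))        # psi via psi-bar d-slash psi -> 1.5
print(classical_dimension(2, 2, d=2))   # a world-sheet scalar X^mu -> 0.0
```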

In classical field theory, the dimensions would always be rational – most typically, \(\Delta\) would be integer or, slightly less often, a half-integer. However, in quantum field theory, the dimension of operators often likes to be an irrational number. Whenever perturbation theory around a classical limit works, these irrational numbers may be written as the classical dimensions plus "quantum corrections to the dimension", also known as the anomalous dimensions. The leading contributions to such anomalous dimensions in QED are often proportional to the fine-structure constant, \(\alpha\approx 1/137.036\). This correction to the dimension may be calculated from some appropriate Feynman diagrams.

At any rate, all fields – including composite fields – may be assigned some well-defined dimensions \(\Delta\). If distances triple, the fields must be rescaled by \(3^{-\Delta}\).

Now, how many scale-invariant theories are there? Do we know some of them? Well, for the renormalizable theories such as the Standard Model, there is always a "natural" cousin or starting point, a quantum field theory that is classically scale-invariant. The actual full-fledged theory with mass scales is obtained by adding some mass terms and similar terms to the Lagrangian. It's easy to be explicit about what we mean. The classically scale-invariant theory is simply obtained by keeping only the kinetic terms – the terms with the derivatives – and erasing the mass terms as well as all other terms whose coefficients have units of a positive power of length.

It means that to get a scale-invariant theory, you keep \(F_{\mu\nu}F^{\mu\nu}\), \(\partial_\mu \phi\cdot \partial^\mu\phi\), \(\bar\psi\gamma^\mu \partial_\mu \psi\) etc. but you erase all other terms, especially \(m^2 A_\mu A^\mu\), \(m^2\phi^2\), \(m\bar\psi\psi\), and so on. Is there a justification why we can neglect those terms? Yes. We're effectively sending \(m\to 0\) i.e. we're assuming that \(m\) is negligible. Can \(m\) be negligible? It depends what you compare it with. It's negligible relative to much greater masses/energies. That's why the mass terms and similar terms may be neglected (when we study processes) at very high energies i.e. very short distances. That's where the kinetic terms are much more important than the mass terms.

I said that by omitting the mass terms, we only get "classically" scale-invariant theories. What does "classically" mean here? Well, such theories aren't necessarily scale-invariant at the quantum level. The mechanism that breaks the scale invariance of classically invariant theories is known as the dimensional transmutation. It has a similar origin as the anomalous dimensions mentioned above. Roughly speaking, the Lagrangian density of QCD, \(-{\rm Tr}(F_{\mu\nu} F^{\mu\nu})/2g^2\), no longer has the units of \({\rm mass}^4\) but slightly different units, so a dimensionful parameter \(M^{\delta\Delta}\) balancing the anomalous dimension has to be added in front of it. In this way, the previously dimensionless coefficient \(1/2g^2\) that defined mathematically inequivalent theories is transformed into a dimensionful parameter \(M\) – which is the QCD scale in the QCD case – and the rescaling of the coefficient may be emulated by a change of the energy scale. So different values of the coefficient are mathematically equivalent after the dimensional transmutation because the modified physics may be "undone" by a change of the energy scale of the processes.
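To make the dimensional transmutation slightly more tangible, let me add the standard one-loop illustration (quoted from textbooks in one common convention, not derived here). The running of the QCD coupling \(g(\mu)\),\[

\mu\frac{\dd g}{\dd \mu} = -\frac{b_0\, g^3}{16\pi^2}, \qquad b_0 = 11 - \frac{2 n_f}{3},

\] may be traded for the \(\mu\)-independent, dimensionful scale\[

\Lambda_{\rm QCD} = \mu\,\exp\left(-\frac{8\pi^2}{b_0\, g^2(\mu)}\right),

\] so the dimensionless number \(g\) at a reference scale is repackaged as a mass scale: changing the coupling just shifts \(\Lambda_{\rm QCD}\), exactly the equivalence-up-to-rescaling described in the previous paragraph.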

Ability of fixed points to be unique up to a few parameters

In the previous paragraphs I discussed a method to obtain a scale-invariant quantum field theory by erasing all the mass terms in an arbitrary quantum field theory. This procedure is actually more fundamental than how it may look. The scale-invariant theory is a legitimate fundamental starting point to obtain almost any important quantum field theory we know.

The reason is that the ultimate short-distance limit of a generic consistent quantum field theory has to be scale-invariant. When we study energies higher than all typical energy scales in a quantum field theory, all these energy scales associated with the given scale-non-invariant theory may be neglected and we're bound to get a scale-invariant theory in the limit. Such a limiting short-distance theory is known as the UV fixed point. The adjective UV, or ultraviolet, means that we talk about short-distance physics. The term "fixed point" means that it is a scale-invariant theory: it is invariant (fixed) under the scaling of the distances (deriving longer-distance physics from short-distance physics) which is the basic operation we do in the so-called Renormalization Group, a modern framework to classify quantum field theories and their deviations from scale invariance in particular.

A funny thing is that scale-invariant theories are very restricted. For example, the \(\NNN=4\) gauge theory is unique for a chosen gauge group (up to the coupling constant and the \(\theta\)-angle). Its six-dimensional ancestor, the (2,0) superconformal field theory, is also scale-invariant and it is completely unique for chosen discrete data.

In other cases, the space of possible scale-invariant field theories is described by a small number of parameters. Even when we study the deformations of these theories that don't break the renormalizability, we only add a relatively small number of parameters. The condition that an arbitrary quantum field theory is renormalizable (and consistent up to arbitrarily high energy scales) is pretty much equivalent to the claim that one may derive its ultimate short-distance limit which is a UV fixed point.

It's this existence of the scale-invariant short-distance limit that makes our quantum field theories such as the Standard Model predictive. We may prove that there's only a finite number of parameters that don't spoil the renormalizability i.e. the ability to extrapolate the theory up to arbitrarily short distances. And when we extrapolate the theory to vanishingly short distances, we inevitably obtain one of the rare scale-invariant theories which only depend on a small number of parameters (and some discrete data).

So the scale-invariant theories aren't just an interesting subclass of quantum field theories that differs from the rest; for each consistent scale-non-invariant quantum field theory, there exists an important scale-invariant theory, namely the short-distance limit of it. There is another scale-invariant theory for each quantum field theory, namely its ultimate long-distance limit. Both of these limits may be – and often are – non-interacting theories because the coupling constants of QCD-like theories may slowly diminish. Also, the long-distance limit, the infrared fixed point, is often a theory with no degrees of freedom: it is "empty". For example, QCD predicts the so-called "mass gap" – the claim that all of its particles that may exist in isolation have a finite mass (the mass can't be made zero or arbitrarily small). So if you only study particle modes that survive below a certain very low energy, you get none.

No doubts about that: scale-invariant theories are very important for a proper understanding of all quantum field theories. They also play a key role in string theory, at least for two very different reasons – perturbative string theory and the AdS/CFT correspondence. These roles will be discussed in the rest of this blog entry.

Scale-invariant vs conformal

Before I jump on these stringy issues, let me spend a few minutes with a subtle difference between two adjectives, "scale-invariant" and "conformal". A scale-invariant theory is one that has the following virtue: for every allowed object, process, or history (one that obeys the equations of motion or one that is sufficiently likely according to some probabilistic rules), it also allows objects, processes, and histories that differ from the original one by a scaling (or these processes are equally likely).



A conformal i.e. angle-preserving map. In two Euclidean dimensions, it's equivalently a (locally) holomorphic function of a complex variable. Note that all the intersections have 90-degree internal angles.

The adjective "conformal" is a priori more constraining: for a given objects, processes, and histories, a conformal theory must also guarantee that all processes, objects, and histories that differ from the original one by any (possibly nonlinear) transformation of space(time) that preserves the angles (between lines, measured locally) – any conformal map – must also be allowed. Conformality is a stronger condition because scaling is one of the transformations that clearly preserve angles but there are many more transformations like that.

While conformality is a priori a stronger condition than scale invariance, it turns out – and one can prove – that given some very mild additional assumptions, every scale invariant theory is automatically conformally invariant. I won't be proving it here but it's intuitively plausible if you look what the theory implies for small regions of spacetime. In a small region of spacetime (e.g. in the small squares in the picture above), every conformal transformation reduces to a composition of a rotation (or Lorentz transformation) and a scaling. So if these transformations are symmetries of the theory and if the theory is local, it must also allow you to perform transformations that look "locally allowed" (locally, they are rotations combined with scaling) – it must allow conformal symmetries.

Now, what is the group of conformal transformations in a \(d\)-dimensional space? Start with the Euclidean one. The group of rotations is \(SO(d)\). What about the conformal group, the group of transformations preserving the angles? One may show that transformations such as \(r\to 1/r\), the inversion, preserve the angles. A conceptual way to see the whole group is the stereographic projection:


You may identify points in a \(d\)-dimensional flat space with points on a \(d\)-dimensional sphere – the boundary of a \((d+1)\)-dimensional ball – by the stereographic projection above. A funny thing you may easily prove is that this map preserves the angles. For example, if you project the Earth's surface from the North Pole to the plane tangent to the South Pole, you will get a map of the continents that will locally preserve all the angles (but not the areas!).

It follows that all the isometries of the sphere, \(SO(d+1)\), must generate some conformal maps of the plane. In fact, one may do the same thing with a projection of the plane from a hyperboloid or Lobachevsky plane – the sphere continued to a different signature. For this reason, the conformal group must contain both \(SO(d+1)\) and \(SO(d,1)\) as subgroups: it must be at least \(SO(d+1,1)\). One may show that there are no other transformations, either.

To get the conformal group, you write the rotational group as \(SO(m,n)\) and add one to each of the two numbers to get \(SO(m+1,n+1)\). Because the Minkowski space is a continuation of the sphere, a continuation of the procedures above proves that the same thing holds for the conformal group of a \(d\)-dimensional spacetime. While the Lorentz group is \(SO(d-1,1)\), the conformal group is \(SO(d,2)\). Yes, it has two time dimensions. This group \(SO(d,2)\) contains not only the Lorentz group but also the translations, scaling, and some extra transformations known as the special conformal transformations.
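A quick numerical sanity check of the angle-preservation claim may be reassuring. This is my own throwaway script (not anything from the text): the Jacobian \(J\) of the inversion \(x\to x/|x|^2\) should be a rotation or reflection times a common local scale factor, i.e. \(J^T J\) should be proportional to the identity matrix at every point.

```python
# A throwaway check (my own): the inversion x -> x/|x|^2 is conformal,
# i.e. its Jacobian J satisfies J^T J = lambda(x) * identity.
import numpy as np

def inversion(x):
    return x / np.dot(x, x)

def numerical_jacobian(f, x, eps=1e-6):
    d = len(x)
    J = np.zeros((d, d))
    for j in range(d):
        dx = np.zeros(d)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

x = np.array([0.7, -1.3, 2.1])              # an arbitrary point of R^3
J = numerical_jacobian(inversion, x)
JTJ = J.T @ J
scale = JTJ[0, 0]
print(np.allclose(JTJ, scale * np.eye(3)))  # True: angles are preserved
print(scale, 1.0 / np.dot(x, x) ** 2)       # the local factor is |x|^(-4)
```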

Role of conformal field theories in AdS/CFT

The group \(SO(d,2)\) is the conformal group of a \(d\)-dimensional Minkowski space. However, it's also the Lorentz symmetry of a \((d+2)\)-dimensional spacetime with two timelike dimensions. In fact, we don't need the whole \((d+2)\)-dimensional spacetime. It's enough to consider all of its points with a fixed value of\[

x_\mu x^\mu = \pm R^2,\quad x^\mu \in \RR^{d+2}.

\] For a properly chosen sign on the right hand side (correlated with the convention for your signature), this is a "hyperboloid" whose induced metric has a signature with \(d\) spatial dimensions and \(1\) time dimension. This hyperboloid is nothing else than the anti de Sitter (AdS) space, namely the \((d+1)\)-dimensional one.
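For concreteness (this is the standard textbook parametrization, not something derived in this post): one convenient way to solve the constraint covers the so-called Poincaré patch of the anti de Sitter space, where the induced metric takes the form\[

\dd s^2 = \frac{R^2}{z^2}\left( \dd z^2 + \eta_{\mu\nu}\,\dd x^\mu\, \dd x^\nu \right), \qquad z>0,

\] with \(x^\mu\) running over the \(d\) coordinates of the boundary Minkowski space; the conformal boundary at \(z\to 0\) is where the dual conformal field theory may be thought of as living.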

So for every conformal transformation on the \(d\)-dimensional flat space, there is an isometry of the \(AdS_{d+1}\) anti de Sitter space. If we start with a conformal theory in \(d\) dimensions, there could also be a theory on \(AdS_{d+1}\) that is invariant under the isometries of this anti de Sitter space. Juan Maldacena was famously able to realize this thing – during his research of thermodynamics of black holes and black branes in string theory – and find strong evidence in favor of his correspondence.

For every (non-gravitational, renormalizable, healthy) conformal (quantum) field theory in a flat space, there exists a consistent quantum gravitational (and therefore non-scale-invariant) theory (i.e. a vacuum of string/M-theory) living in a curved spacetime with one extra dimension, the \((d+1)\)-dimensional anti de Sitter space, and vice versa. I decided in advance to keep this section short. Although the AdS/CFT is arguably the most important development in theoretical physics of the last 15 years, it has already been given enough space and I realize that if I started to explain various aspects of this map, we could easily end up with a document that has 261 pages, much like the AdS/CFT Bible by OGOMA, not to be confused with the author of a footnote in a paper about the curvature of the constitutional space. His name was OBAMA. ;-)

The AdS/CFT correspondence is important because it makes the holography in quantum gravity manifest, at least for the AdS spacetimes. Also, it allows us to define previously vague and mysterious theories of quantum gravity in terms of seemingly less mysterious non-gravitational (and conformal) quantum field theories. Well, it also allows us to study complex phenomena in hot environments predicted by conformal field theories in terms of more penetrable – and essentially classical – general relativity in a higher-dimensional space. Complex phenomena including low-energy physics heroes such as the quark-gluon plasma, superconductors, Fermi liquids, non-Fermi liquids, and Navier-Stokes fluids may be studied in terms of simple black holes in a higher-dimensional curved spacetime.

Role of 2D conformal field theories in perturbative string theory

Because of the chronology of the history of physics, it would have been logical to discuss the role of conformal field theories in perturbative string theory before the AdS/CFT. I chose the ordering I chose because the AdS/CFT is closer to the "more field-theoretical" aspects of string theory than the world sheet CFTs underlying perturbative string theory – something that is as intrinsically stringy as you can get. That's why the discussion of two-dimensional CFTs was finally localized in the last section of this blog entry.

The textbooks of string theory typically tell you that strings generalize point-like particles. While point-like particles have one-dimensional world lines in the spacetime, strings analogously leave two-dimensional world sheets as the histories of their motion in spacetime.

If you want point-like particles to interact with each other, you need "vertices" of Feynman diagrams; you need points on the world lines from where more than two external world lines depart. Such a vertex is a singular point and these singularities on the world lines – the vertices in the Feynman diagrams themselves – are the ultimate cause of the violent short-distance behavior of point-like-particle-based quantum field theories, especially if there are too many vertices in a diagram and if they have too many external lines.



An idea to defeat this sick short-distance behavior is to have higher-dimensional extended elementary building blocks. In that case, you may construct "smooth versions" of the Feynman diagrams on the left – but the "smooth versions" have the property that they're locally made of the same smooth higher-dimensional surface. The pants diagram on the right side of the picture – the two-dimensional world sheet depicted by this illustration – has no singular points.

However, there's a catch: if the fundamental building blocks are extended in more than one spatial dimension, so that their world volumes have more than two dimensions, the internal theory describing the world volumes themselves is a higher-dimensional theory and to describe interesting interacting higher-dimensional theories like that, you will need "something like quantum field theory" which will have a potentially pathological behavior that is analogous to the behavior of quantum field theories in ordinary higher-dimensional spacetimes.

Two dimensions of the world sheet is the unique compromise for which you may tame the short-distance behavior in the spacetime – because you don't need "singular" Feynman vertices and the histories are made of the same "smooth world sheet" everywhere; but the world sheet theory itself is under control, too, because it has a low enough dimension.

In fact, an even more accurate explanation of the uniqueness of two-dimensional world sheets is that you may get rid of the world volume gravity in that case. Imagine that you embed your higher-dimensional object into a spacetime by specifying coordinates \(X^\mu(\xi^i)\) where \(\xi^i\) [pronounce: "xi"] are \(d\) coordinates parameterizing the world line, world sheet, or world volume. With such functions, you may always calculate the induced metric on the world volume\[

h_{ij} = g_{\mu\nu} \partial_i X^\mu \partial_j X^\nu.

\] If you also calculate \[

ds^2 = h_{ij} d\xi^i d\xi^j

\] using the induced metric \(h_{ij}\) for some small interval on the world sheet \(d\xi^i\), you will get the same result as if you calculate it using the original spacetime metric \(g_{\mu\nu}\) with the corresponding changes of the coordinates \(dX^\mu = \partial_i X^\mu\cdot d\xi^i\). It's kind of a trivial statement; you may either add the partial derivatives while calculating \(dX^\mu\) from \(d\xi^i\) or while calculating \(h_{ij}\) from \(g_{\mu\nu}\).
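If you want to see the induced-metric formula in action, here is a small sympy sketch of mine. It has nothing to do with strings per se: it just embeds a round two-sphere in flat \(\RR^3\) and computes \(h_{ij}\) from the formula above.

```python
# A small sketch (my own illustration): compute the induced metric
# h_ij = g_mu_nu d_i X^mu d_j X^nu for a two-sphere of radius r
# embedded in flat three-dimensional Euclidean space.
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)
xi = [theta, phi]                                # world-sheet coordinates xi^i
X = sp.Matrix([r * sp.sin(theta) * sp.cos(phi),  # the embedding X^mu(xi)
               r * sp.sin(theta) * sp.sin(phi),
               r * sp.cos(theta)])
g = sp.eye(3)                                    # flat target-space metric

h = sp.zeros(2, 2)
for i in range(2):
    for j in range(2):
        h[i, j] = sp.simplify((X.diff(xi[i]).T * g * X.diff(xi[j]))[0])

print(h)   # Matrix([[r**2, 0], [0, r**2*sin(theta)**2]]), the round metric
```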

So even if you decide that the induced metric isn't a "fundamental degree of freedom" on the world volume, it's still there and if the shape of the string or brane is dynamical, this induced metric field is dynamical, too. You deal with a theory that has a dynamical geometry – in this sense, it is a theory of gravity. However, something really cool happens for two-dimensional world sheets: you may always reparametrize \(\xi^i\) by a change of the world sheet coordinates – given by two functions \(\xi^{\prime i} (\xi^j)\) – and bring the induced metric to the form\[

h_{ij} = e^{2\varphi(\xi^k)} \delta_{ij}

\] where \(\delta\) is the flat Euclidean metric; you may use \(\eta_{ij}\) in the case of the Minkowski signature, too. So up to a local rescaling, the metric is flat! It's easy to see why you can do such a thing. The two-dimensional metric tensor only has three independent components, \(h_{11},h_{12},h_{22}\) and the diffeomorphism depends on two functions so it allows you to remove two of the three components of the metric. The remaining one may be written as a scalar multiplying a chosen (predecided) metric such as the flat one. At least locally, i.e. for neighborhoods that are topologically disks, it is possible.

And if the world sheet theory is conformally invariant, it doesn't depend on the local rescaling. It doesn't depend on \(\varphi\), either. So every world sheet is conformally flat which means that as far as physics goes, it's flat! At least locally. There are obviously no problems with the quantization of gravity if the "spacetime", in this case the world sheet, is flat.

Many things simplify. For example, the Nambu-Goto action, generalizing the length of the world line of a point-like particle, is the proper area of the world sheet which is an integral of the square root of the determinant of the induced metric. This looks horribly non-polynomial etc. You may be scared of the idea to develop Feynman rules for such a theory. However, if you exploit the possibility to choose an auxiliary metric and make it flat by an appropriate diffeomorphism, the action actually reduces to a kindergarten quadratic Klein-Gordon action for the scalars \(X^\mu(\xi^i)\) propagating on the two-dimensional world sheet!
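In formulas (the standard textbook statement, quoted here rather than derived): the Nambu-Goto action is classically equivalent to a version with an auxiliary world-sheet metric, and once that metric is brought to the (conformally) flat form, one is left with free fields,\[

S_{\rm NG} = -T\int \dd^2\xi\,\sqrt{-\det h_{ij}} \quad\longrightarrow\quad S = -\frac{T}{2}\int \dd^2\xi\,\eta^{ij}\,\partial_i X^\mu\, \partial_j X_\mu,

\] supplemented by constraints (the vanishing of the world-sheet stress-energy tensor) that remember the equations of motion of the eliminated metric.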

That's really cool because the resulting modes of the string inevitably contain things like the graviton polarizations in the spacetime. So you may describe gravity in terms of things that are as simple and as controllable as free Klein-Gordon fields in a different space, the two-dimensional world sheet.

I said that the conformal symmetry for a \(d\)-dimensional space is \(SO(d+1,1)\) and/or \(SO(d,2)\), depending on the signature. But in two dimensions, this group is actually enhanced to something larger: the group of all angle-preserving transformations is infinite-dimensional (as long as you don't care whether the map is one-to-one globally and you only demand that it acts nicely on an open set of the disk topology). In the Euclidean signature, they're nothing else than all the holomorphic functions of a complex variable; in the case of the Lorentzian signature, one may redefine \(\xi^+\) to any function of it and similarly for \(\xi^-\), the other lightlike coordinate. The variables \(\xi^\pm\) are the continuations of \(z,\bar z\), the complex variable and its conjugate. It's not shocking that we may redefine the lightlike coordinates by any functions and independently: the angles in a two-dimensional Minkowski space are given as soon as you announce what the two null directions are; but the scaling of each of them is inconsequential for the (Lorentzian) angles i.e. rapidities.

The conformal symmetry plays an important role for the consistency and finiteness of string theory, at least in the (manifestly Lorentz-)covariant description of perturbative string theory. States of a single string are uniquely mapped to local operators on the world sheet (that's true even for higher-dimensional CFTs); the OPEs (operator product expansions) encode the couplings between triplets of string states; the loop diagrams are obtained as integrals over the possible shapes of the world sheet of a given topology (genus) and these shapes reduce to finite-dimensional conformal equivalence classes. So all the loop diagrams may be expressed as finite-dimensional integrals over manifolds – spaces of possible shapes of the world sheet – which are under control.

Again, this article doesn't want to go too deeply to either of these topics because I don't want to reproduce Joe Polchinski's book on string theory here. Instead, this blog entry was meant to provide you with a big-picture framework answering the question "Why do the conformal field theories get so much attention?".

I feel that this kind of question is often asked and there aren't too many answers to such questions in the standard education process etc.

And that's why I wrote this memo.

Modest Peter Higgs: yes, the boson is like Margaret Thatcher

Physics World offers us a 20-minute audio interview with Peter Higgs:
Peter Higgs in the spotlight
Among other things, he sensibly says that others may deserve to appear in the name for the Higgs mechanism but it's probably OK for the boson to be called after Higgs himself because he was the unique guy who promoted the boson's existence around 1964. He reviews some history involving a rejected paper etc.




At the end, he also endorses the analogy between the Higgs boson and Margaret Thatcher:



Margaret Thatcher appears in the room and things become heavy. Higgs just emphasizes that it is wrong to compare the process of acquiring mass to a syrup because the deceleration in a syrup is a dissipative process while the Higgs mechanism isn't.

If you don't know, the explanation of the Higgs boson as Margaret Thatcher was presented to the U.K. science minister in 1993 by David Miller from a university in London.



The Hunt for Higgs, BBC, 2012

By being so kind, Higgs is actually repaying a debt to the Fe Lady, a famous British chemist. Her defense of the LHC may have been crucial for the survival of the LHC project – and therefore for the looming discovery of the Higgs boson.
Margaret Thatcher was more circumspect when she wrong-footed sceptical Cabinet colleagues with her defence of public spending on the Large Hadron Collider. "Yes, but isn't it interesting?" was enough to stifle their objections. And her interest in the work at CERN was rewarded by Tim Berners-Lee establishing the groundwork for the World Wide Web. I've seen the original computer server with a note from Tim attached, instructing fellow scientists not to switch it off. Our lives have truly been revolutionised by his inventiveness.
Her soulmate Ronald Reagan initiated the SSC; however, that project had to continue through some more hostile years in the U.S. and it died.



Today, Google Czechia celebrates the birthday of Josef Ressel, the Czech-Austrian inventor of the propeller (and other things). He wasn't bad for a forest warden.

Thursday 28 June 2012

Why we should combine Higgs searches across experiments

Aidan Randle-Conde of the U.S. LHC blogs argues
Why we shouldn’t combine Higgs searches across experiments.
On July 4th, CMS and ATLAS will present the current state of their Higgs searches. There is no reliable information on whether or not one of these experiments (or both) will claim the discovery of the Higgs boson – at least a 5-sigma signal.



The data collected by each experiment can be estimated to be approximately enough for a 5-sigma discovery right now, so we will have to wait and see whether they have been "lucky" or not.

Some rumors coming from extremely unreliable places say that they have been lucky enough. Chances are not far from 50% that the rumors are right. However, if you believe that the 4.5-sigma combined excesses from 2011 show that the Higgs boson is there, it's guaranteed that there will be enough evidence to discover the Higgs boson in the combined charts that include the collisions recorded both inside the ATLAS detector and inside the CMS detector. I am confident that the combined significance level will be above 6 if not 7 sigma.

Aidan finds it inappropriate to combine the data. He argues as follows:
The next obvious step would be to combine the results from the two experiments and count the sigma. Despite being an obvious next step, this is the worst thing we could do at the moment. The Higgs field was postulated nearly 50 years ago, the LHC was proposed about 30 years ago, the experiments have been in design and development for about 20 years, and we’ve been taking data for about 18 months. Rushing to get a result a few weeks early is an act of impatience and frustration, and we should resist the temptation to get an answer now. Providing good quality physics results is more important than getting an answer we want.
From these paragraphs, assuming that Aidan is a rank-and-file member who must obey his bosses, we learn that ATLAS and CMS will not combine their charts on July 4th (although the two Tevatron experiments did exactly that when they officially discovered the top quark). However, I think that his explanations of why the two experiments' data shouldn't be combined are pure emotion. The quality isn't lowered in any way if one combines the results.




It's actually the duty – the only right attitude – for any scientist outside CMS and ATLAS to combine the results. Why? Simply because a fair and impartial scientist must take all the existing evidence for a theory or against a theory into account. In total, the LHC has accumulated about 20 inverse femtobarns of proton-proton collisions at 7 or 8 TeV – that translates to 1.4 quadrillion collisions – in 2011 and 2012. These collisions may be evaluated by two different teams, for various reasons I will discuss momentarily. But all of them are valuable data and no impartial scientist outside these collaborations may justify any kind of cherry-picking.
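As a sanity check of the quadrillion figure, here is my own back-of-the-envelope arithmetic; the total inelastic proton-proton cross section of roughly 70 millibarns is an input I am adding, not a number from the text above:

```python
# A rough sanity check (my own; sigma ~ 70 mb is an assumed input):
# number of collisions = integrated luminosity x cross section.
luminosity_fb_inv = 20.0           # ~20 inverse femtobarns in 2011+2012
sigma_mb = 70.0                    # assumed inelastic pp cross section
sigma_fb = sigma_mb * 1e12         # 1 mb = 10^12 fb
print(f"{luminosity_fb_inv * sigma_fb:.1e} collisions")   # ~1.4e+15
```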

Also, it's a red herring to say that we "want" the Higgs. I don't "want" the Higgs. I may be much happier if the LHC discovered the Motl fermion instead of the Higgs boson. I just see clear data showing that the Higgs boson is there and this certainty will probably become even more extreme on July 4th.

It's simply a fact that if ATLAS ended up with a 4.5-sigma signal and CMS had the same, we could still combine them in quadrature to get a 6-sigma or 7-sigma excess. One may reconstruct the number of events in each channel – counting both the collisions inside ATLAS and inside CMS – and see that the evidence against the null hypothesis (a Standard Model without any nearby Higgs boson) has reached a 6-sigma confidence level (or higher).
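The combination in quadrature is a one-liner; this is my own naive Gaussian sketch, which of course ignores correlated systematics and look-elsewhere subtleties:

```python
# A naive sketch (my own): two independent Gaussian significances combine
# in quadrature, which is the simplest meaning of "adding the sigmas".
import math

def combine_in_quadrature(*sigmas):
    return math.sqrt(sum(s * s for s in sigmas))

print(combine_in_quadrature(4.5, 4.5))   # ~6.4 sigma
print(combine_in_quadrature(5.0, 4.0))   # ~6.4 sigma
```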

It would be nothing else than a pure denial of scientific facts if someone rejected the claim that the higgsless hypothesis has been excluded with the certainty required by particle physics. One may offer lots of moralizing words but all of them are irrational gibberish. Including all the known evidence is the only honest and sufficiently accurate way to give the best possible answer at this moment to the question whether the experiments show that there is a Higgs boson near 125 GeV.
If we combine measurements from two different experiments we end up losing the vital crosscheck. The best way to proceed is two wait a few more weeks until both experiments can produce a 5 sigma discovery and see if these results agree.
This "crosscheck" comment is also irrational because if both experiments will show a near-5-sigma excess in similar channels (just imagine that none of them will get "there" separately), then the crosscheck will have been done and its result will have been "passed". There is no other "crosscheck" one needs to do when he determines that two experiments are saying pretty much the same thing about the existence of the Higgs boson. Waiting for another "crosscheck" and not knowing what this "crosscheck" is exactly supposed to be – except for seeing that the experiments agree within the error margins which will have been achieved – is nothing else than obstructionism, a nonsensical justification of the denial of one-half of the data.

Finally, I want to discuss the "experimental duality cult":
The reason we have two experiments at the LHC looking for the Higgs boson is because if one experiment makes a discovery then the other experiment can confirm or refute the discovery. This is why we have both D0 and CDF, both Belle and BaBar, both ATLAS and CMS, both UA2 and UA1 (where in the interest of fairness the orders the names are chosen at random.) Usually these pairs of experiments are neck and neck on any discovery or measurement, so when one experiment sees an effect but its counterpart doesn’t then it’s likely due to a problem with the analysis. Finding such a problem does not indicate poor scientific practice or incompetence, in fact it’s part of the scientific method to catch these little hiccups.
This is just plain bullshit. There is nothing essential for the scientific method and its quality about maintaining such redundancies. The actual reason why there are two major detectors at the LHC, ATLAS and CMS, is that the price tag of the LHC ring was nearly $10 billion while one big detector only cost $1 billion or so. So it would be a waste of space and an imbalanced investment if the $10 billion ring only harbored one detector.

Instead, people decided to build two detectors, to double the number of collisions. The remaining choice was whether the detectors would follow the same design (which would be a bit cheaper) or different designs. It just became fashionable sometime in the 1980s to have detectors of different designs on the same accelerator but there's nothing "vital" about this diversity, either. If the LHC hosted two ATLAS detectors or two CMS detectors instead of one ATLAS and one CMS, we would know pretty much the same thing today, with pretty much the same reliability and data quality.



A Time-Lapse Video Shows Creation of Giant Higgs Boson Mural: on the side of the ATLAS control room (2010)

It's a pure convention to build detectors according to two designs; and it's a pure convention to have two teams of physicists working on them. One could also have three or 47 detectors and 470 teams like that. But one could also live with one. In fact, most of the key discoveries in physics (and science) occurred without any such redundancy. There was no Rutherford II who would be discovering the nucleus; there was no Eddington II who would travel to a different forest to observe the 1919 solar eclipse; I could continue like that.

Before the 1980s, even particle physics worked without such redundancies. When SLAC was discovering quarks and QCD in the late 1960s (deep inelastic scattering), they weren't employing two competing groups of experimenters. And it just worked, too. One could argue that the redundancy of the experiments is just another manifestation of the reduced efficiency of the contemporary institutionalized science, an example showing how the money is wasted in the public sector, a demonstration that the Big Science also tends to behave in the same way as the Big Government and it tries to employ as many people as possible even though they're not necessary. The Big Government and bureaucracy – and the Big Science is unfortunately similar in many cases – is able to reduce the efficiency of the process to an arbitrarily low level.

All these people are employed – and someone responsible for that may think that he or she is "nice" because of that – but they inevitably end up somewhat frustrated because even when they discover the Higgs boson, and a similarly vital particle is only discovered a few times in a century, each of the ATLAS and CMS members will only "possess" 1/6,000 of such a discovery. She or he is an irrelevant cog in an unnecessarily large machine. Everyone knows that. The members know that, too. It's a price for someone's decision to give "jobs" to a maximum number of physicists. It's not the only price, of course.

Someone wants to double the number of the people who are getting salaries from the public sector? No problem, why don't we just double the number of otherwise equivalent experiments?

Concerning the inefficiencies that automatically come with Big Government, one could also argue that there exists a deliberate effort to delay and slow down the discoveries – it's implicit even in Aidan's text – so that the experimenters don't look "useless" in the following year or years (assuming that nothing else will be discovered, which is a huge assumption). But that's an unacceptable policy, too. If discoveries may be made quickly, it's just wrong to artificially slow them down. It's a bad testimony about the efficiency of the system if some people manifestly think that they will be rewarded if their work is extra slow.

Experiments may check each other but is this ability to check the other folks' work really worth the money? I would say it's not. CDF claimed crazy things and bumps and D0 later rejected them. But if D0 hadn't done it, sensible people would have thought that CDF made an error, anyway (I surely recognized all the bogus CDF papers as soon as they were published). And these claims would have been pretty much experimentally rejected by the LHC by today, anyway.

Incidentally, I also strongly disagree with Aidan's claim that "Finding such a problem does not indicate poor scientific practice or incompetence, in fact it’s part of the scientific method to catch these little hiccups." The numerous erroneous claims that CDF has made in recent years – "little hiccups" is an over-the-edge euphemism – surely do show that its research work was poor in recent years. If the redundancy and competition are good for anything, it's exactly for knowing that D0 did a better job than CDF on the 145 GeV bump and on some bizarre top-quark-related claims. In other words, CDF did a poor job, and if the errors boiled down to human errors, then the disproved observations by CDF demonstrate someone's incompetence, too. The scientific method indeed expects people to search for errors in their own and other people's work; but once such errors are found, it does say something about the competence, and all functional scientific institutions will inevitably reflect this competence or incompetence in hiring and funding decisions! This is a part of the separation of the wheat from the chaff that is so essential in science. If D0 is doing things in a better way than CDF, it's desirable to strengthen "things and people working in the D0 way" and suppress "things and people working in the CDF way". Saying otherwise is a denial of good management, of natural selection, and of almost all the mechanisms that improve society in the long run.

One may also say that the redundancy is neither sufficient for a reliable crosscheck nor useful when the crosscheck fails. Even if there are two experiments, they may make an error which is especially likely if all the experimenters are affected by the same lore and habits (or the detectors are plagued by similar design flaws). And on the contrary, if one experiment debunks another, we really don't know who is right. So strictly speaking, when D0 rejected the CDF claims about the 145 GeV Wjj bump, we should have said that it was a conflict and we don't know who is right so having two detectors didn't really help us to learn the truth. To decide who is probably right, we need some "theoretical expectations", anyway. D0 is right because it's much more likely for someone (CDF) to make an error that deviates from the Standard Model in a random way – there are many such potential errors – than for someone else (D0) to make the right unique error that is needed to exactly cancel Nature's deviation from the Standard Model. But with similar "theoretical expectations", I didn't really need D0 to know that CDF was wrong, anyway.
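
To put toy numbers on this comparison (my own guesses, purely for illustration, not measured quantities):

```python
# Toy comparison of the two scenarios behind a CDF-D0 disagreement.
# The probabilities below are invented for illustration only.
p_some_random_error = 0.05   # chance that an experiment produces some spurious bump
n_ways_to_deviate   = 100    # rough count of distinct "random" deviations from the SM

# Scenario A: the Standard Model is right and CDF produced one of the many possible spurious bumps.
p_A = p_some_random_error

# Scenario B: Nature really deviates and D0 made precisely the one error that cancels the deviation.
p_B = p_some_random_error / n_ways_to_deviate

print(f"odds that CDF (and not Nature) is the culprit: roughly {p_A / p_B:.0f} to 1")
```

With numbers like these, the prior pointing at CDF is overwhelming even before D0 says anything – which is the sense in which the second detector wasn't really needed to settle the question.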

The ongoing discussion about the "advantages of the competition" is much more general. Some people want to social-engineer competition in the markets. Whenever a company is successful and large or dominant in some business, they want to hurt it, cut it, destroy it, subsidize its competitors, and so on. The idea is that such an intervention improves the products and consumers will benefit etc. However, there is no valid general rule of this sort. In some cases, one company is simply doing a better job than others and that's why it's dominant. This dominant company knows the best recipe to transform an invested dollar to a profit in the future. So any different allocation than the allocation that makes it dominant is misallocation; by hurting the dominant company, it also reduces the profitability and efficiency of the system as a whole.

What makes capitalism superior isn't "forced competition" that is as diverse as possible; what makes capitalism superior is the "freedom for the competition to appear". But if the conditions are such that there is no competition and things can be produced more efficiently by one company which makes everyone satisfied, then this is how it should be (at least temporarily; the market decides how long this situation may last).

Imagine that you fund N otherwise (nearly) equivalent experiments and ask for which positive value of N you get the optimal ratio of "the value of the scientific information you get from the experiments" over "the price of the experiments". A simple question. Aidan clearly wants to suggest – with all the comments about "good quality" and "crosschecks" – that the best value is N=2. But that's rubbish, of course. The law of diminishing marginal returns implies that the optimum value of N is N=1. What the additional experiment gives you is inevitably less valuable than what the first one could have given you. In most cases, the experiments simply overlap so they must share the "scientific profit".
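
Here is the argument in a toy form, with my own stand-in scalings: assume the scientific "value" of N equivalent detectors grows like the combined statistical significance, i.e. like \(\sqrt{N}\) (the collision counts simply add), while the price grows like N.

```python
import math

# Toy model of the N-experiments question. The scalings below (value ~ sqrt(N),
# price ~ N) are illustrative stand-ins, not actual LHC accounting.
def value(n):
    return math.sqrt(n)   # combined significance from N equivalent datasets

def price(n):
    return float(n)       # N detectors, N collaborations, N payrolls

for n in (1, 2, 3, 47):
    print(f"N = {n:2d}:  value/price = {value(n) / price(n):.3f}")
```

The ratio is \(1/\sqrt{N}\), so it is maximized at N=1, and every additional equivalent detector dilutes it further.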

To summarize, the redundancy and diversity of the experiments is grossly overrated. By convention, people chose to evaluate the LHC data in two separate teams. It's a perfectly legitimate although not necessarily economically optimal design – like the design of a car with 6 or 8 cylinders. But this randomly chosen design shouldn't be worshiped as a key ingredient of the scientific method. It's certainly not one.

While the ATLAS and CMS folks are supposed to work in isolation in order to check each other at the very end of the research, it's still true that the LHC has accumulated a certain amount of experimental data – 1.4 quadrillion collisions – and throwing away some of the data would mean denying the experimental evidence. ATLAS and CMS employees may have internal rules not to look at the other experiment's data; that's just fine as an internal etiquette. But impartial scientists who are not constrained by similar internal rules just can't afford to ignore one-half of the collisions; or to pretend that the two sets of collisions don't affect the answers to the same questions (i.e. pretend that the data can't be combined).

They definitely do. Whenever a physicist wants to know what the LHC is telling us today, he must combine all the collisions.

And that's the memo.

Wednesday 27 June 2012

CMS leak: Higgs cross section will be announced with a 30% error


A member of the CMS collaboration has leaked the information that on July 4th (see the counter in the sidebar), the discovery of the Higgs boson will be accompanied by the information that the cross section for its production will have been measured with a 30% error margin.



While this error may look large, it doesn't mean that the significance of the discovery is only 3 sigma. In reality, the large uncertainty of the cross section may be partly if not mostly explained by the nonzero cross section of the background events, its uncertainty, and the uncertainty of the overall luminosity of the LHC. One may roughly estimate that 30% of the Higgs cross section (which is comparable to 20 picobarns) is less than 20% of the total cross section for the Higgs plus the background events, which is why one is more than 5-sigma certain that the Higgs contribution is nonzero.
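
To see how a roughly 30% cross-section error can coexist with a 5+ sigma discovery, here is a toy calculation. The event counts and systematics below are numbers I made up for illustration; the actual CMS yields are different and channel-dependent.

```python
import math

# Toy illustration: hypothetical signal/background yields and systematics.
s, b = 200.0, 1000.0            # assumed Higgs signal and background event counts
lumi_sys, bkg_sys = 0.04, 0.24  # assumed relative systematics (luminosity, background model)

# Median discovery significance (Asimov formula); for simplicity, the systematics
# above are treated as affecting only the cross-section extraction, not this number.
Z = math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Relative error on the extracted signal cross section: statistics plus systematics in quadrature.
stat = math.sqrt(s + b) / s
total = math.sqrt(stat**2 + lumi_sys**2 + bkg_sys**2)

print(f"discovery significance ~ {Z:.1f} sigma")       # about 6 sigma
print(f"cross-section error    ~ {100 * total:.0f}%")  # about 30%
```

The moral of the toy: the statistical part of the error alone would correspond to a much better than 3-sigma signal; the luminosity and background systematics inflate the cross-section error without eating the discovery significance.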




I won't tell you the last name of the person who leaked this information, in a pretty obvious way, because it could cause some trouble to Tommaso. :-) I think that he should get a confirmation from his psychiatrist that he just can't resist leaking these things – and then he should defend himself at CMS by claiming that they shouldn't discriminate against psychopaths.

BTW after some time, he changed the leaked number to a different one, 50%. When you're covering your tracks, you shouldn't make the cover-up more visible than the tracks themselves...

Murray Gell-Mann on testing superstring theory

A "Web of Stories" interviewer was under a visible influence of several crackpots who have polluted the science (and other) popular media in recent years so he asked Nobel laureate Murray Gell-Mann what he thought about the testing of superstring theory.



Murray Gell-Mann was obviously surprised by the question, thinking it was a strange one, and he enumerated some important successful postdictions (especially Einstein's equations derived from a deeper starting point) as well as some predictions waiting to be checked (SUSY: watch also Crucial tests of string theory).




He doesn't see anything wrong about superstring theory when it comes to testing. There's really no conceptual difference between the tests of this string theory and tests of older theories, e.g. those that Murray Gell-Mann became most famous for almost half a century ago.

In the interview, Gell-Mann has also defended the two main pillars of progress in high-energy physics – the theoretical extraction of the appropriate predictions; and the collider technology we must try to push to as high energies as possible.

In a different short video, Gell-Mann chastised Shelly Glashow for his hostile attitude towards string theory, which Gell-Mann cannot understand. Glashow is a lot of fun, as I know from many social events in Greater Boston, but I can't understand it, either, even after having used his former Harvard office for quite some time. ;-)

Gell-Mann also (partially) blames Glashow for the completely wrong idea that superstring theory can't be tested. One must carefully distinguish between direct tests of the "unified [Planckian] regime" and tests of the theory in general; these are completely different questions. Glashow's attitude is even more paradoxical given his key contributions to grand unification in Yang-Mills theory, which would also occur at (directly) inaccessible energies. Glashow is therefore a clear example of a person who lives in a wooden house and shouldn't be throwing termites. Why does he do it?

Gell-Mann and foundations of QM

In another video, Gell-Mann discusses Hugh Everett. Gell-Mann, Feynman, and others spent part of 1963-1964 thinking about the foundations of quantum mechanics, and their conclusions didn't really differ from the "consistent histories" approach that Gell-Mann (with Jim Hartle) has been contributing to in recent decades, since 1984-1985.

He credits John Wheeler himself with the concept of "many worlds", while Everett was primarily interested in solving problems, not specifically in quantum mechanics, so he wasn't really disappointed to spend the rest of his life solving military problems. At any rate, Gell-Mann didn't know about Everett; he and his collaborators reproduced some of the features independently. Like me, he thinks it doesn't make sense to say that there are "other worlds that are equally real as ours".

See many other videos in the list at the bottom of this page which contains literally hundreds of several-minutes-long interviews with Gell-Mann. There are of course more than 7,000 other stories recorded by 166 other famous people on that website, too. You may try e.g. James Lovelock on his being a green skeptic (story 15).

Hat tip: Paul Halpern and B. Chimp Yen



Spain vs Portugal

Portugal, our – Czechia's – quarterfinal conquerors, are playing the Euro semifinal against Spain as I am typing these words. I can't stop thinking of the Treaty of Tordesillas (1494). A Pope just wrote a one-page document and divided a planet, planet Earth, between two nations, Spain and Portugal, along a meridian that was chosen by randomly spinning a globe in his office. The paperwork used to be tolerable 500+ years ago. ;-)

In fact, they didn't talk about hemispheres: if you moved to the West from the demarcation meridian, it would always be Spanish; and a colony would be Portuguese anywhere to the East. Only decades later did the Spaniards realize the conflict and argue that there was also an anti-meridian that helped to divide the world into two hemispheres.

At any rate, the reason why Brazil speaks Portuguese and occupies the Eastern corner of South America boils down to the 1494 treaty. I was also intrigued by this 1502 Portuguese map, the Cantino Planisphere:



Note the accurate shape of Africa – it is much more accurate than some obscure islands such as Great Britain. ;-) As you go further, the accuracy goes down and the map resembles an area settled by mysterious dragons. In some sense, this map is exactly "in the middle" of the explorers' mapping process. I believe that our current picture of the landscape of string theory is somewhat analogous. Lots of things that are very accurate in our minds, lots of sketches that are off, some identifications that shouldn't be there and some missed identifications that should be there, a few missing continents such as Australia ;-), but it is clear that we already know "much more than nothing".

Microsoft Surface: serious about details of hardware

A few weeks ago, I downloaded Windows 8 Release Preview and tested it inside the VirtualBox: here's how.



It's a great Windows operating system. The newest feature is that there's an independent user interface (aside from the Windows desktop) behind everything – the Metro UI – in which you have dynamically updated square or rectangular tiles which you may drag back and forth in the way you know from iDevices. Clicking on them opens full-screen minimalistic iPad-style applications.




There's some kind of a tension between "serious users" of computers and those who want "intelligent gadgets" as tools for entertainment and ordinary communication. For the latter group, the iPhone-like user interface is more natural: it's primarily designed for stupid and lazy people, after all. These consumers represent most of mankind and the dominant market, and each of us belongs to this group at various moments. This user interface has lots of advantages for common situations – and one may be surprised to learn that a high percentage, or most, of the things we do with mobile gadgets are "ordinary".

However, sometimes you need to do serious work. You need – or at least someone else needs – all the functions you have known from the PCs. That's where things like the iPad may start to look inadequate – unless a perfect application has been created to do really everything you need to do. Of course, a gadget that can do everything that a computer can is preferred in such situations. Given the strength of their hardware, you may often find it illogical that the Apple devices don't allow you to do everything you expect from computers.

A week ago, Microsoft presented its Microsoft Surface tablet – not to be confused with the older table-sized Microsoft Surface, which is now known as Microsoft PixelSense. Because everyone has already absorbed the improvements in Windows 8, what they were showing was primarily the hardware. And it looks like a nontrivial piece of work.



In the 47-minute presentation above, Steve Ballmer reviews some of the history of hardware at Microsoft before handing over to the specialists (starting with Bruce Willis' twin brother, whose Microsoft Surface freezes, but he quickly picks up a new one). They invented the Microsoft mouse, the first truly commercially successful one, among many other things. And it began with bad ratings! What's fun about this Microsoft Surface tablet is the magnesium casing and cover (VaporMg), which is light and resilient, and various details: the integrated stand, the cover that also contains a keyboard, the magnetic hinges that attach it so that even the sound of the "click" is optimized, the user-friendliness-optimized ways to reconcile fingers with digital ink, and other things.

I think that many of you did notice that hardware is often suboptimal when it comes to various details. You must have been sure that things could have been much more clever or more convenient (despite a nearly identical price) if something had a different shape, was differently attached, was turned on or off automatically when you do something, and so on. It's good that some people are paid to seriously investigate such "details" before they release the product.

Of course, none of these things guarantees the commercial success. We will see whether Surface will share the fate of Windows which was a kind of success; or the fate of Zune, Zune MarketPlace, or KIN-one and KIN-two phones which didn't really make it when it came to the dollars. In the case of the KIN phones, the products survived for a few months only...

Tuesday 26 June 2012

Some totally healthy SUSY models

Matthew W. Cahill-Rowley, JoAnne L. Hewett, Ahmed Ismail, and Thomas G. Rizzo have looked at millions of phenomenological MSSM models (to be explained later) and they have found some "totally ordinary-looking" models which are not fine-tuned, which have light superpartners, and which are compatible with the 125 GeV Higgs boson as well as all published constraints on SUSY:
The Higgs Sector and Fine-Tuning in the pMSSM
Just to be sure, our ignorance about the precise way in which SUSY is broken in the MSSM may be quantified by 100+ parameters. There is a subspace of this parameter space that has 19-20 parameters, the phenomenological MSSM or pMSSM, in which we require no new sources of CP violation, minimal flavor violation, degenerate 1st and 2nd generation squarks and sleptons, and vanishing A-terms and Yukawa couplings for the first two generations.
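
To get a feel for the methodology, here is a toy caricature of such a scan. The parameter ranges, the "Higgs mass" formula, and the fine-tuning measure below are crude stand-ins I invented for illustration; the real analysis scans the full 19-20 dimensional space with proper spectrum calculators.

```python
import math, random

def random_point():
    # a hypothetical slice of the pMSSM parameter space (illustrative ranges, GeV)
    return {
        "mu":     random.uniform(100, 4000),   # higgsino mass parameter
        "tanb":   random.uniform(2, 60),       # ratio of Higgs vevs
        "m_stop": random.uniform(200, 4000),   # lighter stop mass
    }

def toy_higgs_mass(p):
    # stand-in formula: heavier stops and larger tan(beta) push m_h upward
    return 100.0 + 18.0 * math.log10(p["m_stop"] / 100.0) + 10.0 * (1.0 - 1.0 / p["tanb"])

def toy_fine_tuning(p):
    # stand-in for the Delta measure, dominated here by the mu-term, ~2(mu/m_Z)^2
    return 2.0 * (p["mu"] / 91.2) ** 2

points = [random_point() for _ in range(100000)]
survivors = [p for p in points
             if 123.0 < toy_higgs_mass(p) < 127.0 and toy_fine_tuning(p) < 100.0]
print(f"{len(survivors)} of {len(points)} toy points pass the Higgs-mass and Delta<100 cuts")
```

The real scan works the same way in spirit – sample points, compute the spectrum and observables, keep what survives – just with vastly more parameters and honest physics in place of my toy formulae.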

The LSP is the gravitino or a neutralino in these models. Incidentally, I've been thinking about the possibility that the LSP is charged, a stau, and we observe lots of "very heavy" hydrogen atoms (with the same spectrum) that have a stau or antistau in the nucleus which makes them hundreds of times heavier than the ordinary hydrogen atoms. Chemically, they're indistinguishable but their extra mass could replace dark matter, couldn't it? The main problem is that one would probably get too much of this stuff as the staus wouldn't be good enough in annihilation...
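
To put a number on "hundreds of times heavier": for a hypothetical stau mass of, say, \(200\,{\rm GeV}\) (my illustrative choice, not a value taken from the paper), such an atom would be roughly
\[ \frac{m_{\tilde\tau}+m_p}{m_p} \approx \frac{200+0.94}{0.94} \approx 214 \]
times heavier than ordinary hydrogen, while its electronic spectrum – set by the electron mass and the nuclear charge – would be essentially unchanged.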

At any rate, starting from page 36, they show the superpartner spectra of models that are not only compatible with all published measurements, including the 125 GeV Higgs boson, but that also have an unusually low amount of fine-tuning, \(\Delta\lt 100\). Here is the first one they show:



Click to zoom in.

They offer 12 other charts like that – and each of these models looks substantially different from others. Note that many superpartner masses are well below 1 TeV in these models.




In particular, the neutralino masses are near 100, 150, 250, and 3000 GeV: only the heaviest one is heavy. Both charginos are light, 100 and 250 GeV or so. The lighter stop and sbottom squarks are near 400 and 350 GeV, respectively. Their heavier cousins are near 1100 and 1600 GeV. Five slepton masses are also between 400 and 800 GeV while the gluino and the remaining squarks and sleptons are between 2400 and 3600 GeV.

The other 11 spectra they show differ in many important details. These models are compatible with the current LHC data, despite the light superpartners, for various reasons: generally, they just don't predict too much missing transverse energy, typically because a superpartner prefers to decay to other superpartners that are heavier than the LSP (though lighter than itself). This produces long decay chains instead of lots of missing transverse energy.

The last sentence of the paper brings an upbeat message:
Hopefully the LHC will discover both the light Higgs boson as well as the 3rd generation superpartners during its 8 TeV run.
Not bad, because the first condition is likely to be satisfied 7 days from now. What about the other one? Does the democracy between these two hopes suggest that they're aware of a 3rd generation rumor that is comparable in strength to the widely known July 4th Higgs rumor?

Of course, if you believe that one of these models is right, you may ask: Why would the correct point in the MSSM moduli space try to imitate the non-supersymmetric Standard Model as well as it apparently does? In other words, why would the right supersymmetric model be one of those that require the longest amount of time to be discovered by the LHC – which seems to be the case, kind of? That's a good question but I would like to emphasize that it is a question. There is no proof that there can't exist a good, intuitive, qualitative answer. There may very well be a good reason.

If you agree that a phenomenologist should try to reduce the amount of fine-tuning, each of these models is much more acceptable than the Standard Model, whose fine-tuning is extreme, despite the fact that one has to choose "the best model among 100,000 candidates" (which is not too much). On the other hand, if you think that this whole \(\Delta\)-counting and the naturalness arguments are wrong and you prefer minimality, the Standard Model is your gold standard.

If the LHC continues to find no new physics, the latter position – the Standard Model – will be getting stronger relative to the former (if nothing else happens that would change the odds, of course). However, this drift is extremely slow. Whenever you reduce the surviving fraction of the MSSM parameter space by another factor of 2, you are effectively finding something like new 1-sigma evidence against SUSY. However, at the same time, people may use completely different theoretical or experimental methods to find some new 2-sigma evidence in favor of SUSY. So the exclusion of mild "majorities" of the parameter space isn't too strong an argument.
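
A rough way to translate the "surviving fraction" into an effective number of sigmas is the usual \(\chi^2 = 2\ln({\rm likelihood~ratio})\) rule of thumb; this is my own back-of-the-envelope reading of the statement, not a rigorous Bayesian claim.

```python
import math

# Back-of-the-envelope: convert the factor by which the viable parameter space shrank
# into an "effective number of sigmas" via chi^2 ~ 2 ln(likelihood ratio). Heuristic only.
for factor in (2, 4, 10, 100):
    sigma = math.sqrt(2.0 * math.log(factor))
    print(f"parameter space cut by {factor:3d}x  ->  ~{sigma:.1f} sigma against the model")
```

Cutting the space in half is worth about 1.2 sigma; even a factor of 100 only buys you about 3 sigma, which is why the drift is so slow.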



BTW if you haven't had breakfast yet, it's very important for you to prepare it in a politically correct and mathematically correct way. Concerning the second condition, you have to cut the bagel in such a way that it is transformed into a pair of mutually linked tori. ;-) Instructions may be found in the video above. The essence of the procedure is familiar to those who have cut the Möbius strip in the middle and repeated the same process with the resulting non-Möbius, doubly twisted strip.

Climate skepticism is just being born in Brazil

Luis Dias from Brazil's former European headquarters kindly translates the essence of the video in the comments

Rio de Janeiro witnessed another preposterous environmentalist summit a few days ago. The only tangible outcome of Rio+20 was that Jesus Christ was forced to join the Green Party: click the picture for a BBC story. It was quite a good choice of venue, because when it comes to the climate panic, Brazil is one of the most brainwashed countries in the world, although the hysterics are doing very "well" in India, China, and France as well.

There is no clear proportionality between ignorance, poverty, and the influence of the pseudoscientific global warming doctrine. However, it shouldn't be hard for you to imagine "communication-challenged nations" that honestly depend on the inflow of U.N.-filtered and U.N.-fabricated news and that haven't even heard some basic facts about the climate and its natural change, that haven't even heard the statement that this whole panic is a pile of junk. I am pretty sure that this is true for most of the natives of Micronesia who are being constantly told that their islands will sink because of the Czech power plant's CO2 emissions.

But this description of Brazil may be changing.




Chris Mooney has just told us that he has one more reason to panic:
Climate Denial Hits Brazil (Think Progress, Climate; DeSmogBlog Demagog)
What he sees as the major turning point in the brainwashing of the people in Brazil is the following 30-minute May 2012 interview in which comedian Jo Soares (the old guy) talks to Prof Ricardo Augusto Felicio – the show resembles the Letterman show or Kraus' show in Czechia:



680,000 views (and rising) isn't bad for a video in Portuguese. The number suggests that a clear condensation core of the Brazilian climate realism is firmly in place.

He primarily studies the Antarctic climate. Incidentally, the melting rate of the Antarctic shelves was recently zero, thus proving that virtually all existing climate models describe the continent (and probably the remaining continents as well) inadequately.

Incidentally, Felicio has previously prepared the Portuguese translation of the Skeptics Handbook by a namesake of the TV host ;-), Jo Nova.

I've said it many times but we should care about the developing countries because the lack of adequate information about the climate is much worse over there – and these countries are really the ones that would suffer maximally if some global policies aiming to regulate CO2 were imposed. These countries are full of people who genuinely want or need to know some facts about the climate, as opposed to the shameless lies presented by the climate alarmists, but they're just not getting anything. Due to the lower penetration of the Internet and other media in these countries, those people may be kept – and are kept – in blissful ignorance of the basic numbers.

At any rate, you may want to read Mooney's rant to conclude that he is seriously worried; some people dared to watch the interview and Felicio is even allowed to teach some students at the University of São Paulo. Imagine the blasphemy! ;-)

Meanwhile, when it comes to the superficial criteria, I am convinced that the climate realists have a good public face in Brazil although I can't verify all Felicio's statements. That's because I don't speak Portuguese too well; so far, I have only authored one book in that language. ;-)

Speaking of Latin America and global warming: Shakira is planning to release a "global warming" album later this year, holy crap. A song she recorded with a rapper has been leaked.



I started with one "green" sculpture that's been modified; here's another one. Maybe the Americans will improve their famous statue, too. "Warming of the planet": via Angharad. ;-)

Feynman's 1986 Dirac Memorial Lecture

He talked about antiparticles, CPT, spin, and statistics



You may watch those 70 minutes. Feynman starts by saying that Dirac had been his hero, so he was honored to give a Dirac lecture. Dirac was a magician because he could guess the right equation – a new strategy for doing science.

However, Feynman also says that Dirac invented Zitterbewegung, which wasn't terribly useful. Well, not only that: Zitterbewegung is completely unphysical. And it wasn't invented by Dirac but by Schrödinger, as the German name of the non-effect indicates. ;-) Dirac wouldn't have made such an elementary mistake when it came to basic quantum mechanics.




However, Feynman quickly returns to the marriage of special relativity and quantum mechanics: antiparticles are essential for the union. He was going to focus on antiparticles and the Pauli exclusion principle.

Without particle-antiparticle production, the antisymmetry of the wave function would boil down to the initial state – God knows why it's the way it is – and the antisymmetry would just be preserved by the evolution. However, it gets more interesting because new particles may be produced and the wave function is still antisymmetric in them.

Feynman began to talk about amplitudes for processes with particles and antiparticles. Unfortunately, the transparencies are not too readable. Can you find a better quality video somewhere? Well, the camerawoman didn't look at the slides most of the time, anyway.

At any rate, he says that a function whose Fourier transform is composed of positive-frequency modes only is inevitably non-vanishing in every interval. You need negative frequencies as well if you want things to vanish in the past or outside light cones, which is needed for causality. The negative-energy objects have to be a part of physics in some way.
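
A sketch of the standard argument behind this claim, as I would write it: a purely positive-frequency function
\[ f(t) = \int_0^\infty d\omega\, g(\omega)\, e^{-i\omega t} \]
may be continued to complex \(t\) with \({\rm Im}\,t \lt 0\), where the integral converges even better, so \(f\) is the boundary value of a function holomorphic in the lower half-plane. Such a boundary value cannot vanish on any open interval of the real axis unless it vanishes identically. So if you want amplitudes that are strictly zero before some moment or outside the light cone, you are forced to include negative frequencies – and relativity then packages them as antiparticles.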

Virtual particles have to exist and one man's virtual particle is another man's virtual antiparticle, not to mention women's antiparticles :-), simply because the sign of the energy of a spacelike energy-momentum vector depends on the reference frame. So the antiparticles' properties are actually fully determined by the particles' properties because of relativity.
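
In formulas, the point is that for a spacelike four-momentum one has \(|E| \lt |\vec p|\), so a boost with velocity \(v\) along \(\vec p\) gives
\[ E' = \gamma\,(E - v\,|\vec p|), \]
which is negative for any observer with \(E/|\vec p| \lt v \lt 1\). Different observers therefore disagree about the sign of the energy carried by the exchanged virtual quantum, i.e. about whether it is a particle or an antiparticle.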

Feynman mentioned that he had never pronounced "probability" correctly because he didn't have the patience. LOL.

He explains why the fermions cancel diagrams to impose the Pauli exclusion principle, something that doesn't hold for bosons such as spin \(j=0\) particles (Feynman confusingly talks about photons with \(j=1\) just seconds earlier).

The Bose-Einstein statistics isn't hard to understand whenever we talk about oscillations. The Fermi-Dirac statistics may be more counterintuitive so of course some special attention is invested into explanations of the minus signs for fermions.

A segment of the lecture in which the (invisible) math formulae are important is dedicated to clarifying the Feynman propagator – why it's the right way to combine the features of the retarded and advanced propagators. Unfortunately, he's too modest to call it the "Feynman propagator", so you may have a problem determining what he's really talking about here. ;-)

The CPT-theorem is explained in the same way I explain it (we arrived at it independently). The CPT-operation is really just a rotation of spacetime. In the Minkowski space, there's also the mysterious nowhere land, the spacelike region, but after the continuation to the Euclidean space, the CPT-operation simply becomes a rotation by 180 degrees. So the world has to be invariant under it. (The C, charge conjugation, is an internal operation that is automatically included because if a particle goes backwards in time, as seen on the arrow of the 4-vector \(j^\mu\) – and it does go backwards if we perform T, the time reversal – then e.g. the charge \(\int\dd^3 x\,j^0\) reverses its sign.)
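
Written out, and using the usual Euclidean continuation \(t = -i t_E\): a rotation by \(\pi\) in the \((t_E,z)\) plane,
\[ (t_E, z) \to (t_E\cos\pi - z\sin\pi,\; t_E\sin\pi + z\cos\pi) = (-t_E, -z), \]
combined with an ordinary rotation by \(\pi\) in the \((x,y)\) plane, flips all four coordinates. Continued back to the Minkowski signature, this is the PT part of the operation (with C emerging as explained in the parenthesis above), and because it is continuously connected to the identity, a Lorentz-invariant theory cannot help being invariant under it.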

Around 29:00, he explains why fermions change the sign of their wave function under rotations by 360 degrees: \(\exp(im\phi)\) just gives you that. Intuition wouldn't be enough. At 30:00, he reviews Dirac's belt trick, a physical exercise showing that a rotation by 720 degrees is like doing nothing while a 360-degree rotation twists your arm. A huge round of applause follows.
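
For completeness, the formula he is quoting: a state with \(J_z\) eigenvalue \(m\) picks up the factor \(\exp(im\phi)\) under a rotation by \(\phi\) about the \(z\)-axis, so for half-integer \(m\)
\[ e^{im\cdot 2\pi} = -1, \qquad e^{im\cdot 4\pi} = +1: \]
a \(2\pi\) rotation flips the sign while a \(4\pi\) rotation – the belt trick – is equivalent to doing nothing.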

The rest of the lecture is about carefully tracing whether you did the rotation or not. Without clearly readable transparencies, the technical stuff may be a bit hard to follow. He calculates \(T^2\), which can change the state at most by a phase, but the phase may be nontrivial. There are differences for bosons and fermions...
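
The standard answer he is heading towards, if I read the (invisible) formulae correctly, is
\[ T^2 = (-1)^{2j}, \]
i.e. \(+1\) for integer spin and \(-1\) for half-integer spin – the minus sign being the origin of the Kramers degeneracy of fermionic states in time-reversal-invariant systems.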

At 57:50, Feynman discusses a permuted connection between two loops – attributed to a Mr Finkelstein – that could be instantly used as an explanation of your humble correspondent's matrix string theory. ;-)