Sunday 31 March 2013

Tom Banks: holographic axioms against firewalls

Bill Zajc sent me a link to the following fresh CERN talk by my (former) PhD adviser Tom Banks:
The Unruh Effect, the S-matrix and the Absence of Firewalls
It's the same kind of inimitable style I have known for years. Tom also uses (or approves of) many assumptions I consider right or even dear and he reaches various conclusions I agree with. In particular, there are no black hole firewalls. See his and Willy Fischler's paper about firewalls.

But otherwise the results are presented as corollaries of some much deeper wisdom that I've been exposed to since 1999. Although I added another hour to the exposure, the beef in the hypothetical deeper wisdom – Tom's axiomatic holography – remains utterly incomprehensible to me.




When I was listening to this talk, it made me organize my reasons why it seems impossible to me to make sense of these things. What are the reasons?

Tom claims to have a completely new starting point to understand quantum gravity, one that is more fundamental and more accurate than any alternative that is already known to us. In this setup, Hilbert spaces and operators may be associated with finite regions of spacetime, with causal diamonds – essentially intersections of the (filled) future light cone of one point and the (filled) past light cone of another one.
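
To fix the geometric jargon, here is a minimal sketch of mine – nothing but textbook Minkowski geometry, certainly not Tom's formalism – checking whether an event lies in such a causal diamond:

```python
import numpy as np

def interval2(x, y):
    """Squared Minkowski interval between events, signature (-,+,+,+)."""
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    return -d[0]**2 + np.dot(d[1:], d[1:])

def causally_precedes(x, y):
    """True if y lies in the (filled) future light cone of x."""
    return interval2(x, y) <= 0 and y[0] >= x[0]

def in_causal_diamond(event, p, q):
    """The diamond D(p,q): intersection of the future of p and the past of q."""
    return causally_precedes(p, event) and causally_precedes(event, q)

p, q = (0, 0, 0, 0), (2, 0, 0, 0)               # the two tips of the diamond
print(in_causal_diamond((1, 0.5, 0, 0), p, q))  # True: safely inside
print(in_causal_diamond((1, 1.5, 0, 0), p, q))  # False: spacelike from both tips
```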

This seems like a much more ambitious framework than the formulations of string theory we know. Those work with infinite spacetimes (flat Minkowski space or another space in perturbative string theory; flat Minkowski space times a simple compactification manifold in Matrix theory; AdS space times something in AdS/CFT) and the infinite extent seems essential for string theory's consistency. The boundary conditions at infinity are the players that allow us to attach the fields to their preferred values, choose the superselection sector, make something like the S-matrix scattering problem well-defined, and circumvent the fact that all local operators refuse to be gauge-invariant in a diffeomorphism-invariant theory (because the diffeomorphisms, i.e. gauge redundancies, change the coordinate location of such operators).




So I am surely not as confused about his statements as some people in the audience who think that Matrix theory "must" be a special case of Tom's axiomatic approach (they think so perhaps because Matrix theory is the insight that is most often associated with his name). It surely is not such a special case, because Matrix theory deals with the whole infinite space while Tom's axiomatic approach is all about our ability to describe finite causal diamonds separately.

But I am still confused about almost everything else. ;-)

The main problem is that despite Tom's repeated promises that he proposes a "new formalism", there doesn't seem to be any formalism at all. The papers don't really have any mathematical structures or equations or expressions of a new kind (and in most cases, not even those of the old kinds) that could allow a mathematically inclined reader to "smell" where the scent of the new perspective is coming from. At least I've never "smelled" this scent. A new formalism is exactly what seems to be completely absent!

An observation compatible with this absence of a formalism is that major conclusions – whether they seem "probably correct" or "probably wrong" – are never derived through mathematical steps from comprehensible, well-defined axioms or assumptions. Instead, when it comes to the claims that should be the interesting conclusions, we're usually told that "Tom agrees with someone".

Now, the picture seems to involve many assumptions that I consider correct but some assumptions that I consider incorrect. But what is most confusing about the collection is that we're never told why the particular combination of the assumptions is picked and what can be derived out of this collection that would convince us that Tom sees something that others don't. In other words, we're never shown – I think – that/why the set of assumptions is the "complete set" needed to achieve something interesting.

To mention some of Tom's starting points that are correct: he is surely a believer in the proper Copenhagen-like, intrinsically probabilistic interpretation of quantum mechanics. Much like your humble correspondent, he thinks that Niels Bohr and friends knew pretty much everything they should have known (including the "conceptual framework of decoherence" even though decoherence was only articulated explicitly in recent decades) and he knows why the "alternative interpretations" of quantum mechanics such as GRW and Bohm-like "realist" interpretations (and perhaps the many worlds of the realist varieties) are wrong. He also understands holography as a novel feature of quantum gravity that forces us to qualitatively deviate from some wisdoms we got used to in the context of effective quantum field theories. I have surely learned some things from him so some of the conclusions I am making might be secretly influenced by Tom's leadership.

Those things are great but they aren't new and the agreement seems to stop here.

It's partly because many technical assumptions are being added to the mix for reasons that seem totally unclear. For example, Hilbert spaces are being associated with particular diamonds, regions of spacetime, or even particular world lines in the spacetime. I don't think it's really possible because this association seems to assume exactly the kind of locality – well-definedness of locations in the spacetime and even well-definedness of exact world lines – that the holographic nature of the new framework (and sometimes even the quantum fuzziness of the old type) should reject.

Moreover, the world lines of "all conceivable observers" seem like an even less well-defined collection of concepts because these world lines should also include non-differentiable, almost everywhere time-like trajectories (because they do contribute to the path integral) and if you look at these trajectories with too high a resolution, you must run into a conflict with quantum gravity's ban on "probing too short distances". Quite generally, his picture seems to make the metric tensor degrees of freedom play too classical a role; it disagrees with the dictum "spacetime is doomed" that I surely believe to be right.

So for these and other reasons, I don't really believe that there's a way within quantum gravity to exactly associate a Hilbert space with a finite causal diamond and/or a world line of an observer in this causal diamond (despite the fact that I do think that something could or should become more well-defined or simpler for regions with null boundaries). The previous statement should be viewed as a no-go theorem (well, a hypothesis) about possible formulations of string/M-theory or another hypothetical consistent theory of quantum gravity. It's a non-existence statement I believe to hold although I can't prove it. Tom believes the opposite one but I don't think he is proving it, either. But even if I decided to join his faith system, it is still not clear to me why it would be interesting, i.e. what interesting conclusions or insights I could derive out of this membership in the "church". ;-)

In my opinion, the association of the Hilbert spaces with finite causal diamonds – when we take it as an exact statement about a quantum gravitational theory – probably violates the holographic character of the underlying dynamics. Or using a weaker statement, if this approach works, it must come with a mechanism that erases the bulk degrees of freedom (here I mean some degrees of freedom inside the intersections of several causal diamonds or on their shared boundaries) so that only the surface degrees of freedom remain physical. I don't see any structure in Tom's "atlas of diamonds" that would fulfill this task so this "atlas of diamonds" seems effectively local in the sense of a local effective QFT.

Another technicality that seems to be an artifact of misleading intuition is the spinor bundle and the "qubit" composition of the Hilbert space. There is no reason why the Hilbert space dimension should be a power of two; it's mostly an incorrect paradigm believed by those who understand our ordinary computers (which just happen to be based on binary numbers, but even this could be done differently – and in fact, it has been) more than they understand physics (a group that doesn't include Tom, as the glitches with his laptop during the talk showed). In fact, in the examples of interesting systems in quantum gravity we may describe, the Hilbert space dimension is never a power of two (think about the microstates of a Strominger-Vafa black hole with some charges). The entropy is never an integer multiple of \(\ln 2\). After all, it has no reason to be.
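
To see the point on a concrete example, one may take the leading-order Strominger-Vafa entropy \(S=2\pi\sqrt{Q_1 Q_5 n}\) of the D1-D5-momentum system and check that \(S/\ln 2\) isn't an integer for generic charges, i.e. that the number of microstates isn't a power of two. A minimal sketch with a few random sample charges (the exact microscopic degeneracy is of course an integer, but generically not \(2^k\)):

```python
import math

def strominger_vafa_entropy(q1, q5, n):
    """Leading-order entropy of the D1-D5-P black hole, S = 2*pi*sqrt(Q1*Q5*n)."""
    return 2 * math.pi * math.sqrt(q1 * q5 * n)

for charges in [(1, 1, 1), (2, 3, 5), (10, 10, 100)]:
    S = strominger_vafa_entropy(*charges)
    print(charges, " S =", round(S, 3), " S/ln2 =", round(S / math.log(2), 3))
# S/ln(2) comes out non-integer, so exp(S) is not a power of two
```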

Again, even if we joined Tom (and others, in the case of this particular assumption) in his belief that these special Hilbert spaces constructed out of qubits play a special fundamental role in quantum gravity, what of it? What encouraging conclusions (either unifying conclusions or those that nontrivially agree with some facts of Nature we know either from experiments or from "mathematical experiments" we did with particular vacua of string/M-theory) could we derive out of this assumption besides the assumption itself? In other words, why? If there's no answer to these "why" questions, is it surprising that most people don't buy these proposals?

More generally, Tom seems to view the dimensionalities of the Hilbert spaces associated with the diamonds as entries on his (never completely specified) list of axioms. This very ambition seems very bizarre for a fundamental theory of spacetime because the dimension should be derived from a more elementary starting point: it's a thermodynamic property of the spacetime and its subsets. It's not clear to me how a precise, i.e. microscopic, theory of the spacetime could ever result from some properties of the physical system (spacetime) in the macroscopic regime, i.e. from the thermodynamic limit, which – by the basic character of thermodynamics – only contains a very small amount of information about the microscopic theory. Quite typically, when we have a well-defined framework, the dimensions are the dimensions of representations of an algebra of operators and we must know the whole algebra, not only the numerical values of the dimensions.

In a later part of the talk, Tom suggests that his/their insights are at least compatible with, if not (partly?) equivalent to, some cute proposals by Ted Jacobson. Now, I would positively interpret Jacobson's work as a nice way to "localize" some Bekenstein-Hawking-style efforts to relate general relativity with thermal phenomena. Bekenstein and Hawking would only tell you things like \(S=A/4G\) and \(T=a/2\pi\) (where \(a\) is the gravitational acceleration at the event horizon; I wrote the formula in its Unruh-effect form here) for some particular static global solutions etc. Jacobson is associating temperatures and entropy densities with small places in the spacetime and reinterpreting Einstein's equations as some equations of thermodynamics applied to the situation in which the physical system of interest is the dynamical spacetime.
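
To put the quoted formulas into numbers, here is a small sketch of mine that restores the \(\hbar, c, k_B\) factors in \(S=A/4G\) and \(T=a/2\pi\) and evaluates them for a solar-mass Schwarzschild black hole:

```python
import math

G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23   # SI units
M = 1.989e30                                  # one solar mass in kg

r_s = 2 * G * M / c**2                        # Schwarzschild radius
A = 4 * math.pi * r_s**2                      # horizon area
S_over_kB = A * c**3 / (4 * G * hbar)         # S = A/4G in Planck units
a = c**4 / (4 * G * M)                        # surface gravity (an acceleration)
T = hbar * a / (2 * math.pi * c * kB)         # T = a/(2*pi), i.e. the Hawking temperature

print("Schwarzschild radius: %.0f m" % r_s)   # about 3 km
print("entropy S/k_B: %.2e" % S_over_kB)      # about 1e77
print("Hawking temperature: %.2e K" % T)      # about 6e-8 K
```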

I think it's cute, much of it is right, but many of the extra words that Jacobson has said about those things are self-evidently wrong, too. Many statements in related papers seem to be extremely loosely connected with the actual calculations that do work. Effectively, Jacobson is telling you "if you liked this calculation, then you must also agree with this and that because it's being said by the same Jacobson". Sorry, that's not how a rational careful person increases the collection of things he believes to hold. Tom's proselytizing strategy seems to be a bit similar. Too many statements just don't follow from any arguments or calculations.

A particular feature of Tom's approach is that quantum gravity is presented as some kind of "quantized hydrodynamics". Andy Strominger co-led the program to relate Einstein's equations near the event horizon with the equations of hydrodynamics (Navier-Stokes equations in various forms). I sort of understand what's going on over here although some questions remain. But it's clear that there's a mathematically nontrivial yet verifiable map between two a priori distinct systems of equations.

Tom's picture seems to be different. The calculations seem to be missing. On the other hand, Tom seems to extract much more far-reaching conceptual conclusions out of this (hypothetical) map. But if there's some evidence for these connections, I fail to see it again.

Well, the main problem here is that we really shouldn't quantize hydrodynamics because hydrodynamics is an effective theory of a high-entropy system. It ignores lots of degrees of freedom whose existence is needed for the high entropy. If I am a bit more explicit, what I mean is that we know that "a liter of [static] water at room temperature" doesn't correspond to a unique quantum state in the Hilbert space because all the water molecules are still very complicated and chaotic, stupid. However, in hydrodynamics, this object effectively does correspond to a unique classical configuration (vanishing speed, constant temperature etc.). That's really why hydrodynamics isn't and can't be a detailed microscopic, fully quantum theory of the liquid.

If you ask what the main feature is that allows an effective theory to be both useful and intrinsically non-fundamental, you will realize that it's the incorporation of "local temperature" among its degrees of freedom. According to statistical physics, the temperature is only a meaningful quantity describing a physical system once we start to look at ensembles of microstates – many microstates in a set that we effectively refuse to distinguish. And because we don't distinguish them, we're inevitably overlooking some microscopic degrees of freedom. If we're not overlooking anything, we simply can't associate a temperature with generic states because a fixed temperature comes with very special probability distributions for the microstates and the most generic microstates just can't agree with the thermal distributions for any \(T\).
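
A toy numerical version of the previous paragraph (entirely my own illustration): draw a random probability distribution over a few energy levels and try to fit it by the Boltzmann shape \(p_i\propto e^{-E_i/T}\); for a generic distribution, no temperature works.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.arange(8.0)                             # energy levels 0, 1, ..., 7
p_generic = rng.random(8)
p_generic /= p_generic.sum()                   # a generic normalized distribution

def boltzmann(T):
    w = np.exp(-E / T)
    return w / w.sum()

# crude scan over temperatures, looking for the best thermal fit
Ts = np.linspace(0.1, 50.0, 5000)
residuals = [np.abs(boltzmann(T) - p_generic).sum() for T in Ts]
print("best T: %.2f,  residual: %.3f" % (Ts[int(np.argmin(residuals))], min(residuals)))
# the residual refuses to go to zero: a generic distribution isn't thermal for any T
```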

Once you understand and endorse the previous paragraph, you should agree with me that all effective theories that use the concept of a temperature are inevitably non-fundamental, effective theories for high-entropy systems that overlook some degrees of freedom. Bekenstein, Hawking, and perhaps Jacobson have told us how to construct such a description of black holes and quantum gravity. But it's an inseparable part of the picture that this framework simply can't be microscopically exact, it can't tell us about any details how the entropy is actually encoded. Tom – and perhaps Jacobson – seem to disagree with the argument above (it's supposed to be a full proof, not just an invitation for you to believe or not believe something) and I think that they're manifestly wrong about this point.

Also, Tom says a mostly correct thing: that Hawking evaporation results from the interactions of the quantum fields around the black hole – which kind of admit a local description – with the degrees of freedom of the black hole or its event horizon that are inherently non-field-theoretical. I agree with that. It's just an interaction between a system whose microscopic structure (local fields: they're OK outside the black hole) seems to be kind of understood; and degrees of freedom (inside the black hole, those that carry the bulk of the black hole entropy) that seem to be mysterious.

While I stay positive, I should mention that this paradigm explains why it's so hard to uniquely answer the question "Where is the entropy of a black hole located?". There can't be a canonical, good answer because the black hole doesn't respect the rules of locality, as we know them from the effective quantum field theories, so it doesn't have to organize the degrees of freedom by their location.

However, I would still be more cautious when I am saying similar things. I would say that the black hole dynamics and theories describing the black hole evaporation may work even if they say nothing about the location of the degrees of freedom, even if they reject the quantum-field-theory-like paradigms for the bulk of the black hole entropy. However, there may still exist some "dual descriptions" of the set of black hole microstates that effectively do such a thing (black hole fuzzball solutions are a major example) and I don't see anything wrong with their hypothetical existence. Even if they exist, they don't hurt; they're just a new "dual" way to look at the Hilbert space of the microstates. So I don't understand how it could ever be helpful for learning something if we assumed that such a local description of the degrees of freedom mustn't exist. Again, it looks like another assumption by Tom that seems completely useless because nothing useful may ever come out of it.

A few paragraphs above, I mentioned that descriptions with the temperature are inevitably non-fundamental. Jacobson effectively relates curvature tensors with local temperatures of a sort so this description with curvatures – general relativity – isn't a fundamental description of the microstates, either. Indeed, any particular vacuum of string theory is producing lots of degrees of freedom that go well beyond the general theory of relativity. The metric tensor is just the ground state in an infinite tower of excited closed string states, if I mention the most transparent perturbative string theory example.

Tom seems to say that his axiomatic framework can say lots of things about the fundamental microscopic laws of physics, perhaps the spectrum of particles etc. It seems very obvious to me that this can't be the case. Because he incorporates the temperature (by identifying it with the spacetime curvature, according to some of those Jacobson's recipes), it is inevitably just an effective, thermodynamic description of some properties of the quantum gravitational system.

I think that Tom has never derived anything about the spectrum of fields or particles out of his starting point. It could be just because I have overlooked something he has said or written but the previous paragraph contains an argument that tells me that such a derivation can't really exist. In other words, Tom is working with some effective description – or some vague properties of it – and assumes that it must contain the deepest definition of "a theory of everything" even though it's strikingly obvious to me that it can't contain such a thing.

Again, Tom differs from the people whose proposals drive me up the wall because they are ignorant about most of the established science – he knows the physics up to some relatively recent point of departure very well and he also knows how to present it. But despite this "promising DNA" and even "welcome conclusions", the newly proposed ideas seem frustratingly illogical to me.

Also, Tom meaningfully concludes that there are no firewalls but his arguments seem to be incomprehensible to me, too. And they seem to have very little overlap with the reasons why your humble correspondent or e.g. Raju and Papadodimas and others think that the AMPS proof is wrong. I can't get rid of the feeling that Tom hasn't attacked the detailed microscopic steps in the AMPS argument at all – he is rejecting everything based on his opposition to the framework of the effective field theory. But even though the effective field theory is misleading about many points, it must still be very accurate and nontrivially sensible when it comes to many other points because this success has been verified experimentally. So Tom's refusal of the effective field theory even as a "standard that a deeper theory must emulate when it comes to the everyday life predictions" seems like a case of the baby thrown out with the bath water and it's hard to believe anything based on this "very revolutionary" approach.

Saturday 30 March 2013

01 result from AMS-02 on 03/04 at 05 pm

Do you know which number would be the next one?

Six weeks ago, Sam Ting claimed that not too uninteresting results from the Alpha Magnetic Spectrometer would arrive in two or three weeks.



It didn't quite work; it's plausible that they were waiting for reviewers to say OK, something they may have expected to be just a painless or painful formality back in February. Now we're told that it's at least 6.5 weeks but this timing should work. Why?




As tweeted by various antimatter tweeters and Twitters, there will be a talk at CERN on Wednesday, April 3rd.




The speaker is Samuel Ting (MIT) and the title is simple:
Recent results from the AMS experiment
The one-hour talk begins at 17:00 Pilsner Summer Time (which starts tomorrow) which should be 11:00 Boston Daylight Saving Time and it will be webcast by CERN. What the talk will say remains to be seen but it could be lots of fun (or less fun).

Just if you missed it, this powerful apparatus carried by the International Space Station could have easily discovered the dark matter particle or it could have undiscovered it or it could have done something entirely different, too. ;-)

Friday 29 March 2013

Antiprotons obey CPT within 5 ppm

Just a neutrino link: Two weeks ago, MiniBooNE reported some results that seem to conflict with the standard model of 3-flavor neutrino oscillations. Hat tip: Joseph S.
In January 2013, the ATRAP Collaboration that includes e.g. Gerry Gabrielse – an ex-colleague of mine who also led the most accurate measurement of the electron's magnetic moment, the most accurately verified prediction in all of science – published the preprint
One-Particle Measurement of the Antiproton Magnetic Moment
that finally made it to PRL this Monday. The article was accompanied by a popular review by Eric Hudson and David Saltzberg – who is also famous as the flawless science consultant behind The Big Bang Theory CBS sitcom; he's the man who makes sure that Sheldon Cooper in particular doesn't talk gibberish.



Saltzberg with Bill Prady. What part of 41 53 43 49 49 don't you understand? It does sound like a Sheldonite question...

The experimenters threw some of their Harvard devices into their luggage and pockets and flew to CERN where the (\(5\MeV\)) antiprotons are cheap and abundant. A happy place, indeed. By measuring some frequencies of transitions in a magnetic field, they could quantify the magnetic moment of the antiproton – how strong a magnet each antiproton is. And yes, except for the sign, the result agreed with the figure for the proton within the antiproton measurement's 5-parts-per-million (0.0005%) error margin.




The proton's magnetic moment is measured with accuracy that is about 500 times better than for the antiproton. I think it's because you don't have to be afraid of the protons so much and they are cooler. When you are trying to determine how smooth the skin of various animals is, you may also get a more accurate measurement for a kitty than for a tiger.




I have written about CPT, CP, C, P, T, and all that many times, e.g. in November 2012, so I won't do it again. Here, let me just mention that while C,P,T,CP have been found to be broken in Nature, CPT has been proven to hold in a Lorentz-invariant Lagrangian quantum field theory since the mid 1950s when it was demonstrated by Wolfgang Pauli, John Bell, and Gerhart Lüders.

This symmetry – which replaces all particle configurations by their mirror images composed of antiparticles and runs everything backward in time – has to hold because CPT is the right interpretation of the rotation of the Euclideanized spacetime in the \(tz\) plane by \(\pi\), and this has to be a symmetry because it belongs to the analytic continuation of the Lorentz symmetry.

The CPT-symmetry has many trivial consequences. For example, when you talk about the mass of the proton, there's just one number. We don't have a special mass for the "left proton" or the "right proton", a special mass for the "proton observed forward in time" and the "proton observed backward in time". It follows that as far as the value of the mass goes, the transformations P and T are irrelevant. CPT effectively reduces to C and it says that the masses of particles and their antiparticles have to coincide.

(Most recently, this was verified at the LHC for tops and antitops, despite some previous implausible statements by CDF at the Tevatron.)

Similarly, there are just two possible values of the magnetic moment – one for the proton and one for the antiproton – and it's equally clear that the CPT-symmetry can't do anything else than to relate these two constants and demand that their magnitudes are equal. This is what the ATRAP Collaboration verified at a rather impressive accuracy.

I would think that if the top quark and the antitop antiquark had masses that would differ by \(3\GeV\) i.e. 2 percent or so, as previously claimed by the CDF, a comparable asymmetry would probably exist for other quarks as well and it would be more or less unimaginable that the antiproton – which is a rather complicated bound state of quarks and gluons that feels "everything" – would have the same properties such as the magnetic moment as the proton, within 5 ppm.



Your humble correspondent is a theoretically inclined person who takes all the proofs of the CPT-theorem very seriously and I think that the CPT-symmetry simply has to be exact. But even if you ignored all of theory, there are still experimental constraints. Measurements such as the measurement of the magnetic moment of the antiproton (and the proton) show that the typical differences between the properties of rather ordinary particles and their antiparticles are smaller than several parts per million (several times 0.0001%).

For the CDF to claim that they see evidence of a 2% difference between the top and the antitop was an immensely extraordinary statement that required extraordinary evidence and their 2-sigma deviations, probably caused by some human errors anyway, were simply not adequate evidence to back similar statements.

At any rate, congratulations to ATRAP.

Reunification of Korea

Almost exactly 10 years ago, in March 2003, the U.S. invaded Iraq. I had mixed feelings about that decision and I still have mixed feelings. At any rate, Iraq (#2) used to belong to the Axis of Evil, as defined by George W. Bush, along with Iran (#1) and North Korea (#3). Quite certainly, one original member of this axis is no longer a member.

Note that John Bolton has defined the "Beyond the Axis of Evil Group" composed of Cuba, Libya, and Syria. Cuba has softened somewhat and Raul Castro is planning to retire; the Libyan regime has been overthrown so a re-evaluation would be appropriate; and Syria is in a state of civil war; we will see what comes out of it. (Condi Rice has talked about the "Outposts of Tyranny": Belarus, Burma, Zimbabwe. Not much has changed about those, AFAIK.)

You see that the Axis of Evil, both the core group and the extended one, has gotten weaker or less scary or nicer, if I exaggerate a bit. At least some good news. This evolution may continue.




For quite some time, I thought that Iran was the most likely new place where dramatic developments – such as a war with the West and Israel – could take place. That's because of Iran's work on nuclear technology that may be violently abused as soon as the mullahs give new orders. But Iran has convinced us – or fooled us – that not much is happening. They even plan to stop the enrichment (perhaps because they already have enough?).

So quite surprisingly, North Korea is the new epicenter of tensions of possible looming wars.




The last military hotline between North Korea and South Korea has been cut off by Kim III, the ruler of North Korea. They systematically unveil their seemingly childish propaganda plans to destroy the United States and some centers in South Korea – a country they primarily blame for being a puppet of the U.S. We don't know whether anything should be taken too seriously. But Kim III is an untested young ruler and he may be willing to establish himself with dramatic actions.

Meanwhile, South Korea is full of love and compassion. The asymmetry of the situation is kind of amusing to me.

At any rate, it's somewhat plausible that a conflict could lead to a rearrangement of the order on the peninsula. South Korea has had a reunification ministry for quite some time and it works on the assumption of a peaceful evolution. I am not sure whether it's the right expectation for Korea but fine.

The possible democratization of North Korea would be an interesting event for the regional and even global economy. Note that South Korea has about 50 million citizens and $20,000-$30,000 GDP per capita (nominal/PPP). North Korea has 25 million citizens and below $2,000 GDP per capita.

Despite the much more bloody and war-like relationships between the two Korean states, it seems sensible to compare the situation with the reunification of Germany. For West Germany, it was a doable task to "invite" all the East Germans and artificially bring them to the West German level. After all, the East German population was 4 times lower than the West German one.

For South Korea, it may be a somewhat harder task because the population ratio is 2 and not 4. However, perhaps with some help from the West, a shock therapy meant to incorporate North Korea into the democratic Korean state as soon as possible could be a feasible plan. The North Korean savers could get some money for their savings, using a generous exchange rate, some extra welfare payments per person to make the transition smoother, but the rest of the economy could be instantly liberalized.
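
Using nothing but the rough figures quoted above, a back-of-the-envelope sketch of the sizes involved (all numbers are the crude 2013 estimates from the previous paragraphs, not any official statistics):

```python
# rough figures quoted above: South 50 million people, $20k-$30k GDP per capita;
# North 25 million people, below $2,000 GDP per capita
south_pop, south_gdp_pc = 50e6, 25000.0    # taking the middle of the $20k-$30k range
north_pop, north_gdp_pc = 25e6, 2000.0     # "below $2,000", so an upper bound

print("South GDP ~ %.2f trillion USD" % (south_pop * south_gdp_pc / 1e12))
print("North GDP ~ %.2f trillion USD" % (north_pop * north_gdp_pc / 1e12))
print("population ratio South/North: %.1f" % (south_pop / north_pop))

# an illustrative transfer closing half of the per-capita gap for every North Korean
transfer = north_pop * (south_gdp_pc - north_gdp_pc) / 2
print("illustrative transfer: %.0f billion USD per year" % (transfer / 1e9))
```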

Alternatively, the nine top-level provinces of North Korea could be merged with the democratic Korean state one-by-one, e.g. in one-month intervals. The order of incorporation of the provinces could depend on "how ready they are", "how eager they are according to polls", and "how many of them want to emigrate to South Korea".

At this level, the plans sound straightforward and easy but I am afraid it could be much harder than it was in Germany because it's very likely that a huge fraction of the North Korean citizens really believe the propaganda so the non-communist side could face many more enemies than just the leaders. They're much more detached from the external reality, I think. On the other hand, I think it's partly due to the lack of leadership in the West. The West should try to provide the North Korean citizens with much more information in some way and actually outline the alternatives. It seems painful that these things aren't really happening much.

Intuitively, I feel that the North Koreans would turn out to be at least as hard-working as the South Koreans and the addition of this territory and population to the global free economy would be an improvement for importers as well as exporters across the globe.

Thursday 28 March 2013

Irrational dissatisfactions with physics

When people don't understand certain issues in physics and ask questions, e.g. at the Physics Stack Exchange, they understandably seem unhappy about something. When something seems strange or doesn't make sense, it's sensible for people to feel somewhat disturbed or unhappy. All of us know the feeling.

In some cases, the dissatisfaction depends on a technical result and there are many of them to be learned. However, I would say that way too often, people are dissatisfied because of reasons that are utterly non-technical and that are universal.




One source of such dissatisfaction has been discussed often: certain results or principles in science are counter-intuitive and many people simply want to prefer their intuition over whatever evidence you may get. They're not ready to give up intuition and accept the scientific method. Quantum mechanics is the most frequent victim – but not the only victim.

After all, the reason usually is that these people believe that science is just a slave that "fills the details" and isn't allowed to touch some important questions that are decided by "someone more powerful than science" which means either intuition or religion or ideology or philosophy or another package of scientifically unsubstantiated dogmas and prejudices.




But the laymen often get dissatisfied because of slightly different reasons, too. One of them is closely related to the "intuition" that I have already mentioned. People often want some "intuitive understanding" of concepts, equations, laws, and theorems in physics. Sometimes they call it their "physical meaning" but it's really the same thing because if you ask them why some accurate description of a concept isn't a "physical meaning", they end up saying that it isn't intuitive for them.

An example.

Sebastian asks a question suggesting that he understands the claim of the virial theorem and its proof (well, one of several proofs that differ by the strategy and the choice of ensemble); he seems to be using this theorem in some programming related to molecular physics. But he's not satisfied:
Right now this is just some math to me (which I totally get) to calculate the temperature of a system of particles in thermal equilibrium. Is there more to it? Am I not getting it? What is the intuition behind this?
What to do with this dissatisfaction? How is one supposed to "answer" it? I had a friendly chat with the chap but I am just not getting where the feeling comes from. I am not sure whether he understands what I was telling him. It just remains a mystery because the dissatisfaction is ultimately powered by some totally irrational drivers.

I told him that "just some math" is a loaded expression because it refers to mathematics in a disrespectful way. Mathematical results and their mathematical proofs are the most solid – and the only truly solid – results and proofs we really have in science. The virial theorem is undoubtedly such a mathematical result. But it's not "just maths" because the objects in the theorem have a physical interpretation.



A cute new time-lapse video by Andrew Borisiuk of my hometown of Pilsen (where the roof of a movie theater in the Plaza Mall collapsed today). YouTube. Vimeo.

Now, the importance of the virial theorem in statistical physics is also self-evident. Statistical physics is about the computation of average values of various quantities in statistical ensembles – that's really what the adjective "statistical" means. It means that statistical physics is all about such computations and if some average value may be explicitly determined, of course this portion of physics or mathematical physics and its practitioners have to be interested in such a conclusion.

The particular quantity appearing in the virial theorem is special because its average value simplifies to a result proportional to the temperature; and because this quantity is often equal to kinetic or potential energy or other natural or simple functions of positions or momenta (e.g. their powers, in a popular example of the theorem). So of course we're interested in their average values if we're interested in average values at all – if we're interested in statistical physics at all! Moreover, quantities that aren't exactly addressed by the virial theorem may often be "similar" to those that are addressed by it.
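
To show there's nothing mystical hiding behind the theorem, one may verify a special case numerically. A minimal sketch of mine (not Sebastian's code): for one classical degree of freedom with \(V(x)=a|x|^n\) in the canonical ensemble, the virial/equipartition identity \(\langle x\,V'(x)\rangle = n\langle V\rangle = k_B T\) follows from direct integration over the Boltzmann weight.

```python
import numpy as np

def thermal_average(f, V, T, x):
    """Canonical-ensemble average of f(x) with Boltzmann weight exp(-V(x)/T), k_B = 1."""
    w = np.exp(-V(x) / T)
    return float(np.sum(f(x) * w) / np.sum(w))

T, a = 1.7, 0.5
x = np.linspace(-50.0, 50.0, 200001)     # fine grid, wide enough for the weights below

for n in (2, 4, 6):
    V = lambda x, n=n: a * np.abs(x)**n
    xVprime = lambda x, n=n: n * a * np.abs(x)**n    # x * dV/dx equals n*V for a power law
    print("n = %d:  <x V'(x)> = %.4f,  n<V> = %.4f,  k_B T = %.4f"
          % (n, thermal_average(xVprime, V, T, x),
             n * thermal_average(V, V, T, x), T))
# both averages reproduce k_B T, i.e. <V> = k_B T / n, as the theorem says
```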

We have spent some time with the proof, too. It's rather simple and it may even be simplified or replaced by approximate arguments – this goes up, this goes down – showing that the result matches all the expectations and/or reduces to some "simple common sense" results in special cases. Nice, but the dissatisfaction didn't go away.

One may also review the history – how the claim was first known for the harmonic oscillator, kinetic and potential energy, and then it was generalized because with some data about particular examples, it's not hard to guess what the generalization looks like and it's not hard to construct the proof even if you're the first one. Sebastian told me he wasn't really interested in the history i.e. how people got it in the actual history of science.

So what is he interested in and asking about? What is he dissatisfied with? I am absolutely not getting it. And I find it sad that people are dissatisfied in this way because physics and its results – including things like the virial theorem, if we stay in the modest waters of classical physics etc. – are very exciting. Sebastian clearly doesn't see this exciting nature of physics at all. 99.99% of the people don't see it, either. I don't know why. I don't know what's stopping them. But I am annoyed by that and I still refuse to believe that whatever the obstacle is, it cannot be destroyed, nuked, neutralized, liquidated in some way. I surely want to liquidate it because while I like physics, I am annoyed by the sour faces and frustrated whining that any mention of physics (especially advanced physics) immediately ignites in a more normal human society (and sometimes even elsewhere)!

Needless to say, the exchange may have stayed friendly, especially in the chat, but it just didn't lead anywhere and couldn't lead anywhere. The obstacle preventing people from understanding the actual key results that form the skeleton of our physics knowledge is a formidable enemy. I don't know how to nuke it and destroy it. I don't even know how to locate it. ;-) Anyone can help me?

Another example is a minor curiosity but the lesson of the following story is also much more general. Terry asked whether the Higgs mechanism addresses the "spin-statistics problem". Cute. Now, there is no spin-statistics problem. There is a spin-statistics relationship or spin-statistics theorem and it's a good thing, an insight (and theorem) about Nature, not a problem, so it shouldn't be "addressed" but learned, exploited, and celebrated. I feel that the wrong idea that some important result in physics is actually a "problem that should be wrestled with" is also a rather general misunderstanding that appears quite often. It must come from somewhere. It must come from some negative emotions that are being conveyed together with the technical material and that make people feel that there is a problem even if the instructor or textbook etc. never says such a thing.

Maybe Terry saw or heard that there was a problem with theories that combine half-integer spins with the Bose-Einstein statistics or vice versa. Indeed, it's a problem for these theories – they're dead – but for us, the death isn't a problem. It's a precious piece of knowledge. Maybe Terry and others think that the ban is a problem because everything should be allowed in physics. Anything goes. Well, that's not how Nature works and thank God for this fact. Every new insight we make proves that "anything goes" is even more wrong than previously thought. There are laws, there are patterns, only some statements are right while others are not.

Or take this bizarre question about the measurement of the cosmological constant. QSA suggests that when people say that they measure the constant, they are actually calculating it, so they're sloppy about semantics and terminology. Holy cow. Why would they say "measure" if they meant "calculate"? They're completely different verbs, aren't they? Now, a measurement of the cosmological constant sometimes needs to do some calculation aside from the "manual work with the measurement apparatus" but that's true for any other measurement of anything, too. The difference between a measurement and a calculation is that we still need some observational apparatus so the determination of the cosmological constant is really not a (mere) calculation.

(No one knows how to uniquely calculate the observed value of the cosmological constant, due to the largeness of the "landscape" and the absence of a known selection mechanism etc. When people learn how to perform this calculation, it will probably mean that the theory of everything has been fully understood. On the contrary, the measurement of the cosmological constant is a standard procedure in cosmology these days; the 2011 Nobel prize in physics was awarded for these achievements.)

This guy sees people who are saying "measure" but he hears "calculate". He convinces himself that they must be saying something else than they are saying. Why? What leads him to incorporate these random distortions and misprints into sentences he hears? What leads him to believe that the mistake is created by those who use the verb (and say that they're sloppy about semantics) – and not by himself, a person who manifestly fabricates what he's actually hearing?

A user named rsg asked about the applications of entangled states. The theme is much more general here, too. He's asking what the concept is good for. He is apparently assuming that physical concepts are always "objects" that are about as large as a car, that have an application, and that must be justified by an application for them to be allowed to exist at all.

But this is a complete misunderstanding what science tries to do. Science is meant to understand how Nature works. It is not a collection of gadgets we collect to improve our lives. Moreover, different concepts in science may refer to situations and objects that vastly differ in their frequency or omnipresence. In particular, entangled states aren't a device that you use twice a day, like your car. Almost all states in the Hilbert space – all states describing Nature – are entangled (and non-maximally entangled). Sometimes we don't even talk about the situations in this way. We don't say they're entangled states. Most people don't realize that they're observing objects in entangled states. But there are still entangled states everywhere.

So it's really a very bad question to ask for an application of a concept such as an entangled state. It's analogous to asking what prime integers are good for. I don't know. They're numbers that can't be factorized which may be relevant, useful, or harmful in various situations. Of course one can't pick any single representative example – one that would be as visualizable as the car ride from A to B. But this non-specificity doesn't mean that prime integers or entangled states aren't essential in maths or physics. When we want to understand or predict the behavior of physical systems, we need to think in terms of propositions that do talk about entangled states (and other things). Entangled states manifest themselves by a certain behavior that's pretty clear from their very definition: they lead to correlations in the measurements of various quantities. So they matter. It's nonsensical to ask about a particular application that would be as specific as a car ride. Such a question is really missing the reason why concepts exist in physics. They're not designed for one particular goal such as a car ride. They're invented to organize our knowledge in lots of ways. Also, unlike cars or animals, they don't have a contrived inner structure. On the contrary, they try to be as sharp and simple as possible, to help us quickly get to the heart of the problems (I don't mean bad problems, I mean any questions we want to understand!).
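
Since those correlations are really the entire content of the concept, here's a tiny sketch of mine (nothing from the Stack Exchange thread): the two-qubit singlet state, in which each individual measurement of \(\sigma_z\) is 50:50 random while the two outcomes are perfectly anticorrelated.

```python
import numpy as np

sz = np.diag([1.0, -1.0])                      # Pauli sigma_z
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def expectation(op, psi):
    return float(psi.conj() @ op @ psi)

print(expectation(np.kron(sz, np.eye(2)), singlet))   # 0.0: the first spin alone is random
print(expectation(np.kron(np.eye(2), sz), singlet))   # 0.0: so is the second one
print(expectation(np.kron(sz, sz), singlet))          # -1.0: perfect anticorrelation
```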

Needless to say, this discussion has pretty much nothing to do with entangled states. It's much more general. Entangled states were picked as a scapegoat, a particular concept that the user hasn't yet internalized or understood, for that matter. But the same thing occurs to pretty much all concepts or theories in physics that differ from a "car", something that does a specific service to the people.

I have discussed the value of the virial theorem but let me wrap up this blog entry with another dissatisfaction about thermodynamics and statistical physics – one that boils down to a general feature or virtue of these disciplines. Douglas complained that no one ever says which microscopic interactions are responsible for the emission of the black body radiation from a heated solid body.

This is potentially a good question from a beginner but once you see that Douglas vehemently refuses to listen to the answer, you will realize that it's not a good question at all. It's another irrational roadblock, an additional brain defect that should be liquidated but I don't know how to do it.

The answer is, of course, that thermodynamics and to a large extent even its more constructive and more microscopic underlying theory, statistical physics, are pretty much defined by their ability – or desire – to predict certain general features of large macroscopic objects (with a temperature) by methods that don't require us to study every microscopic detail of how these conclusions arise; methods that are largely independent of the identity of the elementary building blocks and their fundamental interactions. And the wonderful thing is that it is possible!

In fact, all available interactions – dipole interaction between the atoms and the electromagnetic fields, and all others – are employed in a very chaotic way when the black body radiation is being emitted. For a large enough near-black body, it would be a hopeless task to follow every interaction that takes place and study how they "conspire" to produce the Planck curve. But statistical physics is nevertheless able to predict some macroscopic properties of the final result – e.g. the whole Planck's black body curve – without knowing which interactions are actually transferring the energy from the solid body to the electromagnetic field!

(The reason is that after a time that is short enough if the interactions are sufficiently strong, the electromagnetic field – and any other object – reaches the thermal equilibrium with its surroundings and the statistical properties and distributions of photons in thermal equilibrium are completely calculable and only depend on the temperature. We can exactly predict the Planck curve even though we don't know which microscopic processes created which photons.)
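
A minimal sketch of what "completely calculable" means here (my illustration, of course): the Planck spectral density of the radiation in equilibrium only needs the temperature as an input, and integrating it over frequencies reproduces the Stefan-Boltzmann \(\propto T^4\) energy density with no reference to the microscopic emission processes.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck_energy_density(nu, T):
    """Energy density per unit frequency of black-body radiation, in J*s/m^3."""
    return (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (kB * T))

T = 5772.0                                     # roughly the solar surface temperature
nu = np.linspace(1e11, 5e15, 2000000)
u_total = np.sum(planck_energy_density(nu, T)) * (nu[1] - nu[0])

# the Stefan-Boltzmann value: u = a*T^4 with a = 8*pi^5*k^4/(15*h^3*c^3)
a_rad = 8 * np.pi**5 * kB**4 / (15 * h**3 * c**3)
print(u_total, a_rad * T**4)                   # the two numbers agree to a small fraction of a percent
```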

Now, this fact is wonderful news. We're able to learn something pretty much exactly without doing some messy work. It's a gospel. And this is the type of trick – alternative arguments that work when and because the number of degrees of freedom in thermodynamic systems is large – that both thermodynamics and statistical physics are using all the time. These disciplines are not about the analysis of one or several elementary particles, about the reduction of everything to particular microscopic processes, about the focus on a particular fundamental force. Instead, they don't care much what the microscopic architecture is but they may still derive certain statements about the statistical properties of a large number of atoms, thermal properties of macroscopic bodies, and so on.

Statistical physics and thermodynamics are self-evidently important and this sort of tricks – the ability to calculate things without analyzing every microscopic detail – surely belongs to their "defining character". So Douglas' dissatisfaction with these disciplines' not being specific about the microscopic interactions that dominate etc. is a dissatisfaction with the basic character of statistical physics and thermodynamics. He is clearly rejecting the whole point of these disciplines of physics because this "independence of certain results on the microscopic details" is indeed what these disciplines are all about.

One can talk for hours but he just won't get it. In some sense, this dissatisfaction is analogous to the criticisms of string theory by the ultrashitty scumbags who don't like the very fact that string theory deals with phenomena that can't be directly tested in our labs. But again, that's the whole point of the discipline that it focuses on the fundamental processes at the fundamental scale that we've known for more than 100 years (thanks, Max Planck) to be dozens of orders of magnitude away from the conditions we may reproduce in the contemporary lab. So their unhappiness is nothing else than their admission that they just don't give a damn about physics at the fundamental scale: they're primitive uncultural bastards and scum although they use all kinds of makeup to sell their shittiness almost as a virtue.

The criticism of statistical physics and thermodynamics is analogous except that the type of knowledge that "is not welcome" is different: the insights that are independent of some mechanical microscopic calculations shouldn't be allowed, Douglas thinks. But they should be allowed and science gets richer whenever it finds a totally new way to look at things. Some people or many people don't share the sentiment behind the previous sentence because they ultimately don't give a damn about the knowledge of Nature, they ultimately don't give a damn about the truth.

It seems that the roadblocks powering all the kinds of dissatisfaction that were described in this blog entry – and others – can't be detonated because the cause of this dissatisfaction probably isn't the presence of something but the absence of something. Unfortunately, this hole can't be detonated away.

And that's the sad memo.

Waiting for peak oil: a paradox

As an enthusiastic proponent of fracking, Gene sent me a link to this NBC article
Power shift: Energy boom dawning in America
that argues, among other things, that due to fracking, the U.S. will leapfrog Saudi Arabia and Russia to become the #1 fossil fuel producer by 2020. Already today, we see amazing drops in the price of natural gas and many other things will follow. The technologies are getting better all the time. You get the point but you may find more information about these matters in the article.



Just minutes later, I opened the blog of Alexander Ač, a crazy professional Czech and Slovak climate alarmist, all-purpose falling-sky alarmist, and peak oil champion:
Resolve the paradox (autom. transl. from Slovak)
And that was quite a contrast. Alexander lives in a different galaxy than Gene. He's waiting for peak oil every second, every day.




However, the complexities of the real world are making Alexander's waiting somewhat confusing, much like the beliefs of young rabbis in "Is Electricity Fire?", a story due to Feynman. The rabbis were trying to solve the puzzle whether the sparkling elevator was fire – an important scientific question because they wanted to know whether they could use the elevator on Saturdays.

Similarly Alexander is trying to find a maximum of a function. And it's so hard!




The paradox Alexander wants you to resolve is the discrepancy between the graph of the population-adjusted number of miles driven on all roads



and the graph of the U.S. oil production since 1922:



They just disagree! In particular, the first graph shows the holy moment Alexander has been waiting for – the maximum in June 2005. But the second graph shows nothing of the sort. It's a paradox for Alexander because in his way of looking at the world, the graphs should agree so the oil production should have peaked in 2005, too! But it didn't. Since a local minimum a few years ago, U.S. daily oil production has gone up by almost 30 percent.

Now, this is obviously nonsense. If we graphed the total oil production and the total oil consumption, we could get some agreement. But these two graphs are graphs of very different quantities so there is no reason why they should have maxima and local minima at the same places. In particular, the graphs have the right to differ because:
  • the oil production graph is just for the U.S., a small part of the world production
  • the mileage graph is a graph per capita while the oil production isn't expressed on the per-capita basis at all
  • oil is used not only for vehicles but also in power plants (in poor countries), the production of plastics, fertilizers, and other things, and none of the parts has to be proportional to the "whole"
  • the mileage graph doesn't take the fuel consumption into account; fuel consumption increases for larger cars and in city traffic
  • the mileage graph probably doesn't include the miles vehicles make outside roads, especially agricultural vehicles on the fields
  • and I may have missed something equally important.
Please feel more than free to correct me and point out some difference between the graphs that is more important than the entries on my list. Which of the things is the most important one?

At any rate, a single difference above would be enough to explain differences in the locations of local extrema of the two graphs. Alexander's world view is one in which all the graphs are the same (and ideally, all of them agree with the global and local temperature and the CO2 concentration as well). It doesn't matter whether you draw barrels of oil or miles or ppm of CO2 or Celsius degrees here or there, whether the graphs are for the U.S. or the whole world, whether they're on per-capita basis or not, whether they include this or that, whether they're looking at a small fraction of the consumption or everything.
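
If the per-capita point isn't obvious, a two-line toy illustration (with completely made-up numbers): when the population grows faster than the per-capita quantity declines, the total keeps rising even though the per-capita curve "peaked" long ago.

```python
pop, miles_per_capita = 295e6, 10000.0        # made-up 2005-like starting values
for year in range(2005, 2016):
    print(year, "per capita: %.0f" % miles_per_capita,
          " total: %.3e" % (pop * miles_per_capita))
    pop *= 1.01                  # population grows by 1% a year
    miles_per_capita *= 0.995    # per-capita mileage falls by 0.5% a year
# the per-capita series peaks in 2005, yet the total grows throughout
```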

Alexander has pretended to be interested in things like oil production and oil consumption for many years but he has missed every single subtlety (well, they're not really subtleties) above. He still doesn't have the slightest clue how the world works. In fact, even when I told him about a subset of the reasons why the graphs differ and semi-joked that the mileage per capita has dropped since 2005 because people began to use Skype (so he had a signal and an opportunity to be more careful about the stuff he is writing), he showed another, less important aspect of his detachment from reality. The stocks of Skype have to be going up, he said! A nice try but there are no stocks of Skype, a company that was bought by Microsoft NL in 2011 (as everyone who is getting e-mails about the "upgrade" of Windows Live Messenger to Skype knows).

Alexander believes that there may exist only one graph, the curve should probably be smooth (another piece of nonsense but strongly believed by similar "thinkers"), and it should have a holy maximum that must be worshiped by everybody. Now, imagine how confused Alexander had to feel since the moment when he was spat out of the artificial environment of a vagina into the real world several decades ago (and he's getting increasingly confused). There are millions of graphs one may draw and each of them has different locations of the minima and maxima and his prayed-for peak oil maximum is not only non-existent but also ill-defined and utterly ludicrous and unimportant even if it could exist.

If you draw the total production or consumption – not just some per-capita figures – and if you include fracking, it's pretty clear that there won't be any peak for quite some time. Alexander will have to suffer through extra decades in which his beloved peak oil won't be coming but he will surely scream whenever he sees any blip, any foggy signal that may be misinterpreted as the Son of God of peak-oil-and-climate-armageddon-and-all-other-sky-is-falling-crackpot-stories-you-have-not-even-heard-about. He will misinterpret anything and everything that is ever seen or said as well as all the things that weren't even seen or said, hoping that his neverending sky-is-falling whining is sufficiently loud to convert his psychiatric pathologies into a reality. ;-)

Wednesday 27 March 2013

Americans see the Higgs boson, too

...sort of...

Two years ago, I was a staunch defender of the retirement of the Tevatron, the collider near Chicago, Illinois. The reason was that it just wasn't competitive anymore. The lower energy and smaller number of collisions relative to the LHC translated to a much smaller probability of a legitimate discovery per unit time – which also means a much lower expected number of discoveries per dollar.



The actual shape of the Wilson Hall differs from the fictitious Hilson/Higgson Hall above.

I think that people gradually understood that people like me were right and it was pointless to keep on running the Tevatron and these days, everyone agrees. For the properties of the \(125\GeV\) Higgs boson, the Tevatron was said to be "roughly competitive" with the LHC. Below, we will see it's not quite the case: it was weaker, too.

But when it comes to the phenomena at the energy frontier that the LHC is probing these days – and still confirming the Standard Model as of today – one may estimate that one day of the data from the Tevatron would provide us with less information than one second of the data from the LHC. The ratio of the strengths of signals approaches a million or so, mostly because the lower energy at the Tevatron just couldn't get there. Running the Tevatron along with the LHC is pointless.




The Tevatron was shut down in September 2011, after 25 years. Now, almost 18 months later, the major detector collaborations at the Tevatron, CDF and DØ, finally teamed up and evaluated all of their data about the possible Higgs boson near the mass interval we know to be relevant:
Higgs Boson Studies at the Tevatron (arXiv)
They couldn't discover the \(125\GeV\) Higgs boson at the required 5-sigma confidence level but they see "strong evidence", so to say, with the local significance of 3.1 sigma. When the look-elsewhere effect is incorporated, the significance would drop below 3 sigma.
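Just to visualize what the look-elsewhere penalty does to such a claim, here is a toy calculation of mine – with a made-up trials factor, not the collaborations' actual statistical procedure – that converts the local significance to a p-value, multiplies it by an effective number of independent mass hypotheses, and converts it back:

```python
from scipy.stats import norm

local_sigma = 3.1                        # local significance quoted by CDF+D0
p_local = norm.sf(local_sigma)           # one-sided local p-value, about 1e-3

n_trials = 2                             # made-up effective number of independent mass bins
p_global = min(1.0, n_trials * p_local)  # crude look-elsewhere penalty
global_sigma = norm.isf(p_global)        # back to "sigmas"

print(f"local:  {local_sigma:.1f} sigma (p = {p_local:.1e})")
print(f"global: {global_sigma:.2f} sigma (p = {p_global:.1e})")
```

With these illustrative numbers, the 3.1 sigma indeed drops just below 3 sigma; the real trials factor depends on the mass range and resolution the collaborations actually scanned.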




They looked at the decays in the following channels:\[

\eq{
H&\to b\bar b,\\
H&\to W^+W^-,\\
H&\to Z^0Z^0,\\
H&\to \tau^+\tau^-,\\
H&\to \gamma\gamma.
}

\] and concluded that everything they see is consistent with the Standard Model Higgs at \(m=125\GeV\).

That's nice but it's not a real discovery. The Tevatron with \(20/{\rm fb}\) (aggregate) of the collisions at \(\sqrt{s}=2\TeV\) just couldn't do better.

I can't resist pointing out that it's not just the brute force of the collider. Even the physicists seem to be slower. The LHC collaborations are already flooding the market with lots of papers using almost all the 2012 collisions – which were accumulated up to November 2012 or so. That's a turnaround of roughly five months.

The eighteen months that the Tevatron folks needed just seem too long, especially when they already know what result they're likely to get. I don't suggest that they should adjust their data to the expectation based on the LHC's work; but I do think that the amount of verification and cross-checks could have been a bit less extensive because the \(125\GeV\) Higgs boson claim is no longer an extraordinary hypothesis.

On the other hand, I do appreciate that the Tevatron Higgs work could have been slower also because of a lack of motivation, i.e. because it was no longer a priority – the Higgs boson had already been discovered by someone else. Still, I don't like many other things about the paper – it doesn't show any graph that would make the 3.1-sigma significance at the right Higgs mass manifest and that would allow us to estimate how precise their measurement of the mass is (from the width of the bump).

At any rate, congratulations to Fermilab – and good riddance, too.

Twenty years ago, people would still write assorted papers dreaming about another powerful American collider, the SSC. The titles would usually mention the center-of-mass energy \(\sqrt{s}=40\TeV\). Two decades later, we had to get more modest by a factor of five but thank God at least for what we have!

The LHC is being upgraded, a process that will last for two more years. It will be back at \(13\TeV\) in Spring 2015.

Tuesday 26 March 2013

Rosatom plans fast reactors based on U-238

Technet, a Czech sci-tech server, published an interview today with Vyacheslav Pershukov, the deputy CEO and director of the scientific-technological complex at Rosatom, the state-owned Russian nuclear corporation that manages all Russian reactors in operation.

He says many things I should have noticed half a year ago because, as Russia Beyond the Headlines mentioned in November (see also an echo in The Telegraph), there was a nuclear conference in October 2012 in a city whose name is nothing else than Prague, where they presented plans to build new "fast reactors" on Russian territory with the help of 13 Czech companies.

And they seem to be better than the nuclear technologies we are using today.




Existing nuclear power plants use uranium-235, which is rare (isolating and concentrating this isotope is what enrichment is all about), and they produce lots of long-lived radioactive waste.




To make a long story short, the fast reactors (or fast-neutron reactors) employ the nearly omnipresent uranium-238, which can be supplemented with lots of other radioactive garbage, including the radioactive waste from contemporary nuclear reactors. It's possible because the neutrons keep a different (higher) speed – the moderator is omitted altogether – and stabilization is achieved by Doppler broadening, thermal expansion of the fuel, a neutron poison, or a neutron reflector.



To make the story even better, some of these reactors are breeder reactors so they produce some new fuel along the way. The radioactive waste coming out of these reactors is a mixture of isotopes that only need to spend one year in the cooling swimming pools; plus plutonium and uranium-238 that may be recycled as fuel if they're properly separated. This separation procedure only exists theoretically at this point but they seem confident that it's possible.
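For concreteness, the breeding proceeds through the standard textbook chain – the half-lives below are the usual tabulated values, not numbers from the interview:\[

{}^{238}{\rm U} + n \,\to\, {}^{239}{\rm U} \,\xrightarrow{\ \beta^-,\ 23.5\,{\rm min}\ }\, {}^{239}{\rm Np} \,\xrightarrow{\ \beta^-,\ 2.4\,{\rm days}\ }\, {}^{239}{\rm Pu}

\] and the plutonium-239 at the end of the chain is the fissile nucleus that may either keep the chain reaction going or be extracted as fresh fuel.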

To summarize, these reactors may use what we consider waste today; their own waste is a mixture of new fuel and true waste that doesn't need to be stored for too long; they're more efficient; and they can't really explode because there's no water (but if the envelope of a sodium-cooled reactor with thousands of tons of sodium breaks, the reactor is finished – though safely so for humans).

The particular RBTH story focuses on the SVBR-100 lead-bismuth-cooled reactor (developed with the Czech companies) and the BN-800 reactor, a sodium-cooled fast breeder reactor, which is under construction in the Beloyarsk power plant in the Sverdlovsk region.

(Sverdlovsk is the communist name of Yekaterinburg, Pilsen's twin city in Russia: they have kept the name inspired by the heartless murderer of the tsar's family for the region around the city. That surely sounds fair to our Russian brothers: if one name honors a tsaritsa, Catherine I of Russia, there must be another name derived from the killer of the tsar's family. Yes, the murder of the family took place in Yekaterinburg itself. Lots of other bad things happened in our twin city. Last summer, for example, they found 4 barrels with 248 human fetuses over there. The kids could have built the adjoint representation of \(E_8\) but they were terminated...)

I hope we will see the new power plants soon enough. They hope to finish the research and design of the SVBR-100 in 2014 and run it in 2017. Independently of that, Rosatom is planning nuclear reactors for spaceships. There are two big challenges: to get such a reactor into space and to start it up over there. They're thinking about its first big test – a mission to Mars.

Meanwhile, Westinghouse claims to be ahead of its Russian competitor in the tender to complete the (not fast) Czech Temelín nuclear power plant.

Monday 25 March 2013

Speed of light is variable: only in junk media

Francis the Emule (Spanish) diplomatically agrees with me...
If you open Google Science News at this very moment, the #1 story is saying things like
new research shows that the speed of light is variable in real space.
The only problem is that the "research" is pure crackpottery. Those stories build upon the following two papers in a journal called European Physical Journal D, which I had never heard of in the context of fundamental physics:
A sum rule for charged elementary particles by Gerd Leuchs, Luis L. Sánchez-Soto (free: arXiv)

The quantum vacuum as the origin of the speed of light by Marcel Urban, François Couchot, Xavier Sarazin, Arache Djannati-Atai (free: arXiv)
The abstracts are enough to see that the authors aren't just making one or two serious technical errors. Instead, they misunderstand the very logic of science – how arguments in favor of some claims may or may not be phrased.




The first, German-Spanish paper tries to claim that the sum of squared electric charges over all elementary particle species (regardless of their mass) is \[

\sum_i Q_i^2 \sim 100.

\] This is quite a bold statement. You may try to look what is the quantum field theoretical (or stringy?) calculation leading to this condition. What you will find is that there isn't any quantum field theory in the paper at all!




Instead, the paper misinterprets virtual particles etc. in the way you would expect from a 10-year-old schoolkid. For them, the virtual particles are real and they're connected by springs of some sort. Some physically meaningless calculations lead them to the sum of the squared charges. If you try to find out where the number \(100\) came from, you will learn that it was calculated as a function of three more real parameters whose values were chosen arbitrarily.

It would be a terribly stupid paper even for a 10-year-old boy. But the authors must believe that it's possible to learn things about physics in this way even if they don't know anything about the way modern physics describes particle species and their interactions with the electromagnetic field – about quantum field theory. So one sentence in the paper refers to quantum field theory, after all. It's the last sentence before the acknowledgements and it says:
We hope that this result will stimulate more rigorous quantum field theoretical calculations.
Wow: they leave the details to their assistants whose task is to convert the ingenious findings – which contradict everything that a quantum field theory could say about these matters – into a proof in quantum field theory.

Needless to say, it's totally impossible in \(d=4\) to have a similar constraint for the sum of squared charges. At most, the sum of cubed charges is what enters the gauge anomalies in \(d=4\). But summed squared charges over particle species can't occur in physically meaningful formulae. Moreover, the number of particle species is really infinite – although most of them may have masses near the string scale or higher – so the sum is either ill-defined or divergent. In other words, it's implausible for an important physical formula to deal with particle species regardless of their mass.
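Schematically – and this is just the standard textbook statement, not anything from the paper – the consistency conditions for a four-dimensional \(U(1)\) gauge theory with Weyl fermions of charges \(Q_i\) read\[

\eq{
\sum_i Q_i^3 &= 0 \quad \text{(cubic gauge anomaly)},\\
\sum_i Q_i &= 0 \quad \text{(mixed gauge-gravitational anomaly)},
}

\] and no consistency condition ever involves \(\sum_i Q_i^2\).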



Via Gene, off-topic: Lots of music for a buck.

The other paper, the French one, is similar nonsense about the variable speed of light. The papers are being clumped together because the authors are clearly pals and they coordinated their invasion of the journals and the media. Let me repost the abstract here:
We show that the vacuum permeability and permittivity may originate from the magnetization and the polarization of continuously appearing and disappearing fermion pairs. We then show that if we simply model the propagation of the photon in vacuum as a series of transient captures within these ephemeral pairs, we can derive a finite photon velocity. Requiring that this velocity is equal to the speed of light constrains our model of vacuum. Within this approach, the propagation of a photon is a statistical process at scales much larger than the Planck scale. Therefore we expect its time of flight to fluctuate. We propose an experimental test of this prediction.
Unbelievable. Look at the first sentence. They think that they "show" that the vacuum permeability and permittivity "may" originate from the magnetization and the polarization of continuously appearing and disappearing fermion pairs. (Needless to say, there's no quantum field theory in this paper, either.) How do they achieve this ambitious task?

It's easy. They forget and ignore everything we know about physics and everything they should have learned about physics, even at the high school. In this state of perfect oblivion, one isn't constrained by any knowledge at all – because there isn't any knowledge – so anything goes and an arbitrarily stupid pile of crackpottery "may" be true and one thus "shows" that it's possible.

Except that a person who knows something about physics may show that pretty much every sentence in this paper is pure rubbish. Their particular nonsense that "may" be true as they "show" is that the vacuum is chaotic for photons so the light propagation is chaotic and the speed is variable. By saying these things, they prove that they don't have the slightest clue about the actual explanation of the existence of light that we have known since the late 19th century.

The actual explanation of light is that it's a type of electromagnetic wave. And electromagnetic waves are simple solutions to Maxwell's equations, the equations that describe all electromagnetic phenomena. These equations are particularly simple in the vacuum. Maxwell's equations in the vacuum are actually the more fundamental ones; the propagation of electromagnetic waves in other media requires some extra work.
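To remind the reader of the standard textbook logic: in the vacuum, Maxwell's equations reduce to\[

\eq{
\nabla\cdot\vec E &= 0, \qquad \nabla\cdot\vec B = 0,\\
\nabla\times\vec E &= -\frac{\partial \vec B}{\partial t}, \qquad \nabla\times\vec B = \mu_0\varepsilon_0\,\frac{\partial \vec E}{\partial t},
}

\] and taking the curl of the last two equations yields the wave equation \(\partial_t^2\vec E = c^2\,\nabla^2\vec E\) with the constant \(c=1/\sqrt{\mu_0\varepsilon_0}\). There is no room for any "noise" in this propagation.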

But in the vacuum, the permittivity \(\varepsilon_0\) and permeability \(\mu_0\) simply enter Maxwell's equations as conversion factors that disappear – that are replaced by \(1\) – if we use more natural units. The reason why I say these things is that \(\mu_0,\varepsilon_0\) are not supposed to be "derived" from any complicated mechanism involving lots of charged particles etc. On the contrary, they're players in the most fundamental equations of electromagnetism and it's the behavior of lots of charged particles that is "derived" and that can be reduced to fundamental Maxwell's equations.
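If someone wants to see the numbers, here is a trivial check of my own – obviously not anything from the papers – that the SI value of \(\mu_0\) and the defined value of \(c\) fix \(\varepsilon_0\) and reproduce the speed of light:

```python
import math

mu_0 = 4 * math.pi * 1e-7        # vacuum permeability in H/m (exact in the SI as of 2013)
c = 299_792_458                  # vacuum speed of light in m/s (exact by definition)
eps_0 = 1 / (mu_0 * c ** 2)      # vacuum permittivity in F/m follows from the two above

print(eps_0)                         # about 8.854e-12 F/m
print(1 / math.sqrt(mu_0 * eps_0))   # reproduces c, up to floating-point rounding
```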

Hendrik Lorentz was the first man who showed that Maxwell's equations in general materials may be derived from the vacuum Maxwell's equations combined with some behavior of the charged and magnetized particles that exist inside the materials. It was an important insight (it helped Einstein to think in the right way when he was marching towards relativity) and people could have been unfamiliar with this insight at some point – except that Lorentz found those things more than 100 years ago, so they shouldn't be unknown to authors publishing in a journal called European Physical Journal D in 2013.

The authors are trying to derive the light propagation in the vacuum from light propagation in some fictitious complex material – which is exactly the opposite of the strategy physics chooses (and it's obvious why it chooses the opposite one: complicated materials are more complicated than the vacuum). In other words, the authors suggest that if their contrived "additional" effects didn't exist, the permittivity and permeability would vanish in the vacuum. But they couldn't vanish. Even when all the chaos is removed, physics must be described by non-singular equations, which essentially means – among many other things – that the permittivity and permeability would still have to be finite nonzero constants in the vacuum. We know what these constants are: they are \(\varepsilon_0,\mu_0\).

But what is even more important is that the authors don't understand what is primary in science: unrestricted speculations about the ways how the world "may" work, or constraints from observations and experiments? They clearly think that it's the former. They "may" write kilobytes about nonsensical models that have nothing whatsoever to do with the Universe around us and they claim that this "shows" something.

But science doesn't work like that. We actually know that the speed of light has to be completely constant and free of any fundamental "noise". In fact, our definition of one meter is such that the speed of light in the vacuum is tautologically \(299,792,458\,{\rm m/s}\). So it's obviously constant. The constancy of the vacuum speed of light follows directly from special relativity and special relativity is what we actually know to be true from the observations. So all the speculations must adjust to this knowledge – and all other empirical knowledge we have. The authors' approach is just the opposite: they want the empirical knowledge to be adjusted to their unconstrained fantasies. They simply don't understand the basic point of science that the self-consistency of a hypothesis isn't enough for such a hypothesis to be a good scientific theory. Empirical knowledge actually matters and kills most of the conceivable guesses.

I can't resist comparing their approach with the following question that a user named John Smith asked on Physics Stack Exchange two days ago:
Why perpetual motion wouldn't be possible if we are so technological advanced?
You see some kind of fundamental misunderstanding about the inner workings of the Universe and humanity. John Smith – and similarly the authors of the papers discussed in this blog entry – doesn't get the point that regardless of the technological sophistication, every civilization, much like every object in Nature, is "obliged" to obey the laws of physics, and the non-existence of perpetual motion machines is among these laws (it follows from the first two laws of thermodynamics).

John Smith's – and the authors' – opinion about this basic issue (about the very existence of the laws of Nature) is the opposite one. He believes – and they believe – that there are no permanent laws, there are just limitations that we're constantly breaking as we're getting more technologically advanced and more self-confident. The non-existence of the perpetual motion machines (or similarly the constancy of the speed of light in the vacuum) must be just due to some limitations of technology we can surely transcend in 2013 if we want! ;-)

It doesn't make sense to spend too much time with these silly papers. So I will stop and finish this blog entry with the complaint that the adjective European in the name of the journal could be replaced by Idiots' if we wanted the name of the journal to be more accurate. And that's not a good result for this old continent of ours! At the same time, these idiotic crackpot papers are widely quoted in the U.S. and other media, so Europe is not the only continent on which similar junk flourishes.

And that's the memo.

Sunday 24 March 2013

Reagan's Star Wars: 30 years ago

Ronald Reagan gave the following 30-minute talk on March 23rd, 1983, i.e. 30 years ago:



Most of the talk is about the motivation and the situation. The very SDI comments begin at 25:00 or so.

The visionary SDI (Strategic Defense Initiative) speech was arguably the most consequential presidential speech in the modern U.S. history. I am somewhat impressed by the depth of the technical arguments that Reagan offered.

In July 1979, Reagan visited some defense folks in Colorado and they showed him that the Mutually Assured Destruction doctrine was the only possible conclusion. Ronald Reagan couldn't accept such an attitude and the speech above symbolized what he wanted to do to protect civilians against Soviet missile attacks arriving through outer space – and to change the doctrine.




I was 10 years old, I spoke no English, and today was actually the first time I listened to the speech above. But I remember that during a gym class, when I was a 3rd or 4th grader at the 21st Elementary School in Pilsen with extended language education (Russian, in my case then), we suddenly had to listen to a bizarre, scary speech on the school radio sometime in 1983 or 1984.

We were told that the international situation had worsened a lot and that a war could be imminent. Of course, we were told about the imperialist warmongers all the time but this was the only time I heard an announcement fully dedicated to a possibly looming war.




I have never reconstructed the date of that bizarre announcement or the reason behind it. Now, it seems plausible that Reagan's speech was what sparked the school radio announcement. Some commies at our school could have gotten anxious that the American imperialists could get really strong now and that it was necessary to step up the war preparations and war rhetoric (although we never heard anything as pro-war as the North Korean propaganda we observe these days: the official propaganda would always paint us as the "camp of peace" while the capitalist world were the "warmongers").

At any rate, this was the impact of Reagan's speech on the Soviet politicians. The arms race escalated and effectively led to the surrender of the Soviet Union, which overspent on the arms race. This caused some problems in the economy, which helped Gorbachev come to power and ultimately terminate the totalitarian Cold War era in the Soviet Union – and, indirectly, in the whole Soviet bloc.

Many people – especially left-wingers – have been trying to ridicule the SDI. In 1987, the American Physical Society joined these critics and questioned whether the SDI is allowed by the laws of physics. But it's clear that "something like that" may be immensely useful and nowadays, similar technologies belong to responsible defense strategists' standard toolkit. The critics usually apply excessively high standards when they evaluate the SDI. They say that because the technology can't be perfectly reliable under all circumstances, it's useless. But nothing in the real world is perfectly reliable, yet we still use many things and they are useful.

Also, the critics who said that Reagan would effectively revive an "offensive mode" of the arms race have ultimately been proved wrong. SDI is clearly a defense technology and while it temporarily led the Soviets to be even more offensive in their strategic planning, this had to collapse and it did collapse, leading the world to the end of the Cold War. I would summarize the U.S. critics' motivation by saying that the real reason why most of them were annoyed was that they wanted the Soviet Union to prevail and Reagan's plan made that outcome less likely. They were commies. In fact, Obama's administration is the first Democratic administration after Reagan that accepted that SDI is a good idea. Secretary of Defense Chuck Hagel proposed an increase in the number of GBIs (ground-based interceptors) on Friday.

Before the SDI plans managed to undermine the Soviet empire, the CIA played an effective misinformation game. The Soviets spent lots of money on similar anti-missile systems, too. X-rays were planned to be the defensive bullets. Most of these devices remained on paper but the implications of those papers were damn tangible and damn far-reaching.

Saturday 23 March 2013

Wernher von Braun: 101st birthday

Today, we celebrate the birthday of three mathematicians who have heavily influenced physics: Pierre-Simon Laplace, Amalie Emmy Noether, and Ludvig Faddeev. But because I posted their biographies four years ago (click on the previous sentence), I won't do so again.

Instead, let me mention that six years ago, set theorist Paul Cohen died. He is the man who proved that the axiom of choice cannot be proved from the Zermelo-Fraenkel (most popular) set theory axioms – Gödel had shown earlier that it cannot be disproved, either – and who repeated the same achievement with the continuum hypothesis.

So it's up to your belief and aesthetic preferences whether, for every infinite collection of pairwise disjoint nonempty sets, there always exists a set whose intersection with each of them has exactly one element; and it's up to your taste whether there exists a set with more elements than the set of integers \(\ZZ\) but fewer elements than the set of real numbers \(\RR\). Thanks for that freedom, Prof Cohen.
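For the record, the two statements may be written down as follows – these are the standard formulations, the first one being the choice-function form equivalent to the "one element from each disjoint set" form above:\[

\eq{
\textbf{AC:}\quad & \forall\,\mathcal F\ \big(\varnothing\notin\mathcal F \Rightarrow \exists\, f:\mathcal F\to \textstyle\bigcup\mathcal F\ \ \forall A\in\mathcal F:\ f(A)\in A\big),\\
\textbf{CH:}\quad & \neg\,\exists\, S:\ \aleph_0 < |S| < 2^{\aleph_0}.
}

\]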

Incidentally, I am agnostic about the continuum hypothesis because the very idea that the real numbers aren't countable seems a bit artificial to me: the proof uses totally unnatural maps one can't really encounter. For the same reason, I prefer to use the freedom won by Paul Cohen to favor a picture in which the axiom of choice is invalid. The set picked by the axiom of choice, if one believes it, is non-canonical and unnatural; and if you assume AC is false, you earn the extra freedom to assume that all subsets of an interval are Lebesgue measurable – a reassuring situation for a physicist, I think, even though mathematicians must then reorganize (and extend) the proofs of some theorems that remain valid anyway.

But this blog entry is dedicated to someone else.




When Dr Sheldon Cooper's sister mentioned that she was telling people her brother was a rocket scientist, Dr Sheldon Cooper remarked that it was so humiliating: she could have equally described her brother as a toll taker at the Golden Gate Bridge. ;-) This is a theoretical physics blog so we don't run CVs of ordinary rocket scientists here. But according to NASA, Wernher von Braun was the greatest rocket scientist ever, so he gets an exception.




Wernher Magnus Maximilian, Freiherr von Braun was born to an aristocratic family 101 years ago, on March 23rd, 1912. Being the equivalent of a Baron (Freiherr von Braun) is a special status and I find it cute, too. On the other hand, I refuse to learn the family trees of both parents because we would turn into a tabloid magazine here. ;-)

For the same reason, I won't discuss the aristocrat Maria Luise whom he married and with whom he had three children, all doing fine even though the wife was his cousin on his mother's side. OK, let's stop this stuff.



He learned to play the cello and the piano, could play Beethoven and Bach from memory, and wanted to become a composer. He actually composed several playful pieces (probably more impressive ones than your humble correspondent's) but I wasn't able to find them. If you know an audio or video link to them, please don't keep it a secret!

When he was a 13-year-old boy in a boarding school, he sucked at maths and physics. But because space fascinated him, especially once he acquired Hermann Oberth's book on rockets (von Braun remained a huge fan of the author), he decided he had to learn maths and physics, too. Wait two sentences to see how this plan worked out. When he was 18, he joined the Spaceflight Society in Berlin. He concluded that major technological advances would be needed. He picked the University of Berlin to learn more about physics, astronomy, and chemistry and got a physics PhD in 1934. This degree is what qualifies this music composer and rocket scientist to have a CV on TRF. :-)

When von Braun was in his early 20s, he told the cosmic-ray physicist and balloon explorer Auguste Piccard after a lecture that he was planning to go to the Moon soon. Piccard responded with some encouragement.

He had mixed-to-positive opinions about Hitler's regime that drifted towards the positive side when Hitler was achieving some easy successes. But in 1937, as the technical director of the Army Rocket Center, he was officially requested to join the party and he did so, much like the majority of people under Nazism and communism who were told the same thing. His work was simply more important to him than some subtle moral considerations. It's likely that paying the membership fees was the only "politics" he ever really did in that party.

Von Braun did well in the non-democratic setup. In 1940, he also joined the General [Allgemeine] SS, which wasn't an armed unit. If you believe his memoirs, what mattered to him was the active attitude of the General SS towards rocket research. He appeared in a picture with Himmler in a uniform, too. But it wasn't all rosy. He was arrested and accused of being a communist sympathizer in 1944, too. Released. Things were especially dangerous for him because he was a skillful pilot, too.

In his job, he would lead the development of jet-assisted takeoffs, the A-4 ballistic missiles, and some anti-aircraft missiles. The first combat A-4 was used against Britain in 1944. It was renamed V-2 [Fow-tsvai]; if you're hoping that it stood simply for "von Braun", I must disappoint you. It stood for the simple word "Vergeltungswaffe" (retaliation weapon). He liked the V-2 because it was so good at going to space; the only bug was that it landed on the wrong planet (Great Britain). Mort Sahl invented the caricature of von Braun saying "I aim at the stars, but sometimes I hit London". :-)

In Spring 1945, he knew what was going on so he gave his team a research project, to find out to whom they should surrender. And be sure that the Soviets didn't win this tender. ;-) After a complicated arm fracture from a car accident, he left the hospital early. He also hid some secret documents in an abandoned mine. The surrender wasn't hard. The hard work was done by von Braun's brother who approached a soldier from the 44th Infantry Division (in the Northeastern corner of current Germany) on a bike and told him in broken English: My name is Magnus von Braun. My brother invented the V-2. We want to surrender.

Deal. Easy.

For four months in 1945, von Braun was already in the U.S., secretly, thanks to a plan by a predecessor of John Kerry. The American politicians wanted him, so they fabricated von Braun's CV. He was suddenly as pure as a lily. He had never been a member of the party, and so on. This dishonest trick was justified by the ends. Von Braun started the U.S. space program in 1958. Still, he had to watch the Soviets stay a year or several years ahead of America almost all the time.



Von Braun explains "man in space" in a 1959 Walt Disney film. Be ready for dropping pressure in a spacesuit and the body's annoyed reactions, people undergoing an acceleration of \(35g\), and a four-stage rocket prototype (the upper stage seems to be a space shuttle of a sort). He said a practical rocket for people could be ready in 10 years; the Soviets were clearly faster (Gagarin was up there in just 2 years) but his estimate/plan was otherwise very realistic, we know today. If you find von Braun's German English funny, realize that English German is funny, too.

Wernher von Braun is the father – or at least the main co-father among the influential people – of many astronaut-related concepts we take for granted these days. The liquid-fuel rockets have been mentioned. But he popularized or co-invented the manned space station and the rotating wheel-shaped station that creates artificial gravity, he sketched a permanent lunar base, and he drafted preliminary plans for a manned mission to Mars. During the coldest years of the Cold War – they were so cold that the war was almost hot – he would think about orbital warfare.
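As a side remark on the rotating wheel: the artificial gravity is just the centripetal acceleration \(a=\omega^2 r\). Here is a tiny script of mine – the numbers are illustrative, not von Braun's actual design parameters – converting a wheel radius and a desired acceleration into a rotation rate:

```python
import math

def rpm_for_gravity(radius_m, accel_in_g=1.0):
    """Rotation rate in revolutions per minute that produces the desired
    centripetal acceleration (in units of Earth's g) at the rim of the wheel."""
    g = 9.81                                      # m/s^2
    omega = math.sqrt(accel_in_g * g / radius_m)  # from a = omega^2 * r
    return omega * 60.0 / (2.0 * math.pi)

print(rpm_for_gravity(38.0, 1.0))   # roughly 4.9 rpm for a full g at a 38-meter radius
```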

Much of his later career would copy the history of the U.S. space program including the Apollo program. I won't try to cover that because I am no expert. In 1972, he left NASA and went to a private space research company for a while. He died of pancreatic cancer in 1977.



The grave is quite modest, isn't it? Here's what Psalm 19:1 says.