Friday 31 August 2012

EU: all incandescent light bulbs banned tomorrow

In the article aptly titled "Light-Bulb Ban Casts Shadow over EU Democracy", Der Spiegel reminds us Europeans of an important change that will affect our continent starting tomorrow, Saturday, September 1st, 2012. No, Germany isn't invading Poland the way it did on September 1st, 1939.

But maybe it is...



Starting from Saturday, it will become illegal to import any incandescent light bulbs into the EU or to produce them on the territory of our continental confederation.




Traditional Edison light bulbs remain the optimum-value product for all applications in which an isolated light is turned on for less than an hour a day, and for many other setups.

I hope you don't have to be told that the amount of electricity consumed by a light bulb may be very small; in your restrooms and in many other rooms, you may need many, many years of saved power bills to compensate for the initial investment in a more expensive lighting technology. And am I the only one who considers the spectrum of the black-body-based light bulbs to be more natural and safer? The mercury incorporated into the fluorescent bulbs is simply worrisome – both for individual human health and for the environment.
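To see why the payback time can be so long, here is a minimal sketch of the arithmetic. Every price and usage figure below is an illustrative assumption of mine, not an official number:

    # Rough payback estimate for replacing an incandescent bulb with a CFL.
    # All inputs are illustrative assumptions, not official figures.
    incandescent_watts = 60.0
    cfl_watts = 14.0           # assumed CFL drawing roughly 1/4 of the power
    extra_cost = 4.0           # assumed price premium of the CFL, in dollars
    price_per_kwh = 0.15       # assumed electricity price, in dollars

    hours_per_day = 0.5        # an isolated light used half an hour a day
    kwh_saved_per_year = (incandescent_watts - cfl_watts) / 1000.0 * hours_per_day * 365
    dollars_saved_per_year = kwh_saved_per_year * price_per_kwh   # ~ $1.26 here
    print("years to pay back the premium:", extra_cost / dollars_saved_per_year)
    # With these assumptions, the payback takes ~3 years; at a few minutes
    # of use per day, it stretches to decades.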

Let me mention that I – and my relatives – use various fluorescent and other light bulbs in many places, and there was a period in which I found it cool to buy lots of them and planned to buy the most modern sources of light (such as LEDs), something that didn't materialize. But I returned to my common sense. Most of the time when I press a button to turn a light on or off, it's a classical light bulb. They seem to be sufficiently long-lived but I keep a couple of reserve light bulbs, too.

The Reference Frame actually can't guarantee that if you buy the classical light bulbs via the amazon.com link above, you will receive the package. (Moreover, be careful, the particular light bulb may be designed for a different voltage etc.) The green Nazis in the EU may actually steal your merchandise and show you documents claiming that they have the right to steal things from you – even though they don't actually even have a legitimate democratic right to influence political decisions in the European countries.



Thanks, Mr Edison.

This insane ban – which will be extended to other types of light bulbs in the future (halogen lamps will be banned in 4 years) – is just one of many symptoms of the shortage of democracy at the level of the EU. This afternoon, we won't be able to return freedom and democracy to the old continent. But you may use a few moments of this Friday afternoon to visit a shop with light bulbs and buy a few pieces of Edison's famous invention for your future needs.

Harvard course: 125 students copy a take-home exam

B Chimp Yen has informed me about a mass cheating scandal at Harvard; see e.g. these sources. BBC and other outlets wouldn't tell you the name of the course.



However, a reasonably skilled Internet user needs approximately 1 minute to find out which course it was. Yes, it was "Introduction to Congress" (Government E-1310 23500) taught by Matthew Platt.




The minimum enrollment for the course is 250 students. The actual number was slightly higher, 279. And 125 students, almost one-half of the class, apparently copied a take-home exam from the same source, despite an explicit, easily understandable ban on this method of writing. With this degree of plagiarism, the author of the original text is almost as widely copied as Shakespeare. ;-)

The tuition for this course is $1,045 for undergraduates and $2,000 for graduates.

If you interpret this Harvard course as a factory producing future U.S. Congressmen, U.S. Congresswomen, and other U.S. Congresspersons, you may easily estimate that about 1/2 of the members of the U.S. Congress at every moment are crooks.

I am not surprised by this discovery, especially in the case of undergrads. As I came to understand during the years I taught at that school, the typical Harvard undergraduates don't strikingly differ from average college students of the same age. Much of their above-the-average success in later life is due to the Harvard degree itself (and the contacts with similarly influential people they could establish), not to vastly greater skills. (When it comes to physics, you only start to see dramatically above-the-average skills if you look at the grad students, who have passed a much stricter filter.) They're under more significant pressure to be excellent. And in some cases, "easy ways out" seem to be tolerated if not supported. And the students have ordinary human passions and hobbies much like other young people.

Well, I would bet that this mass plagiarism wasn't found by the instructor himself. Why? Simply because the students have to decide in advance whether it would be acceptable to copy the take-home exam. The personality of the instructor is the key piece of information in such decisions. If one-half of the students do such a thing, it shows that there is a widespread belief that the instructor would take it easy if he figured out what was going on.

[When I read some articles more carefully, I learned that the scandal was indeed found by a teaching assistant, not the instructor, in May.]

And yes, I just can't get rid of the feeling that the instructor is similar to Cornel West, another black professor who was a major source of grade inflation and pro-lazy-student populism at Harvard (aside from the rap music he helped to record). Recall that when ex-Harvard President Larry Summers dared to suggest that Cornel West should have focused on the quality of his scholarly activities, Cornel West got extremely pis*ed off, moved to Princeton, and continued his nuclear war against Larry Summers for years.
Update, September 2nd: The New York Times reveals that my guess was 100% correct. The cheating students said that Prof Platt had promised to give away 120 A's – and that it was even OK not to attend the lectures and discussion sessions.
In a graduate course open to undergraduates that I taught, two students decided to cheat in a simpler way than copying from a classmate. They just waited until my official homework solutions were posted on the web, copied them (changing the notation only by such modest "mutations" that the similarity couldn't have been coincidental), and submitted them with a lame excuse for being a little bit late. Those two students did it repeatedly – I forget the exact number but they may have done it throughout most of the course.

Your humble correspondent and his teaching assistant easily found out what was going on, at least in these two cases, and we were unlucky: one of the students was a female undergraduate and one of them was an ethnic Indian male. (No, I really don't think an undergrad should get an A just because he or she dares to register for a grad course. He or she may try but the same rules must apply, otherwise such "brave acts" wouldn't be brave at all and they would really become tools to get easy credit and good grades for free.) My teaching assistant was actually the main driving force in our activities to make sure that these students would be punished. Not much happened at the end. I was just afraid to push for justice too much because it was already during (or after?) the anti-Summers politically correct witch hunts and I didn't want to multiply my problems – to experience even more accusations that I was a sexist, racist, and all these outrageous politically correct labels that make the Ivy League environment pretty much insufferable for honest conservatives.

Harvard – with its unregulated terror by the feminist sluts and the professional blacks – just didn't provide me with the basic conditions that are necessary to do the teaching work well, and I am a realist, not a person who excessively enjoys fighting windmills.

While our discovery of the cheating students was totally impartial, I am confident that the composition of students who do such things and instructors who tolerate it or indirectly support it isn't sex-blind or color-blind. I would love to see the detailed composition of the students registered for the course and those who cheated and their sex, ethnicity, race, and other information. I guess that we won't learn such things, will we?

Well, if I am not shown any cold hard data, I will continue to believe the obvious hypothesis that females and students of color – aside from other would-be weak groups that are systematically given advantages by the suffocating politically correct racist and sexist atmosphere on the campuses – are probably significantly overrepresented among the students who cheated, and that the race of the instructor isn't quite a coincidence, either. It has to be so simply because it's much harder to punish these groups that are vastly more protected by the "establishment", so they can afford to do many things that others can't. My own experience has taught me a lot about the inner workings of these asymmetries.

Needless to say, "black agenda in politics" is one of the instructor's three major "research topics" so I would dare to suggest that his being black is a significant part of the reason why he's on the Harvard faculty in the first place. And when it comes to the would-be tough statement by the current Harvard President, Drew Gilpin Faust, I think that her words are hypocritical, too. She hasn't done 5% of what Summers did to fight similar trends.

Thursday 30 August 2012

German offshore wind turbines: hiding all the disadvantages

Most of the otherwise rational German nation was scared by the Fukushima non-disaster and decided to close all of its nuclear power plants by 2020 or so.

Germany is also at the top of the fight against the CO2 emissions so both major sources of electricity seem to be doomed. So far, the doom is hypothetical because the policies have actually led to an expansion of coal power plants. But we may imagine that Germany gets really serious about its drinking of the Green Kool-Aid.



Yesterday, the German government approved a plan to speed up the construction of offshore (i.e. on-the-sea) wind power plants, see e.g. DowJones/WSJ. What does the decision mean? Is it wise?




First, let me shock some full-fledged skeptical readers of this blog. I actually do think that in a few decades or a century, we will be surrounded by things like offshore wind turbines and solar panels on cars and roofs and all these individual sources of "renewable" energy will be connected to a grid, perhaps wirelessly, and there will be a sophisticated system that will calculate how much money is paid to the energy producers depending on the output and the demand (and expected demand).

To amplify the shock, I do think that it's very intriguing to use energy that doesn't really depend on any "commodities". If I could buy an energy system that would free me from energy bills for 50 years or so, and that would allow me to store the energy for some time, I would pay $3,000 for each source of 100 W of average power.

The only major problem I have with the renewable energy hype is that renewable energy is simply not quite competitive yet. It makes no sense to subsidize it. It makes no sense to force people to use and support energy sources that are economically inferior at this moment. It's also unpleasant that the wind turbines kill birds and look like needles that want to pierce your skin – I don't like to look at sharp things – but these disadvantages are secondary. It's the price that matters.

Imagine that I do agree that in 50 or 100 years, the world will be flooded by similar sources of energy and that, at the same time, they will start to be economically competitive, perhaps because the price of fossil fuels will have increased. Doesn't it make sense to build them already now, even though they're not competitive yet? My answer is a resounding No. Because it only takes a few years to build such wind farms, it only makes sense to build them a few years before the moment when they become economical. By waiting for this moment, you save on the electricity itself; and you save once more because wind power (or solar power) technology is getting more efficient, advanced, and cheaper, so the longer you wait, the better product you get.
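A toy calculation makes the logic explicit. Everything below – the capital cost, the learning rate, the size of the yearly loss – is an assumed round number for illustration, not a forecast:

    # Toy model: build an uneconomical wind farm now, or wait for the
    # technology to catch up? All inputs are assumptions for illustration.
    capex_today = 3000.0       # assumed cost per kW installed today, dollars
    learning_rate = 0.05       # assumed: hardware gets 5% cheaper every year
    annual_loss = 100.0        # assumed yearly loss per kW while uneconomical

    def total_cost(build_year, years_uneconomical=10):
        """Cost per kW if construction starts in the given year."""
        capex = capex_today * (1 - learning_rate) ** build_year
        loss_years = max(0, years_uneconomical - build_year)
        return capex + annual_loss * loss_years

    for year in (0, 5, 10):
        print(year, round(total_cost(year)))   # 4000, 2821, 1796
    # With these assumptions, waiting wins twice: the hardware is ~40%
    # cheaper after 10 years and the uneconomical years are skipped entirely.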

I think it's almost guaranteed – because of the newly found reserves of fossil fuels – that this moment won't arrive in the next 50 years. So it's nonsensical to build many such wind farms today. It is actually surprising that Germany isn't able to figure out these things. West Germany experienced the post-war boom partly because the factories were bombed out during the war which forced West Germany – and which allowed West Germany – to install the most modern machines and technologies after the war. The apparent disadvantage was soon transformed into an advantage – one of the sources of the German competitive edge. It makes sense to wait.

What did the German government propose to do yesterday? The first sentence of the Dow Jones/WSJ article has the answer:
The German government Wednesday proposed to speed the expansion of offshore wind farms by limiting the liability of power network operators for delays and outages of grid connections.
That's pretty bad! The power network operators won't be fully responsible for something they should be primarily responsible for, e.g. outages of grid connections and delays. It's an obviously ideologically driven distortion of the free markets because for every consumer, the risk of outages actually plays a role when she is deciding about the best way to cover her electricity needs. Now, the power network operators won't be fully responsible for such things. They may lower the quality of their services without too many worries. They're just not fully responsible for their work. It's apparently more important that they're participating in an ideologically driven and economically misguided program to shift to renewable energy sources.

But if producers are not held responsible for their work, the efficiency of the market and the consumers' well-being always suffer. Such distortions of the markets may be harmful in many previously invisible ways. If the program to prefer wind or solar energy were really "inevitable", it would be cleanest to blindly pay a fixed amount of subsidies for each renewable kilowatt-hour and avoid all other random ad hoc interventions.

Now, unless e.g. my homeland becomes a protectorate of Germany again and unless the European unification really starts to resemble the "unification" that Europe experienced under the 1933-1945 German Chancellor, Czechia (and others) will of course not be constrained by the "limited liabilities" imposed inside Germany. If the German offshore wind experiment leads to increased irregularities of the power supply or even blackouts, and if we have to work hard or install new equipment to save our grid, we will of course want to be nicely paid for that. If we're not sufficiently paid, we will just cut the German grid off – and be sure that this would be a problem for Germany already today; their dependence on the neighbors will only get worse as they increase the percentage of wind energy in their mix.

So Germany may change the liability internally and impose unnaturally distorted rules of the game but its operators will still have the same responsibilities externally and Germany will still have to pay for things that cause problems, in one way or another.

The Dow Jones/WSJ article estimates that electricity prices in Germany will grow by something like 10 percent from the current value around 20-25 eurocents per kWh during the next 8 years. Of course, the price will depend on many other factors as well, so this surely can't be trusted as the final result. I think that a 10% increase wouldn't be catastrophic. You pay just a few dozen dollars a month for your electric bills; but you shouldn't forget that many other products you're buying also depend on cheap electricity and they could get more expensive, too. Still, a 10% jump isn't catastrophic (although if 10% applied to all prices, it would be horrible – a one-time 10% burst of inflation). On the other hand, this 10% increase only corresponds to a minor increase in the role of wind energy. If Germany wanted to get all of its energy from wind farms, the price would be vastly higher.

Suggestions that nations should switch to "renewable" energy sources are omnipresent but it's actually extremely hard to find cold hard numbers about the expenses you have to pay to construct a 1,000 MW source of wind energy, the lifetime, and so on. It seems clear to me that these numbers are not being publicized simply because these cold hard numbers are not flattering for the "renewable" energy sources. People are maximally encouraged not to think or to think irrationally and ideologically. It shouldn't matter to you how much you pay.

Well, it surely matters to me and I will fight with all the weapons I can find against those super-arrogant individuals who would love to tell me what should matter to me. In a free society, each of us is free to sort his or her priorities. In a free society, everyone is allowed to assign any "subjective price" to various market products. Above a certain price, we wouldn't buy or use electricity for certain things. Most of us think that the market price of Al Gore is about $200 because the price of pork is around $1 per pound. It's utterly scandalous if someone suggests that we should believe that the price is any other number.

So I think the only acceptable attitude of the government is to acknowledge all the advantages and disadvantages of individual sources of energy and the companies' and individual consumers' right to quantify these advantages and disadvantages. Subsidies for "ideologically preferred" sources of energy are as pernicious as the attempts to take the responsibility for the disadvantages away from the producers of energy and the grid operators. Those things shouldn't happen in an ideal world because they're pretty much guaranteed to make the world less efficient, to make the people less wealthy and less happy.

The Czech trade minister wants to reduce renewable energy subsidies and replace them with type-of-source-blind insurance against future changes of the electricity prices, which is important or useful for any long-term economic planning of the construction of new energy sources (including nuclear power plants) but which is fiscally neutral for the state budget on average. That sounds sensible. After all, we're forced to think about these things; just in our medium-size country, the 2012 subsidies for renewable energy alone are going to stand at $2 billion. That's one-third of our budget deficit or 1% of our GDP!

And the renewable energy is still just a few percent of our total energy production.

We may express those $2 billion in renewable subsidies in one more useful way: it's 3% of our $60 billion annual state budget. So 3% of our budget has to cover subsidies for renewable energy sources that only represent 10% of our electricity production. Relative to other nations, 10% is a very large number, so in this hysteria we joined the "elite club" of the most insane "leaders". However, you must realize that much of this 10% figure (8,500 GWh in 2011) comes from hydroelectricity (2,000 GWh) – which is also counted as renewable – whose capacity can't be increased much anymore (the hydroelectricity production has been constant for at least 7 years) and which is probably not heavily subsidized, anyway. A comparable amount came from "biomass"; this is no longer fashionable so it won't increase.

If you count only solar (2,200 GWh) and wind power plants (0.6 GWh), they produce 3% of the Czech electricity, despite our being co-leaders in that discipline, and these two sources probably grabbed almost all the subsidies. To mask the price increases from the wind and solar energy which represent 3% of the electricity production, they pretty much swallow 3% of the state budget. It's not hard to see that if we wanted to have 100% of the energy coming from renewable sources, the subsidies would require approximately 100% of the state budget. In a green world, say good-bye to state-subsidized education, healthcare, pensions, defense, and other obsolete details. ;-)
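The scaling in the previous paragraph is easy to check explicitly; the sketch below just redoes the arithmetic with the figures quoted above, including the same deliberately crude linear extrapolation:

    # Back-of-the-envelope check of the Czech subsidy arithmetic above.
    subsidies_usd = 2e9          # 2012 renewable subsidies
    state_budget_usd = 60e9      # annual state budget
    solar_wind_share = 0.03      # solar+wind share of electricity production

    budget_share = subsidies_usd / state_budget_usd
    print("subsidies as a share of the budget:", budget_share)   # ~3%
    # The same linear extrapolation as in the text: if ~3% of production
    # eats ~3% of the budget, 100% of production eats ~100% of the budget.
    print("budget share at 100% solar+wind:", budget_share / solar_wind_share)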

It's just an order-of-magnitude estimate showing that the transition to renewable sources of energy may eat pretty much all of your nation's wealth – without any improvement of the quality of your life or your environment or anything else that matters whatsoever. If you only switch a few percent of your production to renewable energy sources, it will only eat a few percent of your wealth. But it's still stupid to waste the money in this way, even though it's less stupid than wasting 100% of your wealth. These are huge amounts of money. The governments of the world feel that they have the right to waste tens of billions of dollars on completely worthless rubbish as long as it has the right ideological color. That's one of the reasons why citizens of democratic countries must very carefully watch the stupid and arrogant policies that their governments are adopting.

London built a new paralympic LHC collider

The opening ceremony of the 2012 Paralympic Games in London was apparently a cool event.

At the beginning of this 4-minute preview, you may listen to a 70-year-old athlete, Dr Stephen Hawking, who also talks about another hobby of his, the search for a theory of everything:



His audience included Her Majesty the Queen, Boris the Mayor, and 62,000 similar closet physics fans.

But Dr Stephen Hawking is far from being the last piece of physics you could have found in the ceremony.




Well, I doubt that too many TRF readers will find 4 hours to watch the whole ceremony:



Hawking starts to talk at 7:45 in the video above and then again at 15:05 and then again at 3:24:49 (about the parameters of the LHC). At 16:00 or so, you may see a magnified Brief History of Time, a king size Universal Declaration of Human Rights, and an oversized Newton's apple, too. A diverse company. :-)

But the first, local-choreographer-dominated 20 minutes are interesting enough because the stadium reconstructs the July 2012 discovery of the Higgs boson.

What tools do you need to reproduce the Large Hadron Collider at a stadium and find the Higgs boson? It turns out that a Czech patent specifically designed for opening Olympic ceremonies, namely the umbrella, is the key.

Why?



This picture of the Higgs boson discovery appears around 17:35 of the video above.

The official CERN public website explains that a spherical array of silver umbrellas is physically equivalent to the Higgs boson; it's the first time I have learned about this new kind of duality. (The duality is only exact if the 2007 song by Rihanna about the umbrella \(\ell_R^+ \ell_R^- \ell_R^+\ell_R^-\) plays at the same moment; yes, Rihanna is an expert in the production of right-handed leptons.)

There was also a giant umbrella in the middle representing the Big Bang; it became an interstellar cloud of dust around 18:35, as the accompanying dark-age songs and narrators' comments clarified. They chose quotes from Isaac Newton's Principia as lyrics for the song, a good choice. ;-)

If they hadn't forgotten about another Czech patent, the wellies, they could have discovered supersymmetry, too. Too bad: they did forget. ;-)

It seems as though athletes need a physical handicap in order to learn some cutting-edge physics.

Wednesday 29 August 2012

Moonlanding was staged, 74% of climate alarmists say

Update: poll results

You may still vote but analogously to the Lewandowsky poll, I decided that a sufficient number of votes have been submitted and we may reveal the final results:



You see that out of the 219 votes, 84% of the people are climate skeptics. Among the climate skeptics, 3% are conspiracy theorists. Among the people alarmed by climate change, 74% are conspiracy theorists. ;-)

You get the point, don't you?

Just to be sure, I am not saying that the results of this "poll" should be taken seriously. And I hope you realize the main reason why they shouldn't. However, I surely do claim that the "poll" by Lewandowsky et al. was exactly as "serious" as this one.
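If you want to turn the four group counts above into a single number, the natural choice is the phi coefficient, i.e. the Pearson correlation of two binary variables. The sketch below reconstructs approximate counts from the rounded percentages in the update, so the numbers are approximate rather than the exact votes:

    # Phi coefficient (Pearson correlation of two yes/no variables) from a
    # 2x2 table. Counts are reconstructed from the rounded percentages above
    # (219 votes, 84% skeptics, 3% resp. 74% conspiracists), so approximate.
    import math

    a = 6     # skeptic AND believes a majority of the conspiracy theories
    b = 178   # skeptic, doesn't
    c = 26    # alarmed AND believes a majority of the conspiracy theories
    d = 9     # alarmed, doesn't

    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    print("phi =", round(phi, 2))   # ~ -0.74: a strong anti-correlation here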
Off-topic, breaking news: TRF is getting hundreds of hits from harvard.edu to this 2010 TRF article on the elixir of youth, well, a fraudulent Harvard stem cell research claiming to have found one. Ms Shane Mayack was finally found guilty of misconduct today: Boston.com, Nature News
Original article posted 15 hours ago:




Where can you find out whether climate skeptics are conspiracy theorists?
A poll is included in this blog entry



A blog at the Guardian and many other sources have promoted a "research paper" by an Australian professorial fellow (whatever that means) called Stephan Lewandowsky (plus co-authors, Dr Gignac and Dr Oberauer) named
NASA faked the moon landing  — Therefore (Climate) Science is a Hoax:

Motivated rejection of science (PDF full, in press, Psychological Science)
in which the authors inform us about their "scientific-sounding" poll of 1,100 people that has "established" a strong correlation between climate skepticism and conspiracy theories such as
  1. "HIV doesn't cause AIDS" and 
  2. "moonlanding was shot in Nevada" and 
  3. "Martin Luther King was killed by CIA" and 
  4. "[first-hand] smoking doesn't increase the odds for cancer" and
  5. "the U.S. government was behind 9/11",
aside from many others. Mr Lewandowsky has "therefore" explained the existence of these conspiracy theories by the belief in free markets (no kidding) and reclassified climate skepticism as the newest conspiracy theory.

If it's not a strong enough cup of tea for you yet, you must look at an "additional detail" that was pointed out to me in an e-mail from Joanne Nova; see also her blog:
Lewandowsky – Shows “skeptics” are nutters by asking alarmists to fill out survey
The question is: Where did Mr Lewandowsky et al. find the conspiracy believers that were so important for them to establish the correlation with climate skepticism? The shocking answer is that while those 1,100 participants were visitors of blogs, none of the skeptical blogs you know – Anthony Watts, Jeff Id, Joanne Nova, your humble correspondent, Tom Nelson, CarbonSense, GWPF, and others – participated in the "scientific survey".

Instead, the blogs exploited in this "scientific survey" were hardcore lunatic alarmist blogs such as Deltoid, Grant Tamino Foster, Scott Mandia, and 3-5 others. To find out whether climate skeptics are paranoid, Mr Lewandowsky asked the visitors of the alarmist blogs whether they are paranoid! ;-)

I mean: Why would anyone even pay attention to such "scientific polls" which are clearly not scientific? To show what I mean, let me include a poll here, too. You are kindly asked to decide whether you are a climate skeptic or not; and to count how many of the "conspiracy theories" above you believe. Then you should vote in the poll.

The poll could measure individual correlation coefficients between individual conspiracy theories and individual tenets of the global warming doctrine as well, but my point is that even the overall correlation coefficient is untrustworthy, so you will only be asked whether you believe a majority of the 5 conspiracy theories above or not. Consequently, you must place yourself into one of four groups. Here is the poll:

Are you a skeptic and/or conspiracy theorist?

[The interactive poll widget, hosted by pollcode.com, appeared here.]

If you need the URL for this poll only, try this one.

Just like in the original experiment, I may assure you that the police won't investigate whether your answers are accurate. BTW if TRF were a pure climate (skeptical) blog, I would know what the results would approximately look like. But because there are clearly non-paranoid alarmist visitors to this blog, I am not so sure! ;-)

Incidentally, the term "conspiracy theory" is somewhat vague. It's a label that is usually used to dismiss a hypothesis. But some hypotheses that some people call "conspiracy theories" may still turn out to be right. Just to be sure, I don't believe any of the "conspiracy theories" in Lewandowsky's paper. One could be more inclusive and count the disbelief in cosmic inflation or string theory as a conspiracy theory, too. It's of course a matter of definition; whether something is labeled a conspiracy theory doesn't prove that its content is right or wrong.

BTW I realize that there are people in all four categories. I also think that the belief in a staged moonlanding is a much better criterion of insanity than climate panic or climate skepticism. Because the folks who matter are those who don't believe that the moonlanding was staged, it is pure demagogy to link non-paranoid climate skeptics to the paranoid climate skeptics, much like it would be pure demagogy to link non-paranoid alarmists to the paranoid ones.

A funny addition.

Willie Soon has brought to my attention a 2009 letter to the editor that he and five co-authors sent as a reply to claims that climate skepticism is analogous to the belief that the moonlanding was staged. They make the case that there's no consensus about dangerous man-made climate change and mention this beautiful "detail":
One of us, Dr. Harrison Schmitt, actually stood on the moon, drilled holes, collected moon rocks and has since returned to Earth. Man’s landing on the moon is real.
Now, the only way for the climate alarmists to question the credentials of Dr Schmitt et al. to discuss whether climate skepticism is on par with the moonlanding conspiracy theories is to say: But Mr Schmitt, you're a denier, so your trip to the Moon was just a video trick orchestrated and paid for by the Big Oil industry. ;-) We only trust real NASA experts such as Mr James Hansen who are really living on the Moon.

The alarmists are saying very similar things – preposterous, manifestly indefensible, ideologically motivated ad hominem attacks on inconvenient people – all the time.

Conformal Standard Model and the second \(325\GeV\) Higgs boson

Does Peter Higgs (or God) have a secretive brother?

Krzysztof Meissner and Hermann Nicolai released a short preprint
\(325\GeV\) scalar resonance seen at CDF?
in which they use a strange accumulation of four events of the type\[

p\bar p \to \ell^+ \ell^- \ell^+ \ell^-

\] observed by CDF, a detector at the defunct Tevatron, that happen to have the invariant mass \(m=325\GeV\) within the detector resolution, to defend some interesting models in particle physics. The probability that four events of this kind are clumped this accurately is (according to the Standard Model and some simple statistical considerations) smaller than 1 in 10,000. I would still bet it's a fluctuation. But it is unlikely enough for us not to consider the authors of papers about this bump to be leaves blown around by a gentle wind.
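Where could a number like "1 in 10,000" come from? A toy Poisson estimate shows the mechanics; the expected count per bin and the number of bins below are my illustrative assumptions, not CDF's actual background model:

    # How unlikely is it that 4 events pile up in one resolution-sized
    # invariant-mass bin? A toy Poisson estimate; mu and n_bins are assumed
    # round numbers, not CDF's actual analysis.
    import math

    mu = 0.2        # assumed expected Standard Model events per bin
    n_bins = 50     # assumed number of independent bins searched

    def poisson_tail(mu, k):
        """P(N >= k) for a Poisson variable with mean mu."""
        return 1 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

    p_one_bin = poisson_tail(mu, 4)
    print("one fixed bin:", p_one_bin)                  # ~6e-5
    print("anywhere:", 1 - (1 - p_one_bin) ** n_bins)   # look-elsewhere effect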

Three previous TRF articles have discussed possible signals near \(325\GeV\): this very four-lepton signal; a different signal at D0 indicating a different particle, a new top-like quark; and some deficits near that mass at the LHC: yes, a convincing confirmation of the \(325\GeV\) bump by the European collider doesn't seem to exist.




At the end of their new article, Meissner and Nicolai mention that this bump, if real, could be the heavier new Higgs boson in the Minimal Supersymmetric Standard Model in which, I remind you, the God particle has five faces.

This is the possibility that every particle phenomenologist is aware of but most of the new Polish-German article is actually dedicated to a different, non-supersymmetric explanation. They exploited the excess to promote their old, 2006 idea about the Conformal Standard Model:
Conformal Symmetry and the Standard Model
In this interesting model, one doubles the number of Higgs bosons, but the justification is different from the supersymmetric one. And they conclude that this Conformal Standard Model stabilizes the hierarchy because it is classically conformally invariant; and it may remain consistent all the way up to the Planck scale, too.

That paper is designed to explain the unbearable lightness of the Higgs' being in an unusual, yet seemingly very natural way: things are light because, in an approximation, they're massless. Their masslessness is a consequence of the conformal symmetry and this symmetry should be imposed at the tree level. What does it mean? Which terms violate the conformal invariance at the classical level?

Well, it's easy to answer this question. The only conformally non-invariant terms are those that have dimensionful (i.e. not dimensionless) coefficients, and in the Standard Model, the only such classical term is the \(-\mu^2 h^2\) quadratic term for the Higgs field. In the Standard Model, this term is the source of the low-energy, electroweak scale, and all other masses such as the Higgs mass, Z-boson mass, W-boson mass, and top quark mass (and, with some suppression, other fermion masses) are controlled by this quadratic term.
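Explicitly, in the usual conventions, the classical Higgs potential is\[

V(h) = -\mu^2\, |h|^2 + \lambda\, |h|^4,

\] where \(\lambda\) is dimensionless while \(\mu\) carries the dimension of mass. Deleting the \(\mu^2\) term – the only dimensionful coefficient – is therefore exactly what classical conformal invariance demands.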

This quadratic term is also what makes the Standard Model unnatural.

Obviously, particle physics wouldn't work if you just erased this term: the electroweak symmetry couldn't be broken at all. So they have to emulate its functions – well, more precisely, they have to prove that Nature emulates its functions – differently. "Differently" means that the electroweak symmetry is broken by quantum (i.e. virtual loop) effects: they need a "radiative" (="by quantum loops") electroweak symmetry breaking.

This idea has been around for a long time because of the work of two rather well-known men, Coleman and Weinberg. However, in the context of the Standard Model, it's been a failing idea. The most obvious bug is that the radiatively generated quadratic term still had to be rather small compared to the quartic one – because it's just a "quantum correction" – which means that the Coleman-Weinberg model predicted a Higgs boson much lighter than the Z-boson, about \(10\GeV\). That's too bad because we have known that the Higgs mass is \(126\GeV\) since Independence Day and we have known for more than a decade that the mass exceeds \(100\GeV\). If you tried to achieve this heavy a Higgs boson in the Coleman-Weinberg framework, you would need such a strong quartic self-interaction for the Higgs that it would die of the Landau pole disease right around the corner, within the energies that the LHC is already probing.

Meissner and Nicolai chose to incorporate right-handed neutrinos with both Dirac (shared with left-handed neutrinos) and Majorana mass terms and the seesaw mechanism; and the extra scalar field that is helpful for a particular realization of the seesaw mechanism. They have also changed the detailed logic of how the conformal symmetry is allowed to be violated (to arguments centered around the dimensional regularization). I don't quite understand the change yet and I don't see whether this change is quite independent of their other "update", the addition of the new neutrino and scalar fields. However, what I understand is that their model treats the two Higgs doublets "democratically" when it comes to the Higgs potential terms (and there is a quartic term mixing them). However, the Yukawa couplings are different for the two Higgs doublets; the normal one is responsible for the quarks and charged leptons while the new one is responsible for the neutrinos.

At any rate, they compute the one-loop effective potential for the old light Higgs field \(h\) and their new, now arguably \(325\GeV\)-weighing scalar field \(\phi\). These one-loop terms in the potential contain some logarithms and, for dimensional reasons, the arguments of the logarithms have to be dimensionless. This forces them to introduce a new scale \(v\). What's different about this \(v\) relative to "generic" scales that appear in similar quantum field theories is that its powers never enter the effective Lagrangian; it only appears through its logarithm.
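Schematically – this is the generic one-loop Coleman-Weinberg structure, not the authors' exact formula – these terms look like\[

V_{\rm 1-loop} \sim \frac{1}{64\pi^2} \sum_i (\pm 1)\, M_i^4(h,\phi)\, \ln \frac{M_i^2(h,\phi)}{v^2},

\] where \(M_i(h,\phi)\) are the field-dependent masses of the particles running in the loop (plus sign for bosons, minus sign for fermions) and the scale \(v\) indeed enters only through the logarithm, as discussed above.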

The renormalization group flows are modified in the presence of the two scalar doublets. I don't understand the reason "conceptually" but they claim that because of the extra scalar doublet, the Landau pole is delayed to super-high energies above the Planck scale so the theory may be OK up to the scale of quantum gravity.

In the new 2012 paper, the authors also offer some reasons not to worry that the hypothetical new particle at \(325\GeV\) is not showing up in events with missing energy or events with two jets instead of two of the four leptons. If you offered me 1-to-1 odds, I would bet that their model isn't realized in Nature but it is neither impossible nor "insanely implausible", I think.

Tuesday 28 August 2012

Alan Guth and inflation

Alan Guth of MIT is one of the nine well-deserved inaugural winners of the Milner Prize. He has received $2,999,988 because Milner failed to pay the banking fees (Alan Guth was generous enough not to have sued Yuri Milner for that so far).

As far as I know, Alan Guth is the only winner of a prize greater than the Nobel prize who has ever regularly attended a course of mine. ;-)



I have taken many pictures of Alan Guth; this is the fuzziest one, but I think it's funny to see a young Italian physicist showing a finger to Alan Guth in the New York Subway during our trip to a May 2005 conference at Columbia University.

Under the name Alan H. Guth, the SPIRES database offers 73 papers, 51 of which are "citeable". That's fewer than some other famous physicists have but the advantage is that it keeps Alan Guth in the rather elite club of physicists with about 200 citations per average paper.




For quite some time, Guth would work on rather typical problems of particle physics, the science of the very small, but he of course became one of the main symbols of modern cosmology, the science of the very large. Note that the LHC probes distances comparable to \(10^{-20}\) meters while the current radius of the visible universe is about \(46\) billion light years which is \(4.4\times 10^{26}\) meters.

Every distance scale comes with its own set of physical phenomena, visible objects, and effective laws, and it may look very hard to jump over these 46 orders of magnitude from the very short distance scales to the very long distance scales and become a leader of a different scientific discipline. And indeed, it is rather hard. However, Nature recycles many physical ideas at many places so the "ideological" distance between the short and long distance scales is much shorter than the "numerical" distance indicates. Fundamental physicists are the rulers of the vast interval of distance scales (except for some messy phenomena in the middle where folks such as biologists may take over for a while).

And yes, Alan Guth's most famous discovery was a very important piece of "reconciliation" between physics of very short distances and physics of very long distances – a fascinating idea that put their friendship on firmer ground. (We're not talking about quantum gravity here which is what we do if we talk about the "stringy reconciliation"; gravity is treated classically or at most semiclassically in all the discussions about inflation.) Guth was thinking about the Higgs field – a field that became very hot this summer – and he realized it could help to solve some self-evident problems in cosmology.

By finding a speedy bridge between the world of the tiny and the world of the large, Guth has also explained where many large numbers comparing cosmology and particle physics, such as "the number of elementary particles in the visible Universe", come from. These large numbers were naturally produced during an exponentially, explosively productive ancient era in the life of our Universe, an era in which the Universe acted as "the ultimate free lunch", to use Guth's own words. Yes, cosmology has acquired an exemption from the energy conservation law. While people who study inflation usually say that there's no such thing as a free lunch (if they're economists, including Alan G[reenspan]), and they're "mostly" right, their colleague Alan Guth knows better.

Two papers by this author have over 1,000 citations. The pioneering 1980 paper on cosmic inflation has collected over 4,000 citations so far; Guth's 1982 paper with S.Y. \(\pi\) on fluctuations in new (i.e. non-Guth) inflation stands at 1,300+ now. Three more papers above 250 citations are about scalar fields, phase transitions, and false vacuum bubbles. All the papers are on related topics but they're inequivalent.

Old inflation: first look at the paper

Of course, I want to focus on his most famous paper whose content began to be discovered in 1979,
The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems (scanned PDF via KEK, full text)
Rotate the PDF above in the clockwise direction; these commands are available via the right click in the Chrome built-in PDF reader, too.

Some people make a breakthrough but present the idea in a confusing way, so other people have to clean up the discovery. I think that Guth's paper is different. It may be immediately used in the original form. It highlights the awkward features of the old-fashioned Big Bang Theory in a very modern way, pretty much the same way people would talk about them today, 30 years later; it sketches the basic strategy for solving them; and it lists some undesirable predictions of his model of "old inflation" that could perhaps be fixed by future modifications. If you read the paper in a certain way, you might conclude that everyone else who did research on inflation was just solving some homework exercises vaguely or sharply defined by Guth, filling holes in a skeleton constructed by Alan Guth.

Problems of TBBT

If you use the term "The Big Bang Theory" in the less popular sense – i.e. if you are talking about a cosmological theory, not a CBS sitcom – you will find out that despite all the advantages, the theory has some awkward features (unlike Sheldon Cooper who doesn't have any).

Alan Guth correctly identified two main problems of TBBT: the horizon problem and the flatness problem. I am no historian and at the end of 1979, I was affiliated with a kindergarten so I can't tell you how much people were confused about the disadvantages of TBBT in the late 1970s. But it's clear that Alan Guth wasn't confused.

The horizon problem

The horizon problem is the question why the cosmic microwave background radiation discovered by Penzias and Wilson in 1964 seems to have a uniform temperature around 2.7 kelvins, with a relative accuracy of 0.001% or so, even though the places in different directions of the sky where the photons were originally emitted couldn't possibly have communicated with each other because they were too far from each other and the speed of light is the universal cosmic speed limit (for the relative speed of two information-carrying objects moving past each other).



The limitations on speed are relevant because the Penrose causal diagram of the spacetime in TBBT looks like the picture above. Observers and signals have to move along timelike or lightlike trajectories which are, by the definition of the Penrose diagram, lines on the Penrose causal diagram that are "more vertical than horizontal", i.e. at most 45 degrees away from the vertical direction.

But at the moment of the Big Bang – this moment is depicted as the lowest horizontal "plate" on the picture – the Universe had to be created and there was no prehistory that would allow the different places of the ancient Universe to agree about a common temperature. You might object that the Universe "right after \(t=0\)" was smaller so it could have been easier to communicate (shorter distances have to be traversed). But you also had a shorter time between \(t=0\) and the other small value of \(t\), and if you study these things quantitatively, you will realize that the latter point (shortage of time) actually becomes more important than the smallness of the distances (because the distances go like \(a\sim t^k\) for \(k\lt 1\), so they're "more constant" than the time, relatively speaking), so to agree about a common temperature, two places in the Universe would need an ever higher speed of communication which surely exceeds the speed of light \(c\).
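The quantitative version of this argument fits in one line. For \(a\sim t^k\) with \(k\lt 1\), the comoving distance that a light signal can cross between the Big Bang and time \(t\) is\[

\eta(t) = \int_0^t \frac{c\,{\rm d}t'}{a(t')} \propto t^{1-k},

\] which is finite and shrinks to zero as \(t\to 0\): two regions whose comoving separation exceeds \(\eta(t)\) simply couldn't have exchanged any signal by time \(t\).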

In other words, two events at \(t=0\), the horizontal plate at the bottom of the picture, had "no common ancestors" i.e. no other events in the intersection of their past light cones – because there was no "past" before the Big Bang (sorry, Bogdanov brothers and others) – so it's puzzling why the temperature is uniform if the different regions of the Universe were "created by God" independently from others.

Alan Guth proposed a solution: if the Universe has ever been exponentially expanding for a long enough time i.e. by a high enough factor, the Penrose diagram effectively becomes much taller – it looks like we are adding a whole "pre-Big-Bang prehistory" below the bottom plate at the picture above – and suddenly there is enough room to prepare the thermal equilibrium by the exchange of heat. So with this "taller" Penrose diagram, the equal magnitude of temperature in different directions is no longer mysterious: it is a result of a relatively long period of thermalization i.e. exchange of heat that inevitably erases the temperature differences.

To successfully achieve this goal (and especially the "flatness goal" to be discussed later), we need a certain amount of time for the thermalization: linear distances in the universe have to increase about\[

e^{62} \approx 10^{27}

\] times, i.e. a billion of billions of billions of times. Appreciating that \(2.718\) is a more natural base of exponentials than ten (because Nature has \(e\) fingers, not ten fingers like humans or two fingers like the discrete physicists), physicists say that there had to be at least \(62\) \(e\)-foldings. An \(e\)-folding is a period of time during an exponential expansion in which linear distances increase \(e\) times. The required minimum varies but people usually quote 60-65 \(e\)-foldings (there's nothing wrong with thousands of \(e\)-foldings, either, and many models on the market actually predict even higher numbers). I only chose \(62\) so that I could write "a billion of billions of billions". ;-)
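The conversion between the expansion factor and the number of \(e\)-foldings is a one-liner; a quick numerical check:

    import math

    # Number of e-foldings needed to stretch linear distances 1e27 times:
    print(math.log(1e27))    # ~62.2
    # Conversely, 62 e-foldings stretch distances by a factor of:
    print(math.exp(62))      # ~8.4e26, i.e. roughly 1e27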

So the distances \(a\) between two places in the sky as defined by the FRW coordinates, i.e. between two "future galaxies", grew \(10^{27}\) times during inflation; the remaining multiplicative growth was due to the ordinary Big Bang Theory growth (which approximately follows power laws, \(a\sim t^k\)). But because the growth was exponential, the proper time that inflation took was just \(62\) of some basic natural units of time: a natural, small number.

You see that the exponential growth is what allows cosmology to "quickly connect" very different distance scales and time scales. If you can expand the distances \(10^{27}\) times very quickly, it's easy to inflate a subatomic object to astronomical distances within a split second. That's cool. The uniformity of the temperatures suddenly becomes much more natural (even though you could have waved your hands and said that God created different regions of the Universe in similar conditions even if they couldn't have communicated with each other – because He has some universal initial conditions that just hold everywhere).

A reader could protest that we cheated because we "explained" the unnatural features of the Universe by using large numbers that are calculated as exponentials and the exponentials themselves are "unnatural". However, the latter assertion is incorrect. The exponentials are actually totally natural in the inflationary context. It's because the FRW equations, Einstein's equations simplified for the case of a homogeneous and isotropic expanding Universe, imply that the distance \(a\) between two future (or already existing) galaxies obeys\[

\ddot a = \dots + \frac{\Lambda c^2}{3} a.

\] Einstein's equations control the second time derivative of \(a\) – which emerges from the second derivatives of the metric tensor that is hiding in the curvature tensors – and the equation for the second derivative of the distance \(a\) is analogous to an equation in the Newtonian physics, \(ma=F\), for the acceleration of an object. In the FRW case, the force on the right hand side contains a term proportional to the cosmological constant \(\Lambda\) as well as \(a\) itself. And you may verify that the equation \(\ddot a = K a\) has solutions that are exponentially increasing (or decreasing, but the increasing piece ultimately dominates unless you fine-tune the exponentially growing component exactly to zero).

Well, the exponentially increasing/decreasing functions are solutions for \(K\gt 0\) i.e. \(\Lambda\gt 0\), a positive cosmological constant. For \(K\lt 0\), the solutions are sines and cosines because the equations describe a harmonic oscillator. (That's also why a negative cosmological constant \(\Lambda\) would tend to produce a Big Crunch – a sign that the Universe would like to resemble an oscillatory one.) You may see that if you had a spring with a negative (repulsive) spring constant, it would shoot the ball attached to the spring exponentially.
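If you want to see the dichotomy numerically, a few lines suffice. A minimal sketch (arbitrary units, initial conditions \(a=1\), \(\dot a=0\)):

    # Integrate a'' = K*a with a simple semi-implicit Euler step.
    # K > 0 gives exponential growth; K < 0 gives bounded oscillations,
    # the harmonic-oscillator case discussed in the text.
    def evolve(K, dt=0.001, t_max=10.0):
        a, v = 1.0, 0.0
        for _ in range(int(t_max / dt)):
            v += K * a * dt
            a += v * dt
        return a

    print(evolve(+1.0))   # ~ cosh(10) ~ 1.1e4: exponential blow-up
    print(evolve(-1.0))   # ~ cos(10) ~ -0.84: bounded oscillation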

It's because the derivative (and the second derivative) of the exponential function is the exponential function (times a different normalization) in general. I hope you know the joke about functions walking on the street. Suddenly, the derivative appears behind the corner. All functions are scared to hell. Only one of them is proudly marching on the sidewalk. The derivative approaches the function and asks: Why aren't you afraid of me? I am \(e^x\), the function answers and moves the derivative by one unit of distance away from itself (because the exponential of the derivative is the shift operator, because of the formula for the Taylor expansion).

Sorry if I made the joke unfunny by the more advanced Taylor expansion piece. ;-)

Fine. The exponential (the exponentially increasing proper distance between the seeds of galaxies) is a totally natural solution of the basic universal equations – of nothing else than Einstein's equations expressed in a special cosmological context. It's not cheating. It's inevitable physics.

Flatness problem

Concerning the flatness problem, I may recommend you e.g. this question on the Physics Stack Exchange plus my answer.

Einstein's equations say that the spatial slice \(t={\rm const}\) through the Universe is a flat 3D space if the average matter density equals a calculable "critical density", i.e. if their ratio is \(\Omega=1\). However, it may be derived that \(|\Omega-1|\), the (dimensionless) deviation of the density from the value that guarantees flatness, increases with time during the normal portions of TBBT (which are either radiation-dominated or, later, matter-dominated).
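The derivation fits in one line. The Friedmann equation implies\[

\Omega - 1 = \frac{kc^2}{\dot a^2},

\] where \(k\) now denotes the spatial curvature constant (not the exponent above). So \(|\Omega-1| \propto 1/\dot a^2\): it grows like \(t\) during the radiation era (\(a\sim t^{1/2}\)) and like \(t^{2/3}\) during the matter era (\(a\sim t^{2/3}\)), but it decays like \(e^{-2Ht}\) during inflation, when \(\dot a\) itself grows exponentially.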

Observations today show that the \(t=13.7\) billion years slice is a nearly flat three-dimensional space – the curvature radius is more than 1.5 orders of magnitude longer than the radius of the visible Universe (i.e. the curvature radius is longer than hundreds of billions of light years) – so \(|\Omega-1|\leq 0.01\) or so today. But because this \(|\Omega-1|\) was increasing with time, we find out that when the Universe was just minutes or seconds old (or even younger), \(|\Omega-1|\) had to be much tinier, something like \(10^{-{\rm dozens}}\). Such a precisely fine-tuned value of the matter density is unnatural because \(|\Omega-1|\) may a priori be anything of order one and it may depend on the region.

Our Universe today seems rather accurately flat – I mean the 3D spatial slices – and you would like to see an explanation. You would expect that the flatness is an inevitable outcome of the previous evolution. However, TBBT contradicts this explanation. In TBBT, the deviations from flatness increase with time, so when the Universe was very young, the Universe had to be even closer to exact flatness by dozens of orders of magnitude, so it had to be even more unnatural when it was young than it is today! It had to be unbelievably unnaturally flat.

Again, cosmic inflation solves the problem because it reverses the trend. During cosmic inflation, \(|\Omega-1|\) is actually decreasing with time as the Universe keeps on expanding. So a sufficiently long period of inflation is again capable of producing the Universe in an unusually "nearly precisely flat" shape and some of its exponentially great flatness may be wasted in the subsequent power-law, TBBT expansion that makes the flatness less perfect. But the accuracy with which the Universe was flat after inflation was so good that there's a lot of room for wasting.

Inflation also solves other problems. For example, it dilutes exotic topological defects such as the magnetic monopoles. If you watch TV, you must have noticed that Sheldon Cooper's discovery of the magnetic monopoles near the North Pole was an artifact of a fraudulent activity of his colleagues. It seems that the number of magnetic monopoles, cosmic strings, and other topologically nontrivial objects in the Universe around us is much lower than what a generic grand unified theory would be willing to predict. Inflation makes the Universe much larger and the density of the topological defects decreases substantially, pretty much to \(O(1)\) defects per visible Universe. It's not too surprising that none of these one or several defects moving somewhere in the visible Universe has managed to hit Sheldon Cooper's devices yet.
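Quantitatively, the dilution is again exponential: any pre-inflationary number density of defects \(n\) scales like\[

n \propto \frac{1}{a^3} \propto e^{-3N},

\] where \(N\) is the number of \(e\)-foldings, so \(N=62\) suppresses the density by a factor of \(e^{-186}\approx 10^{-81}\), easily enough to leave at most a few defects inside the visible Universe.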

So Alan Guth realized that the exponentially expanding period is a very natural hypothesis about cosmology beyond (i.e. before) the ordinary Big Bang expansion which helps to explain previously unnatural features of the initial conditions required by the ordinary Big Bang expansion. He also realized that the cosmological constant needed for this exponential expansion may come from a scalar field's potential energy density \(V(\phi)\). That's where his particle physics experience turned out to be precious: it's just enough to consider the potential energy for the Higgs field \(V(h)\) and realize that its positive value has the same impact on Einstein's equations as a positive cosmological constant – they're really the same thing, physically speaking, because you may simply move the cosmological constant term \(\Lambda g_{\mu\nu}\) to the right hand side of Einstein's equations and include it as a part of the stress-energy tensor. And he had to rename the Higgs field to an inflaton.
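In equations (using the \((-,+,+,+)\) signature), a scalar field frozen at a point of its potential contributes\[

T_{\mu\nu} = -V(\phi)\, g_{\mu\nu}, \qquad \Lambda_{\rm eff} = \frac{8\pi G}{c^4}\, V(\phi),

\] i.e. an energy density \(\rho = V(\phi)\) with pressure \(p=-V(\phi)\), which is exactly the equation of state of a cosmological constant.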



Guth's original "old inflation" assumes that, during inflation, the inflaton sits at a higher local minimum of its potential in the "configuration space" of its possible values, and it ultimately jumps to a different place (the place we experience today) where the cosmological constant is vastly lower.

Now, the exponential expansion had to be temporary because we know that in the most recent 13.7 billion years, the expansion wasn't exponential but it followed the laws of the Big Bang cosmology. So the state of the Universe had to jump from a place in the configuration space with a large value of \(V(\phi)\) to another place with a tiny value of \(V(\phi)\). In Guth's "old inflation", it would literally be a discontinuous jump. In a year or two, "new inflation" i.e. "slow-roll inflation" got popular and started to dominate the inflationary literature. In the new picture, the inflaton scalar field continuously rolls down the hill from a maximum/plateau (the upper inflationary-era position is no longer a local minimum of the potential in that "new inflation" picture but it isn't necessarily a catastrophe) it occupies during inflation to the minimum we experience today. When it's near the minimum, its kinetic energy is converted to oscillations of other fields, i.e. particles that become seeds of the galaxies.

The most recent 8 years in cosmology and especially in string theory have suggested that "new inflation" may possibly be incompatible with string theory. The very condition of "slow-rollness", the requirement that the inflaton rolls down (very) slowly which is needed for the inflation to last (very) long, might be incompatible with some rather general inequalities that may follow from string theory. It's the main reason that has revived the interest in "old inflation": the transition from inflation to the post-inflationary era could have been more discontinuous than "new inflation" has assumed for decades and physicists may be forced to get back to the roots and solve the problems of "old inflation" differently than by the tools that "new inflation" had offered.
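For reference, the slow-roll conditions that are at stake here are conventionally written as\[

\epsilon \equiv \frac{M_{\rm Pl}^2}{2}\left(\frac{V'}{V}\right)^2 \ll 1, \qquad |\eta| \equiv M_{\rm Pl}^2\left|\frac{V''}{V}\right| \ll 1,

\] where primes denote derivatives with respect to the inflaton; inequalities of the kind alluded to above would, roughly speaking, bound ratios like \(|V'|/V\) from below, which is precisely what would spoil the smallness of \(\epsilon\).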



Averaged fluctuations of the CMB temperature as a function of the typical angular scale: theory agrees with experiments.

These comments rather faithfully reflect the amount of uncertainty about inflation. The observations of the cosmic microwave background made by the WMAP satellite – and, more recently, by the Planck spacecraft – are in excellent, detailed agreement with the theory that needs TBBT as well as the nice, flat initial conditions, as well as some initial fluctuations away from the flatness that are naturally calculable within the inflationary framework.

So the basic pieces of this picture probably have to be right.

However, there are many unsettled technical details – about the mass scale associated with the inflaton (it may be close to the GUT scale but it may be as low as the electroweak scale: there are even models using the newly discovered Higgs field as the driver of inflation, although they need some extra unusual ingredients); about the number of inflaton scalar fields; about their detailed potential; about whether any quantum tunneling occurred when the inflationary era ended; about whether the scalar field should be interpreted in a more geometric way (e.g. the distance between branes, some quantity describing the evolving shape of the hidden dimensions etc.); and other things.

But it is fair to say that exactly the general features that were discussed in Alan Guth's pioneering paper have already been empirically established. The Nobel prize is nevertheless awarded for "much more directly" observed discoveries, so it's great that Yuri Milner has created the new prize in which the "theory-driven near-complete certainty" plays a much larger role than it does in Stockholm.

And that's the memo.

Previous article about the Milner Prize winners: Ashoke Sen

Monday 27 August 2012

Crackpots are patient while sending texts to journals

In November 2011, a group of three physicists/mathematicians wrote the 3,635th paper suggesting that there has to be something wrong with the foundations of modern physics as redesigned by the founding fathers of quantum mechanics. Or at least this is how the paper was interpreted by some journalists.

None of these 3,635 papers has ever offered a successful description of experiments by an alternative theory that isn't equivalent to proper quantum mechanics, and most of these papers contain statements that are manifestly false. Everyone knows that this whole anti-quantum program has been a giant waste of time and a miserable failure, but people keep on writing similar garbage because human stupidity and bigotry know no limits.




If you read the paper by Pusey, Barrett, and Rudolph, it is self-evident in pretty much every sentence that they always assume that the world is a manifestation of a fundamentally classical system of laws. Even though physicists have known for more than 85 years that the laws of physics in this Universe fundamentally differ from the very framework of classical physics, these folks view the possibility of a non-classical essence of the world as a taboo. It can't even be thought about. A heresy. These folks are typical cultists, religious nuts.

In practice, they only think about several classes of hypothetical classical descriptions; non-classical candidates aren't allowed. In essence, these crackpots assume that the state of the world is fundamentally described either by an "ontic" state (from a Greek word related to the existence of things) – a pompous would-be philosophical term for a point in a phase space – or an "epistemic" state – a fancy word for a probability distribution on a phase space.

These nuts are sometimes capable of finding arguments that one of the scenarios is incompatible with reality – everyone can do that easily because both scenarios are obviously incompatible with reality – but they incorrectly assume that any evidence against one of these two classical models shows that the other model is right.

But it doesn't because both of them are wrong. The world isn't described by any "ontic" state; and it isn't exactly described by an "epistemic" probability distribution on a phase space, either. It is described by a theory – quantum mechanics – that cleverly generalizes the second possibility.

The statement that there is no "ontic" state underlying the world is nothing else than the statement that the world doesn't follow the laws of classical physics. Nature can't objectively be in any "right point" of a "right phase space" now because phase spaces are fuzzy and one can't determine all the coordinates simultaneously; that's called the uncertainty principle.

Concerning the second wrong possibility, conventional probability distributions on the phase spaces look more general but they are still classical. In fact, if you say that an object may be described by a probability distribution on an ordinary classical phase space and nothing else, then you are implying that the probabilistic nature is purely due to one's personal ignorance about the "actual" point of the phase space that Nature chose. All observations will be compatible with the assumption that at every moment, Nature was objectively occupying a particular point of the phase space and all predictions fundamentally boil down to this assumption. The usage of probability distributions in classical statistical physics is purely about our ignorance (or lack of interest) about the detailed microstate but in principle, this state could be isolated.

However, quantum mechanics says that it ain't the case. Even in principle, there can't be any objective state of the system. The wave function is closely related to probability distributions – probability distributions may be "rather easily" calculated from bilinear expressions in the wave function – but the wave function isn't exactly a classical probability distribution. It has extra phases that are very important for almost all predictions yet totally unknown in classical physics; and due to complementarity, its dependence on the position (or another observable) automatically encodes its dependence on the momentum (the observable complementary to the first one). Of course, the phases of \(\psi(x)\) are critical for any reconstruction of \(\tilde\psi(p)\).
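A minimal numerical sketch of this point (my own illustration in Python/NumPy, not from the original text): multiplying \(\psi(x)\) by a position-dependent phase \(e^{ik_0x}\) leaves \(|\psi(x)|^2\) untouched but shifts the entire momentum distribution \(|\tilde\psi(p)|^2\) by \(k_0\), so the phases are physical.

```python
import numpy as np

# Discretized position grid and a normalized Gaussian wave packet
N = 2048
x = np.linspace(-40.0, 40.0, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Same |psi(x)|^2, different phases
k0 = 3.0
psi_shifted = psi * np.exp(1j * k0 * x)

# Momentum grid matching np.fft.fft's output ordering
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def mean_momentum(p):
    """Mean momentum computed from |psi-tilde(k)|^2."""
    prob = np.abs(np.fft.fft(p))**2
    return np.sum(k * prob) / np.sum(prob)

print(mean_momentum(psi))          # ~ 0.0
print(mean_momentum(psi_shifted))  # ~ 3.0: the phase moved <p> by k0
```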

So Nature isn't a classical model with a simple phase space parameterized by the observables we know and routinely measure; Nature isn't a classical model with a more complicated phase space that also contains additional "hidden variables"; Nature isn't a classical model whose phase space emulates the numbers in the wave function; and Nature isn't a classical model based on a probability distribution on a classical phase space that would be in one-to-one correspondence with the complex-number-based quantum entities.

To summarize, Nature isn't any of these things. Nature isn't classical. Nature is a wonderfully new and clever beast, a quantum mechanical system. Its quantum rules can't be squeezed into any of these classical straitjackets indefinitely advocated by the infinitely stubborn anti-quantum cranks. That's why the evidence against either of these classical pictures isn't evidence in favor of any of the classical alternatives.

In their misguided "competition" between various wrong classical models, the winner was a "truly bad guy" which was maximally different from quantum mechanics, and the people therefore insanely concluded (in the very title!) that the quantum state cannot be interpreted statistically. Just to be fair, the same Barrett and Rudolph together with two new co-authors also wrote a paper called The quantum state can be interpreted statistically. What a diversity. The first, "cannot" paper was later renamed to a neutral title, On the reality of the quantum state. Depending on the paper and the version you read, the same authors can answer the key question positively, negatively, or neutrally.

The first paper, in which the authors assumed wrong foundations of physics, brought nothing new to the subject; they just described another simple toy model dealing with quantum information and showed, much like dozens of previous papers did, that the actual behavior of Nature is in conflict with some of the classical ideas (while they self-evidently fail to see that it is in conflict with any type of classical picture of the world, so they may publish 500 additional "revolutionary papers" with no meaningful content in the future).

Back to the sociological issues

As soon as the preprint was sent to the arXiv – which is something that pretty much every person with a university affiliation may do – Nature allowed a journalist named Eugenie Samuel Reich to announce a revolution in physics, Quantum theorem shakes foundations. The correct title should have been "another crackpot paper was posted to the arXiv" and this title should have been written on a high school toilet, not in a well-known journal claiming to cover science.

At any rate, there had to be an editor who was responsible for this unjustified – and, as the following months confirmed, unjustifiable – hype. The paper was sent to the very same journal Nature at about the same time and, when someone important enough wants it this way, peer review is just a formality. In a new Cosmic Variance article, Terry Rudolph, one of the three crackpots, pompously described how their paper was almost published in Nature.

Well, the original paper received mixed reviews. In cases such as this one, Nature apparently doesn't need papers with perfect ratings, so the editor simply ignored the negative review. What else could he or she have done once Nature, supervised by the same editor, had already approved a popular article on a new revolution in physics? However, some extra problems and opposition appeared, so the crackpot paper was finally published only in a newer and dramatically less influential sister journal, Nature Physics. What they should have done is to reject the paper and publish a correction of Eugenie Samuel Reich's outrageously idiotic "commercial", but that would probably be expecting too much integrity from the journal.

Rudolph's detailed stories about the journey of their crackpot paper help to reinforce the theory about the crackpots' obsession with the conventional journals. They're not Wittens, so they haven't published 300+ papers. So every paper that manages to penetrate the quality filters is a source of immense pride for these folks. This is true both for the institutionally unaffiliated crackpots and the institutionally affiliated ones. Lee Smolin is a good textbook example of the latter category. If his paper is rejected, he doesn't care. He just keeps on sending it to other journals. By sheer chance, it is almost inevitable that it is ultimately accepted somewhere.

Of course, the record holders in the "number of submissions per paper" discipline work outside the system. This behavior is unusually hypocritical, especially for those of them who otherwise criticize the "establishment": if they see any nonzero chance to be admitted as members, they will do everything you can think of to join! Still, if they want to use the number of papers published in serious enough journals as an argument to win a scientific dispute against folks like Edward Witten, they still need to publish at least about 350 papers in similar journals. ;-)

Terry Rudolph may be less obsessed with journal publications than his most obsessed crackpot colleagues but he's still obsessed with this sociological criterion. What he doesn't want to show is that even according to the sociological criteria, the paper sucks. Nine months after these folks "shook the foundations of physics", to use Eugenie Samuel Reich's modest words, the paper has eleven citations. (Maldacena's AdS/CFT paper had collected 300 citations, usually from very serious and nontrivial papers, after 9 months.)

The only paper in this list of 11 papers that has been cited itself is a paper by Hardy – which is also deeply confused, by the way. In the list of the other 10 dull, confused, and uncited papers, the most eye-catching one is the last one, an essay written by Roger Schlafly, a hardcore creationist crackpot and a Shmoit fan: Nature has no faithful mathematical representation. Although the author is a complete nutcase, this paper is arguably the most sensible (or least insane) one in the list of the 11 papers (although it brings nothing new, of course).

Hype may occasionally be useful but do we really have to regularly see hype about similar junk? In particular, quantum mechanics and its rules have been known since the 1920s. Many people received their Nobel prizes for them and Max Born got 1/2 of the 1954 physics Nobel prize "for his fundamental research in quantum mechanics, especially for his statistical interpretation of the wavefunction" – for a key feature of the quantum world that was new for physicists who had worked within classical physics for a few centuries. So don't you think that if someone writes papers with titles such as "the quantum state cannot be interpreted statistically", directly contradicting the Nobel-prize-winning insights by Max Born and others, he or she or they should have some evidence? And some evidence that goes beyond their ability to offer a 765th proof that Nature isn't a classical system following certain old-fashioned and obviously wrong rules?

There is no known evidence against quantum mechanics and there is no known alternative framework that could match the same empirical data (and the double slit experiment is really pretty much enough as a test) so why do some people speak about a permanent revolution in a layer of science that hasn't changed since the 1920s?

These people are living in a completely different world, a world of pompous crackpots who are much more interested in their visibility than in the truth about Nature.

And that's the memo.

Sunday 26 August 2012

Neil Armstrong: 1930-2012



Neil Armstrong, an unusually modest professional who considered the moonwalk to be a job just like any other job, took this New World Symphony by Antonín Dvořák on his trip to the Earth's only large natural satellite.




Exactly 40 years later, in 2009, I combined the music (the final, fourth movement "Allegro con fuoco" of the symphony; listen to the full piece) with the moving pictures above. Apologies that it wasn't optimized for sad moments like this one. While I don't consider manned space flights a top priority, I do share the feeling that it's painful that people have left the Moon – mankind could soon find out that there is no living person who has walked on another celestial body – and I do think that someone should send people to Mars to make some progress in this obvious benchmark of human power...

RIP Neil Armstrong.

(Because of the name, I can't avoid mentioning Lance Armstrong. The decision to strip him of all the titles and ban him for life seems cruel to me. I am no expert in these stories but it's my perception that his guilt has never really been proven – at least not the use of any unusual type of substance – and the presumption of innocence was violated by the anti-doping authorities. So he remains the world's #1 cyclist in my eyes.)

Friday 24 August 2012

Simple proof QM implies many worlds don't exist

A vast majority of the people who write popular books, blogs, and comments at discussion forums about the foundations of quantum mechanics are peers of the stupid monkeys.



A week ago, Scott Aaronson wrote that he is a champion of the "Many Worlds Interpretation" (MWI) even though MWI is slightly more frail than heliocentrism. That's what I call an understatement on steroids.

The term "MWI" is notoriously ill-defined, it may mean everything or nothing or something in between and there is no actual theory of physics that would deserve this name and that would work. But let's assume that the proponents of MWI mean that there exist many worlds and different mutually exclusive properties of a physical system are realized simultaneously.

In the following 40 seconds, let's see that it ain't the case.




Let's take an electron and measure its spin component \(j_z\) via the Stern-Gerlach apparatus, i.e. via an inhomogeneous magnetic field.



The initial state of the electron is prepared to be "up" with respect to a particular tilted axis – every state of the spin in 3 dimensions is "up" with respect to a semi-axis – so that we have\[

\ket\psi = 0.6 \ket{\rm up} + 0.8 \ket{\rm down}.

\] So the electron will have a 36% chance to have the spin "up" and a 64% chance to have the spin "down". Note that it's not just the absolute values of the amplitudes that matter; the relative phase matters, too. If we changed the relative phase of the two terms by a factor of \(\exp(i\alpha)\), the axis with respect to which the electron is polarized "up" would rotate around the \(z\)-axis by the angle \(\alpha\). Such a rotation is inconsequential for our measurement of \(j_z\) but it would matter for the measurement of all other components of the spin.
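A quick numerical sketch of both claims (my illustration, with \(\hbar=1\)): the \(j_z\) probabilities only see the moduli \(0.36\) and \(0.64\), while the relative phase \(\alpha\) rotates the polarization axis and shows up in \(\langle j_x\rangle\).

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
jx = np.array([[0, 1], [1, 0]]) / 2   # spin-1/2 operators, hbar = 1
jz = np.array([[1, 0], [0, -1]]) / 2

for alpha in (0.0, np.pi / 3):
    psi = 0.6 * up + 0.8 * np.exp(1j * alpha) * down
    p_up = abs(np.vdot(up, psi))**2          # 0.36, independent of alpha
    mean_jx = np.vdot(psi, jx @ psi).real    # 0.48*cos(alpha): the phase matters
    print(f"alpha={alpha:.2f}: P(up)={p_up:.2f}, <j_x>={mean_jx:.3f}")
```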

Now, let's ask the key MWI question: will there be an electron with spin "up" as well as an electron with spin "down"?

The MWI proponents say "Yes". They imagine that different possibilities "really occur" in different universes, and so on. So this is the main question that decides about the validity of the MWI. Stupid monkeys are obsessed by questions whether MWI and other things are "not even wrong", "politically correct", "obeying Occam's razor", "pretty", and all such irrational adjectives, but no one seems to care about the question whether it is scientifically false or true.

Quantum mechanics offers a universal rule to answer all Yes/No questions that have any physical meaning, that are in principle observable. For the given question, we identify the projection operator \(P\), i.e. a Hermitian operator \(P=P^\dagger\) obeying \(P^2=P\) (which is why its eigenvalues have to obey \(p^2=p\) as well and must belong to the set \(\{0,1\}\), i.e. {No, Yes}). The expectation value\[

{\rm Prob} = \bra \psi P \ket \psi

\] is interpreted as the probability that the answer is Yes. Quantum mechanics doesn't allow us to predict anything other than probabilities. So there's always some uncertainty about the answer to the question. The only exceptions are projection operators whose expectation values are equal to \(0\) or \(1\): these values correspond to "certainly No" or "certainly Yes" and there's no uncertainty left.

We will see that the "key question of MWI" is of this sort. The projection operator for a question "A and B" is constructed as\[

P = P_A \cdot P_B.

\] When it comes to operators, "and" is multiplication. That's why logical AND, i.e. conjunction, is also known as "binary multiplication". And that's also why the probability that two independent questions both have the answer "Yes" is equal to the product of the individual probabilities.

Fine, what are \(P_A\) and \(P_B\)? They are projection operators on the subspaces for which the answers to questions A and B are "Yes". In particular, we have\[

P_A = \ket{\rm up}\bra{\rm up}, \quad P_B=\ket{\rm down}\bra{\rm down}.

\] They're projection operators on the "up" and "down" states of the electron, respectively. There are just no other states in the Hilbert space for which the statement "there is an isolated electron with the spin up" or similarly "...down" would be valid. Now,\[

\braket{\rm up}{\rm down} = 0

\] and therefore\[

P = P_A P_B = \ket{\rm up}\bra{\rm up}\cdot \ket{\rm down}\bra{\rm down} = 0.

\] Therefore, the probability that there will be both an electron "up" and an electron "down" is\[

\bra\psi P \ket \psi = \bra \psi 0 \ket\psi = 0 \braket\psi\psi = 0.

\] I've written the derivation really, really slowly so that at least 10% of the stupid monkeys have a chance to follow it. At any rate, we have proven that the probability that the electron exists in both mutually exclusive states simultaneously is zero. It can't happen. The derivation is identical for any other mutually exclusive alternative properties of any physical system.
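The whole derivation fits into a few lines of NumPy (a sketch of mine, just re-verifying the algebra above numerically):

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
psi = 0.6 * up + 0.8 * down

P_A = np.outer(up, up.conj())      # |up><up|
P_B = np.outer(down, down.conj())  # |down><down|
P = P_A @ P_B                      # the projector for "up AND down"

print(P)                           # the zero matrix
print(np.vdot(psi, P @ psi).real)  # 0.0: the probability of "both" vanishes
```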

Note that the operators \(P_A,P_B\) commute with one another, i.e. \(P_A P_B=P_B P_A=0\), which means that both questions may have an answer at the same moment (the uncertainty principle adds no extra hassle). That allows us to avoid some discussions about the ordering of the two projectors.

The simple conclusion is that there aren't many worlds. QED. Get used to it, monkeys. ;-)

Let me now spend some time discussing how indefensible various "loopholes" would be and why there are many other ways to see that the answer to the question "Are there many worlds?" had to be "No". And I want to mention several likely fundamental and rudimentary errors that prevent MWI advocates from deriving the right answer to this simple question and from seeing that this is truly kindergarten stuff and not something that they should be confused by for days, weeks, months, years, decades, or centuries.

First, let me discuss the interpretation of the "plus" sign.

As I already suggested, it's important to distinguish addition and multiplication. (If you don't know what multiplication is, watch 0:40-0:45 Miss USA on maths.) The key fact is that the wave function composed of several mutually exclusive pieces such as\[

\ket\psi = 0.6 \ket{\rm up} + 0.8 \ket{\rm down}

\] has a plus sign that roughly means "OR", not "AND" as many people apparently think. When we care about the \(j_z\) component of the spin, the formula above says that the state \(\ket\psi\) allows the electron to be either "up" OR "down". It doesn't say that there is both a spin "up" AND a spin "down".

If we need to say "AND" in quantum mechanics, either "one proposition/question AND another proposition/question" (as discussed with the \(P=P_A P_B\) relationship above) or "one object added on top of another object", we need multiplication, not addition. For the case of the two propositions, we have already discussed an example, the \(P=P_A P_B\) relationship above. If we discussed physical systems composed of several pieces, e.g. a group of 2 apples and a group of 3 apples, we would need another kind of a product, the tensor product,\[

\ket{\text{5 apples}} = \ket{\text{2 apples here}} \otimes \ket{\text{3 apples there}}.

\] The matrix elements extracted from similar "tensor products" are products of the matrix elements for the individual subsystems and the same thing therefore holds for the probabilities, too.
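A tiny numerical illustration (mine) of the multiplicative nature of "AND" for composite systems – two independent spins instead of apples, with the amplitudes from the example above:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
psi = 0.6 * up + 0.8 * down

joint = np.kron(psi, psi)        # |psi> (x) |psi> for two independent spins
both_up = np.kron(up, up)        # the basis vector |up, up>

p_both = abs(np.vdot(both_up, joint))**2
print(p_both, 0.36 * 0.36)       # 0.1296 == 0.1296: probabilities multiply
```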

Some people may be thinking that it almost looks like I am suggesting that the MWI advocates are complete idiots with the IQ of a retarded third-grader because they can't distinguish addition from multiplication. The reason why it looks so is that this is exactly what I am trying to say. In fact, it's pretty obvious that my attempts to say such a thing are successful and I am actually saying this thing. ;-)

Why is there so much confusion about the meaning of addition and multiplication here?

Because people with common sense – as it evolved for millions of years – and no genuine knowledge of the pillars of modern physics (which includes the MWI advocates) always think in terms of objects, e.g. apples. So when you place a group of two apples next to a group of three apples, you're adding apples. Similar addition more or less applies to lengths of sticks, momenta and other conserved quantities, and even quantities such as voltages, currents, charges, and many others.

But this "combining objects that exist simultaneously is addition" is fundamentally and completely wrong for wave functions in quantum mechanics. In quantum mechanics, addition of wave functions or density matrices roughly corresponds to "OR", not "AND", and "AND" must be expressed by multiplication. How can we understand the origin of this flagrant difference between the classical thinking and quantum mechanics?

The primary reason is that quantum mechanics just isn't describing the objects themselves. It describes the propositions we can make about objects. As Niels Bohr used to say, physics is not a tool to describe how reality is; physics is a tool to say correct things about what we can see. The basic building blocks such as the wave functions and projection operators don't describe and count objects; they encode propositions, knowledge, information.

For propositions and their probabilities (expectation values of the projection operators), addition is simply not "AND", addition is "OR". The right mathematical expression for "AND" is another operation, namely multiplication rather than addition.

An MWI advocate could start to spread fog: maybe it's debatable which operation corresponds to which word; maybe the difference between "AND" and "OR" isn't that important anyway; maybe it would take centuries of deep philosophical discussion to settle which way it goes. Well, all these statements are pure rubbish. There isn't any ambiguity, confusion, or room for modifications. Addition and multiplication are completely different operations, so you had better not confuse them. The theory that has been tested is the theory that says the same thing about the interpretation of addition and multiplication as I did. Be sure that if you modify its rules, the rules of quantum mechanics, by randomly replacing addition by multiplication and vice versa at various places, you will get a completely, qualitatively different theory that will yield a totally different description of reality and will disagree with almost all observations, including some extremely elementary ones.

There just isn't any room for confusions and debates. Just like a 7-year-old schoolkid who invents arrogant excuses why she cannot learn the difference between addition and multiplication (note that I am politically correct so I sometimes include "she" in similar sentences, especially if it increases the degree of realism), the MWI proponents should be given a failing grade and should be spanked.

Be sure that any "technical" modification of my proof that there aren't many worlds will damage the theory so that it will become totally incompatible with the experimental tests. For example, if you suggested that the projection operator for "A and B" should be \(P_A+P_B\) rather than \(P_A P_B\), you will easily find out that the same rule used for any experimentally testable situation will lead to wrong predictions. In fact, pure thinking is enough to see that "AND" must be expressed by the product of the projection operators and not the sum.

Using charge conservation to prove there aren't many worlds

The fact that one electron can't suddenly split into two electrons so that it would be both "here" and "there" may also be derived from charge conservation, angular momentum conservation, mass conservation, or other conservation laws. In quantum mechanics, such laws still hold.

If the initial state \(\ket\psi\) is an eigenstate of the electric charge operator \(Q\),\[

Q\ket\psi = q\ket\psi,

\] then, because \(QH=HQ\) – i.e. the charge is conserved, i.e. the symmetry generated by it is a symmetry of the Hamiltonian, i.e. of the laws of physics – the final state will obey the same relationship with the same value of \(q\). But if there were an electron in both places, the total electric charge would be doubled and different from the original one. That would conflict with the conservation law.
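A toy NumPy check of this logic (my own construction; the particular \(4\times 4\) matrices are arbitrary examples of a charge operator and a Hamiltonian commuting with it):

```python
import numpy as np

# Two charge sectors: q = -1 (first two states), q = +1 (last two states)
Q = np.diag([-1.0, -1.0, 1.0, 1.0]).astype(complex)

# A Hermitian Hamiltonian that is block-diagonal, so QH = HQ
H = np.zeros((4, 4), dtype=complex)
H[:2, :2] = [[0.3, 0.1], [0.1, -0.2]]
H[2:, 2:] = [[0.5, 0.2j], [-0.2j, 0.1]]
assert np.allclose(Q @ H, H @ Q)

# Evolve a q = +1 eigenstate with U = exp(-iHt)
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * 2.7)) @ evecs.conj().T
psi = np.array([0, 0, 0.6, 0.8], dtype=complex)
phi = U @ psi

print(np.allclose(Q @ phi, phi))   # True: the charge is still q = +1
```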

Inflating the Hilbert space along the way

Some people could say that my derivations are missing the point that there is an "Everett multiverse". I should have increased the size of the Hilbert space before the measurement etc.

There are many wrong things about such a potential objection.

First, the constancy of the dimension of the Hilbert space is a mathematical necessity. Especially because some MWI proponents including Brian Greene say that they want to be led by the most natural interpretation of the equations of quantum mechanics, it's totally indefensible to actually change the dimension of the Hilbert space along the way. It's surely not what quantum mechanics tells us to do. In fact, one may easily show that such a proliferation of the degrees of freedom couldn't lead to an internally consistent theory.

It may be explained in many ways, e.g. by the quantum xerox no-go theorem. There can't be any evolution of a state in \({\mathcal H}\) to a state in a larger Hilbert space such as \({\mathcal H}\otimes {\mathcal H}\) because the evolution of the state vector in quantum mechanics is linear while the map \[

\ket\psi\to \ket\psi\otimes \ket\psi

\] is not linear; it is quadratic. If \(\ket x\) and \(\ket y\) were evolving to \(\ket x\otimes\ket x\) and \(\ket y\otimes\ket y\), respectively, then linearity would dictate that \(\ket x+\ket y\) evolves to \(\ket x\otimes\ket x+\ket y\otimes\ket y\), while the universal copying rule would say that it should evolve to \((\ket x+\ket y)\otimes(\ket x+\ket y)\), which contains the extra mixed terms \(\ket x\otimes\ket y+\ket y\otimes\ket x\). These are different ket vectors in the larger Hilbert space. At any rate, it's a contradiction: in a quantum world, there can't be any gadget that creates two exact copies of an arbitrary initial state.
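The clash can be seen numerically in a few lines (a sketch of mine): the state that linearity forces differs from the state that cloning demands precisely by the mixed terms.

```python
import numpy as np

x = np.array([1, 0], dtype=complex)
y = np.array([0, 1], dtype=complex)
s = (x + y) / np.sqrt(2)                 # the superposition to be "cloned"

forced_by_linearity = (np.kron(x, x) + np.kron(y, y)) / np.sqrt(2)
demanded_by_cloning = np.kron(s, s)      # contains the mixed |xy>, |yx> terms

print(np.allclose(forced_by_linearity, demanded_by_cloning))  # False
```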

Another problem with the objection is that I actually haven't made any assumption about the non-existence of the "Everett multiverse". For example, in the fast "charge conservation" proof, \(Q\) could have meant the total electric charge in "all branches" of the world you could ever hypothesize. Clearly, if the number of worlds is being multiplied, the charge won't be conserved. That will be a problem because the symmetry generated by \(Q\) won't be a symmetry of the laws that control the "Everett multiverse" anymore. It won't be exact at the fundamental level, you won't be able to use it to constrain the laws of physics, and so on. This "demise" will be the fate of all the symmetries in physics (translations, rotations, Lorentz boosts, parity, etc.) because all symmetries are related to conservation laws.

One more problem with the "splitting of the Universes along the way" is that there can't possibly exist any justifiable rule about "when this splitting takes place". There aren't any sharp qualitative boundaries between phenomena in Nature. It's clear that there can't be any splitting during a sensitive interference experiment – because such an "elephant in a china shop" converting the fuzzy quantum information into classical information would surely destroy the interference pattern.

The problem is that in principle, we may say the same thing about 2 particles, 3 particles, 100 particles, or \(10^{26}\) particles. In principle, the interference pattern involving an arbitrarily large system may be measured so the Universe is just not allowed to "split" into possibilities where different classical outcomes are realized because such a splitting would make the "reinterference" permanently impossible while it is arguably always possible in principle.

In practice, there's a lot of irreversibility, "decoherence", but this process always depends on our inability to manipulate the elementary building blocks of information too finely. Decoherence is an emergent phenomenon and it isn't sharp, either. There is no point during the decoherence process when you could say "now it's the right time for the universe to split into many worlds". Decoherence is just a continuous process in which the off-diagonal elements of the density matrix gradually decrease. They decrease faster and faster but they're never "quite" zero.
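A cartoon of this gradual suppression (my sketch; the exponentially decaying overlap of the environment states is an assumed, typical model rather than a derived one):

```python
import numpy as np

psi = np.array([0.6, 0.8], dtype=complex)
rho = np.outer(psi, psi.conj())          # pure-state density matrix

for n in (1, 10, 100):
    overlap = np.exp(-0.5 * n)           # assumed <E_up|E_down> after n "kicks"
    rho_n = rho * np.array([[1, overlap], [overlap, 1]])
    print(n, abs(rho_n[0, 1]))           # off-diagonal decays, never exactly 0
```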

Shannon told us that Brian Greene thinks that he and your humble correspondent have a "little disagreement" about a physics question. ;-)

The little disagreement is about the existence of a paradigm shift in 20th century science that invalidated the previous framework of classical physics. I am sure it happened in the 1920s; Brian Greene thinks that it hasn't happened, so it is still possible to think about Nature in the "realist" way. Of course, I could also be saying it is a little disagreement; I have also been taught how to be diplomatic, polite, hypocritical, and dishonest. But I just don't think it's right to behave in this way. The disagreement is clearly about a major question, about the very existence of modern physics as something that is outside the box of classical physics. Brian Greene is really denying the existence of quantum mechanics; instead, he is suggesting that what we need are new theories (e.g. nonlocal ones or multiverse ones) within classical physics (although he and others prefer more obscure ways to describe the very same thing, ways that make the naked Emperor's new clothes look more fashionable and decent).

The MWI chapter of The Hidden Reality by Brian Greene (whose Czech translation by me will be in the bookstores on Monday) really drove me up the wall many times because most of it is literally upside down. One repeatedly "learns" that if we want to describe the whole world in a uniform fashion, we must adopt the MWI ideology; that Bohr et al. were incapable of doing so, so they preferred to live in their messy, marginally inconsistent system of ideas and used behind-the-scenes tricks to fight against the true messengers of the truth such as Hugh Everett III.

This uses the right words except that the content is exactly the opposite of the truth.

Bohr et al. always used legitimate, official, and transparent channels to discuss similar physics questions – e.g. in the Bohr-Einstein debates – and it is the MWI advocates who are using non-standard channels such as popular books to spread misconceptions. Equally importantly, the "universal validity of the laws for small and large objects" is an important consideration, indeed. But it unambiguously says that MWI is wrong and QM as understood by Bohr et al. and the followers – modern physicists – is the only plausible right answer.

I have already mentioned why it is so. There just can't be any splitting of the worlds when one quantum particle is coherently and peacefully propagating through an experimental apparatus. The same comment applies to 2 or 3 particles so if we're using the laws of physics coherently for small as well as large systems, there can't ever be any "splitting of the Universes".



An impressive song about the Higgs, a new genre of music.

There is one more aspect of the unity that could be violated by the MWI advocates to defend the indefensible. They could say that the question "is there an electron here as well as an electron there", the question whose probability we calculated to be zero, shouldn't be answered by the rules of quantum mechanics i.e. by identifying the right projection operator and by computing its expectation value (interpreted as the probability of "Yes"). They could say that this is a question "above the system" that should be answered by some philosophical dogmas.

But that's not how physics works or should work. Quantum mechanics has a way to answer all physically meaningful, i.e. in principle observable, questions and it is the same way for all the questions. In fact, there is nothing unusual about asking whether there are electrons at two places. This is the kind of question that all of physics is composed of. If you were free (or even eager) to abandon your standardized theory and methodology for answering such questions, and if you switched to some metaphysical dogmas just because this question about the many worlds is "ideologically sensitive", it would prove that the theory you may still be using for other questions isn't something you take seriously, isn't something you trust to answer the really important questions in physics. It would surely show that you have double standards and that the technical theory you're using isn't universal and uniformly applicable because you often replace it by metaphysical dogmas.

Your attitude would be completely analogous to the attitude of a fundamentalist Christian physicist who just chooses to believe that Jesus Christ could walk on the sea because the laws of gravity and hydrodynamics didn't have to apply, and that the non-nuclear conservation of carbon atoms could have been invalidated when he was converting water into wine. And I am not even mentioning many of Jesus' other hypothetical crimes against the laws of physics that such a physicist could be eager to overlook for political reasons. ;-)

The MWI advocates prefer metaphysical dogmas and their naive classical intuition over the standardized quantum mechanical "shut up and calculate" approach to answering such questions about the electron at two places (or pretty much any other question in physics) because they haven't started to think in the quantum way yet. To think in the quantum way is to decide about the validity of propositions (or the probabilities that they are valid) and the procedure is always the same. One constructs the projection operator related to the proposition and calculates its expectation value in the quantum state. It's the probability and if the result is \(0\) or \(1\), we may be certain that the answer is "No" or "Yes", respectively.

(The detailed arguments or calculations may proceed differently and avoid concepts such as "projection operators" but they must still agree with the general rules of quantum mechanics.)

When we follow this totally universal quantum procedure – valid for questions about microscopic systems as well as macroscopic systems – carefully and rigorously, we will find out that quantum mechanics as it stands, in the same Copenhagen form as it has been known since the 1920s, answers all questions, including those that "look philosophically tainted", correctly i.e. in agreement with the experiments. Sidney Coleman gave many examples in his lecture Quantum Mechanics In Your Face.

For example, it's often vaguely suggested by the MWI champions and other "Copenhagen deniers" that the experimenter could feel "both outcomes at the same moment". However, by the correct quantum procedure whose essence is absolutely identical to my discussion of the two positions of the electron at the beginning, we may actually find the answer to the question "whether the experimenter feels both outcomes at the same moment". We will convert the proposition to a projection operator, it has the form \(P=P_AP_B\) again, and because its expectation value is zero for totally analogous reasons as those at the top, it follows that according to quantum mechanics, the experimenter doesn't perceive both outcomes at the same moment. This is a completely physical question, not a metaphysical one, and quantum mechanics allows one to calculate the answer. It's just not the answer that the anti-Copenhagen bigots would like to see.

Quantum mechanics doesn't predict "unambiguously" which of the outcomes will be perceived by the experimenter (spin is "up" or "down"?) but this uncertainty is something totally different than saying that he will perceive two outcomes. The number of outcomes he will perceive may be calculated unambiguously by the standard rules of quantum mechanics and the number is one. There is no room for "two worlds" or "two perceptions at the same moment". Which outcome will be felt has probabilities strictly between 0 and 100 percent so the answer isn't unequivocal.

When the MWI-like folks are discussing these matters, they are constantly making lots of other totally rudimentary errors – and perhaps "deliberate errors" – aside from the confusion of addition and multiplication I mentioned above. A frequent one is to totally forget or deny that quantum mechanics predicts and remembers correlations (in their most general form known as entanglement) between any pairs, triplets, or larger groups of degrees of freedom and properties that may co-exist in the real world.

For example, Coleman mentioned the cloud chamber example by Nevill Mott. A particle leaves the source in the cloud chamber. It is in the \(s\)-wave: its wave function is spherically symmetric, so it has the same chance to move in each direction. So why does it create a straight line of bubbles in one direction rather than a spherically symmetric array of bubbles?

Again, this may be interpreted as some super-deep metaphysical question that goes well beyond quantum mechanics, and the Copenhagen interpretation may be claimed to be incapable of answering such questions. Except that there is nothing hard or metaphysical about this question at all. It is completely physical, quantum mechanics allows us to answer it using a very simple calculation, and the answer is right. There will be a straight line of bubbles because one may prove that, due to the demonstrable entanglement between properties of the supersaturated water or alcohol at various points which the propagation of the charged particle creates, the directions of any two newly created bubbles as seen from the source are always essentially the same.

(One may prove that the charged particle only creates bubbles in a small region around its location; and one may prove that the position of the charged particle goes like \(\vec x = \vec p \cdot t / m\) where the momentum \(\vec p\) is essentially conserved. That's enough to see that the bubbles will be aligned.)

So again, while quantum mechanics gives ambiguous predictions about the direction in which the "bubbly path" will be seen – all directions are equally likely – it does unambiguously predict that the bubbles will have a linear shape: they will only emerge along a straight semi-infinite track. There is absolutely no inconsistency between these two assertions. Any wrong idea that QM has to predict that the distribution of the bubbles is spherically symmetric boils down to a trivial error: the omission of the fact that the existence or absence of bubbles at one point is correlated with the existence or absence of bubbles at other points. In fact, the correlation is so tight that for each semi-infinite line, there are either bubbles everywhere along the line or there are no bubbles on it. And there is only one semi-infinite line.
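A Monte Carlo cartoon of Mott's point (my illustration; the bubble radii are arbitrary): the distribution over tracks is isotropic, yet every single outcome is a straight line because the bubble directions are perfectly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
for trial in range(3):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)              # one isotropically random direction
    bubbles = np.outer(np.arange(1.0, 6.0), v)   # all bubbles along that ray
    units = bubbles / np.linalg.norm(bubbles, axis=1, keepdims=True)
    print(trial, np.allclose(units, units[0]))   # True: a straight track
```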

As I said many times, the people who have trouble with the proper, i.e. Copenhagen or neo-Copenhagen, laws of quantum mechanics are always "eager" to simplify the quantum rules of the game prematurely and convert the situation to some "real physical object" way too early (well, one should really never do so, but doing it too early is even more damaging). But Nature never makes such mistakes. It remembers the wave function which knows about all the possible correlations between all the degrees of freedom, which knows about all the relative phases because they could matter, and only when an observable question has to be answered does it calculate the right answer. The right calculation looks very different from any kind of reasoning in a classical world but it isn't too hard; it's really straightforward and in all situations in which classical physics used to work, it still gives the same answer (with tiny corrections).

When the initial wave function for the charged particle in a cloud chamber is spherically symmetric, it doesn't imply that spherically asymmetric configurations of the bubbles at the end are forbidden, i.e. predicted to have vanishing probabilities. On the contrary, we may prove (the right verb really is "calculate" because the proof boils down to the calculation of an expectation value of a projection operator) that the distribution of the bubbles will be spherically asymmetric – a semi-infinite line in one direction. There is no contradiction because the initial wave function isn't a real object such as a classical field, stupid. It's a quantum-generalized probability distribution. A spherically symmetric probability distribution (on a sphere) doesn't mean that the actual objects such as the particles (or, later, the bubbles they will create) are spherically symmetric. Instead, it means that the probability that the objects are found in one direction is the same as it is for another direction. But because the particle may be shown to pick a single direction, we know that the actual measurements of positions will inevitably be spherically asymmetric.

Is it really so hard to understand that the wave function in quantum mechanics is a generalization of a probability distribution – and not a generalization of a classical field? It encodes the information about the physical system, not the shape of the object itself. It is not really difficult to learn these things but some people just don't want to.

And that's the memo.