Friday 8 February 2013
Whether some people like it or not, science has demonstrated that reductionism works. The laws governing the evolution of increasingly complex – and, typically, geometrically larger – objects (laws that are increasingly vague, riddled with uncertainties, errors, and exceptions) may be reduced to the laws describing ever smaller and ever more elementary building blocks (i.e. to laws that are increasingly more fundamental, accurate, and universal).

In fact, one may approximately list the disciplines of science in a hierarchical tree in which the arrow ↑ means that the discipline at the beginning of the arrow may be reduced to the discipline at the end of the arrow.




The sequence may look like this:

string theory
↑
beyond the Standard Model QFT
↑
the Standard Model
↑
nuclear and subnuclear physics
↑
atomic physics and chemistry
↑
biochemistry and microbiology
↑
neuroscience
↑
psychology
↑
economics
↑
science about religions and mass delusions
And so on, and so on. I could make the tree more diverse and add branches. Note that something close to God appears at the end of the list above, so He is highly non-fundamental, despite some people's beliefs. Of course, one could also make God more fundamental and place Him above string theory. But that would be a slightly different God, a more scientific and less compassionate one. ;-)

(Some other people may choose the opposite convention for the arrow; independently of that, they could also sort the entries upside down. But I will keep my arbitrary ordering and use it in many sentences.)

The entries near the beginning of the list will be called "more fundamental" and the entries near the bottom will be "less fundamental". You should note that the objects and concepts discussed by a more fundamental discipline are typically smaller – associated with shorter distance scales – than the objects and concepts in the less fundamental disciplines of science.

A conceptual breakthrough in quantum field theory has formalized this rule. In the 1970s, Ken Wilson and others realized that one may use a whole tower of generalized descriptions of some phenomena in Nature. Biology or theology is never among them; all these descriptions are quantum field theories (or models of statistical physics that obey rules somewhat similar to those of quantum field theories, at least in this Wilsonian enterprise). But they are the so-called "effective quantum field theories" that neglect all the inner structure of objects that are smaller than a distance scale \(L\) associated with the effective quantum field theory. So an effective quantum field theory should be OK for a description of phenomena whose typical distance scale is of order \(L\) as well as those whose typical distance scale is longer than \(L\) (the latter phenomena must be derived by some possibly complicated calculation). However, the effective field theory says nothing – and remains agnostic – about everything that happens at distance scales shorter than \(L\). The key fact is that it's often possible to formulate laws governing the phenomena at long distances that are well-defined (either completely or up to parametrically small errors) despite the uncertainty about many things at shorter distance scales.

An amusing and important fact is that you may consider two effective field theories for the same situations whose values of \(L\) are almost equal to one another but not quite: they differ infinitesimally. You may switch from \(L\) to a slightly longer \(L(1+\varepsilon)\). The effective field theory with the longer value of \(L\), i.e. the latter one, may be derived from the former one by "integrating out" the degrees of freedom (the integration is a partial integration in the Feynman path integral over fields) that are long enough to be described by the former effective quantum field theory but too short to survive in the latter one (the one with the longer distance scale), where they're "ignored".

(The opposite derivation – the derivation of the short-distance theory from the long-distance theory – isn't possible in general because the long-distance theory has "forgotten" about some details, especially some new particle species that appear at one \(L\) or another. In the most general case, the "integrating out" is therefore an irreversible process and the procedures of "integrating out" therefore form a one-way "semigroup" which is a reason why the term "renormalization group" is mathematically misleading but still accurate enough for a physicist.)
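
To make the "integrating out" tangible, here is a toy example that is not from the post: the standard decimation of the 1D Ising model, where summing over every other spin exactly maps a nearest-neighbor coupling \(K\) to a new coupling \(K'\) with \(\tanh K' = \tanh^2 K\). Iterating this step is a minimal sketch of a Wilsonian RG flow (the function names are mine): the coupling flows toward the trivial \(K=0\) fixed point as the distance scale grows.

```python
import math

def decimate(K):
    """One exact RG step for the 1D Ising model: sum over every other spin.
    The nearest-neighbor coupling flows as tanh(K') = tanh(K)^2."""
    return math.atanh(math.tanh(K) ** 2)

for K0 in [1.5, 2.0, 3.0]:   # several microscopic (short-distance) couplings
    K = K0
    for step in range(6):    # each step doubles the lattice spacing L
        K = decimate(K)
    print(f"K0 = {K0}:  after 6 steps (L -> 64 L), K = {K:.6f}")
```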

Because we have changed the description (in particular, its characteristic distance scale \(L\)) infinitesimally, the theory will only change infinitesimally. Unless something special was happening exactly at the scale \(L\), it means that the qualitative spectrum (the list of fields and particle species) remains unchanged. However, the parameters of the theory – the masses, the values of the coupling constants, and other parameters – do change. They only change infinitesimally, by terms proportional to \(\varepsilon\). But you may repeat the same step many times, which effectively makes the ratio of the two distance scales finite – in other words, the accumulated \(\varepsilon\) is pretty much finite. The coupling constants then typically depend on \(L\) either as power laws or (the classically dimensionless ones) logarithmically. This dependence – e.g. of \(1/g^2\) on \(\log L\) – is known as the "running of the couplings" and it is very important for calculations of the gauge coupling unification in grand unified theories as well as all other analogous calculations that relate the values of parameters in the natural, short-distance-based theory to the parameters in an effective theory that is directly relevant for "low energies" such as those at the LHC.
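
As a numerical illustration of the logarithmic running (a sketch with illustrative inputs, not a precision fit): at one loop, a gauge coupling obeys \(1/g^2(\mu) = 1/g^2(\mu_0) + \frac{b_0}{8\pi^2}\ln(\mu/\mu_0)\), or equivalently \(1/\alpha(\mu)=1/\alpha(\mu_0)+\frac{b_0}{2\pi}\ln(\mu/\mu_0)\) for \(\alpha=g^2/4\pi\). The snippet below runs a QCD-like coupling across scales; a more careful calculation would change \(n_f\) at the quark thresholds.

```python
import math

def alpha_s(mu_GeV, alpha_ref=0.118, mu_ref_GeV=91.2, n_f=6):
    """One-loop running coupling:
    1/alpha(mu) = 1/alpha(mu_ref) + (b0 / 2 pi) * ln(mu / mu_ref),
    with b0 = 11 - 2 n_f / 3 for an SU(3) gauge theory with n_f quark flavors."""
    b0 = 11.0 - 2.0 * n_f / 3.0
    return 1.0 / (1.0 / alpha_ref
                  + b0 / (2.0 * math.pi) * math.log(mu_GeV / mu_ref_GeV))

for mu in [10.0, 91.2, 1000.0, 1.0e16]:   # from collider scales to a GUT-like scale
    print(f"mu = {mu:9.3g} GeV  ->  alpha_s ~ {alpha_s(mu):.4f}")
```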

(Yes, \(13\TeV\) at the LHC from 2015 may look like high energy to some people – and that's why particle physics is known as "high energy physics" – but in this classification, it's referred to as "low energy" because it's still closer to the mundane energies we know from our everyday lives than to high energies such as the Planck energy whose self-evident experimental inaccessibility is sometimes criticized by idiots who just hate the fact that \(10^{19}\) is a large number and want to vote this fact and most other facts out of existence.)

But let's stop these possibly complicated comments. My main point is that in quantum field theory, shorter distance scales are the more fundamental ones. The shorter the distances you may experimentally probe (which requires ever higher accelerator energies) or describe by your effective theories (which requires an ever deeper and more comprehensive knowledge of the physical phenomena that have already been seen and those that may only be clearly seen in the future), the closer you are to the most fundamental laws of Nature.

We have to ask: Does this rule always work?

Needless to say, the answer is No according to quantum gravity and/or string theory – which is ultimately the same thing. In quantum field theory, we needed to go an "infinite distance" on the log scale if we wanted to reach the ultimate fundamental laws. But note that the word "infinite" appears in the previous sentence and "infinities" or "singularities" mean that a particular description is incomplete. When you switch to a more well-behaved description, such infinities are replaced by finite numbers. The "infinity" in this paragraph is no exception. The final theory is a finite distance away!

Why is it so? Well, it's true because distances shorter than a particular constant with the units of one meter don't "exist" in the operational physical sense or, to say the least, the geometry at the would-be shorter distances obeys rules that are unfamiliar from our everyday experience accumulated in the world of rather long distances.



John Wheeler's "quantum foam" meme – the last part of the picture above – is a way to imagine how geometry gets modified when we're approaching the distances close to \(10^{-35}\) meters, the Planck length. Note that even the topology of the curved space (and spacetime) gets variable. The characteristic size of the tunnels and handles is comparable to one Planck length, too.

However, the picture above is still way too classical. You should imagine that the true picture is fuzzier, more probabilistic, and the very concept of "distances" becomes somewhat ill-defined and inappropriate to describe everything that is going on. Every classical "picture" you may imagine is clearly way too constraining and naive. Nevertheless, let's see that the proper distances start to "brutally fluctuate" once we focus on distance scales comparable to the Planck length.

We begin with the Einstein-Hilbert action for general relativity\[

S = \int \dd^d x\,\frac{R}{16\pi G} \sim \frac{1}{G} \int \dd^d x\,[\partial(\eta+h)]^2

\] in \(d\) spacetime dimensions. I wrote the metric tensor as \[

g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}

\] i.e. as the sum of a background metric – imagine the flat Minkowski metric as the most important example – and a variation \(h_{\mu\nu}\). Also, we appreciated that the Ricci curvature scalar \(R\) is bilinear in the spacetime derivatives – at least, that's the number of derivatives in the terms that are most important at long distances (and in mildly curved spacetimes). At the end, perhaps after some integration by parts, these leading terms produce the usual bosonic kinetic terms of the form \((\partial h)^2\) that only differ from the Klein-Gordon kinetic term by subtleties involving the contraction of the Lorentz vector indices.

We may now ask: If we consider a line interval whose length is approximately \(L\) in the space, how much will the proper length of this interval fluctuate because of the quantum fluctuations in between? We only want an order-of-magnitude estimate. Well, write \(h=\sqrt{G}H\) to get rid of the \(1/G\) prefactor in the action. The relevant Lagrangian is then simply \((\partial H)^2\) and depends on no dimensionful parameters. Take a Fourier mode of the field \(H\) with wavelength \(L\). What is the characteristic \(\Delta H\)?

Because \(H\) is dimensionful – it inherited the units of \({\rm length}^{1-d/2}\) from the square root of Newton's constant – and because there's no other dimensionful parameter that \(\Delta H\) could depend upon (the action is written without extra dimensionful parameters; we have even eliminated \(G\)), the dimensional analysis dictates that it must be proportional to the appropriate power of \(L\):\[

\Delta H \sim L^{1-d/2}.

\] Note that for \(d=4\), this scales as \(1/L\). By our simple relationship between \(h\) and \(H\), this translates to\[

\Delta h\sim L^{1-d/2}G^{1/2}.

\] In particular, \(\Delta h\sim\sqrt{G}/L\) in \(d=4\). But \(\Delta h\) is nothing else than \(\Delta L/L\), the relative uncertainty of the proper length because \(h\) itself is the quantum variation of the metric tensor. And \(\sqrt{G}/L\) is nothing else than \(L_{\rm Planck}/L\). It means that the relative error in \(L\) decreases as \(1/L\) and it becomes comparable to 100 percent if \(L\) itself is comparable to the Planck length!
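
It's easy to put numbers on \(\Delta L/L\sim L_{\rm Planck}/L\) in \(d=4\); the following back-of-the-envelope sketch only uses the standard SI constants:

```python
import math

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

L_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length: {L_planck:.3e} m")

# relative quantum fluctuation of a proper length L: Delta L / L ~ L_Planck / L
for L in [1.0, 1.0e-15, 1.0e-30, L_planck]:
    print(f"L = {L:9.3e} m  ->  Delta L / L ~ {L_planck / L:.3e}")
```

For a meter stick, the fluctuation is a ridiculous \(10^{-35}\) or so; it reaches order one exactly when \(L\) itself is Planckian.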

Consequently, you can't really measure distances with a better accuracy than the Planck length. The Heisenberg uncertainty principle won't allow you such a feat. In an operational sense, the proper distances (much) shorter than the Planck length don't really exist. Those comparable to the Planck length marginally exist – they partly exist, partly don't exist. (These comments only apply to proper distances in some "invariant frame" such as the center-of-mass frame. All the people who claim that coordinates – and coordinates are not necessarily proper lengths of anything – can't be expressed with better-than-Planckian precision or that the wavelength of the photon can't be shorter than the Planck length misunderstand this business completely. The wavelength of a photon may be arbitrarily short, of course: you may always make it even shorter by the Doppler shift i.e. by going into a more boosted frame.)

But can't we probe ever shorter distances by paying ever larger amounts of money for ever stronger particle accelerators? Up to a point, yes, we can. But beyond that point, we can't. If we collide two particles (in the LHC case, two protons) with the center-of-mass energy \(E\), they (or the quarks inside them) may exchange momentum comparable to \(E\), which means that the process is sensitive to virtual particles of mass comparable to \(E\), too.

However, if you built a (cosmic size) collider that would accelerate the particles to \(E\) comparable to the Planck energy which is about \(10^{19}\GeV\), things would change. You would expect that such a high-energy collider would test physics at distances comparable to the Planck length. However, the energy carried by the protons would be the Planck energy, the corresponding mass via \(E=mc^2\) would be the Planck mass, and the Schwarzschild radius for the Planck mass is the Planck length. The Schwarzschild radius for a mass is the radius of a ball such that if you squeeze the mass to this ball (or a smaller one), you will unavoidably create a black hole.

And that's where we stand with two colliding particles whose center-of-mass energy approaches the Planck energy: you start to create Planck-length-sized black holes.

They're really the smallest black holes that may be discussed as "pretty ordinary black holes", marginally so. More precisely, proper black holes should always be larger (or much larger) than the Planck length. One reason is that a Planck-sized black hole evaporates in about one Planck time, almost instantly; an even smaller black hole would have to evaporate in an even shorter time and the required "speed" of the evaporation would exceed the speed of light. It makes no sense to consider an object that would decay more quickly than the time that light needs to traverse it. ;-)
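
A hedged numerical check of that claim: the standard Hawking lifetime of a Schwarzschild black hole emitting massless quanta is \(t_{\rm evap}\approx 5120\pi\,G^2M^3/(\hbar c^4)\), while the light-crossing time of the horizon is \(\sim 2GM/c^3\). Their ratio scales as \((M/M_{\rm Planck})^2\), so the two cross near the Planck mass (don't take the \({\mathcal O}(10^3)\) prefactor too seriously):

```python
import math

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

M_planck = math.sqrt(hbar * c / G)   # ~2.2e-8 kg

def t_evaporation(M):
    """Standard Hawking evaporation time for a Schwarzschild black hole."""
    return 5120.0 * math.pi * G**2 * M**3 / (hbar * c**4)

def t_crossing(M):
    """Time for light to cross the Schwarzschild radius, R/c = 2GM/c^3."""
    return 2.0 * G * M / c**3

for x in [1.0e6, 1.0, 1.0e-2]:       # mass in units of the Planck mass
    M = x * M_planck
    ratio = t_evaporation(M) / t_crossing(M)
    print(f"M = {x:8.0e} M_Planck  ->  t_evap / t_cross ~ {ratio:.3e}")
```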

Can you probe distances shorter than the Planck length by increasing the energy of the two protons above the Planck energy? Nope. If you do so, you create an ever larger black hole. The black hole size \(R=2GM/c^2\) is an increasing function of the black hole mass and because \(E=Mc^2\), a larger energy of the proton translates to a larger mass and a larger black hole. The "inner architecture of matter" will be shielded by an event horizon of an increasing size. Instead of getting to shorter distances, you will be producing increasingly large black holes whose distance resolution will therefore start to drop again. The Planck length is the minimum proper distance you may (marginally) achieve by particle collisions.
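
Both regimes may be packaged into one crude formula for the distance resolution of a collision at center-of-mass energy \(E\): below the Planck energy, the resolution improves like the quantum wavelength \(\hbar c/E\); above it, the resolution degrades like the Schwarzschild radius \(2GE/c^4\) of the produced black hole. Taking the larger of the two terms is a caricature of the full interpolation, but it shows the minimum at the Planck scale:

```python
import math

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

E_planck = math.sqrt(hbar * c**5 / G)    # ~2e9 J, i.e. ~1.2e19 GeV

def resolution(E):
    """Crude distance resolution of a collision with CM energy E (in joules):
    the larger of the quantum wavelength and the Schwarzschild radius."""
    return max(hbar * c / E, 2.0 * G * E / c**4)

for x in [1e-4, 1e-2, 1.0, 1e2, 1e4]:    # energy in units of the Planck energy
    print(f"E = {x:8.0e} E_Planck  ->  resolution ~ {resolution(x * E_planck):.3e} m")
```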

An interesting fact is that if you pump an energy that vastly exceeds the Planck energy into the colliding particles, you will create macroscopic black holes. And their curvature radius is comparable to the black hole radius. If the black holes are very large, the curvature radius becomes large and the curvature itself gets tiny. It means that general relativity – the theory that is reliable for low energies and for low curvatures – becomes arbitrarily accurate once again.

That is why the Einstein-Hilbert action (and comparable terms describing other fields and their interactions, optimized for low energies) is good not only for very low energies of the colliding protons but also for very high energies of the colliding protons – because you produce black holes that admit a general relativistic "classical" description again. Only the transition regime, the intermediate range of energies comparable to the Planck energy (and distances comparable to the Planck length, the shortest possible ones), remains "completely unknown" to classical general relativity. Whenever the center-of-mass energy is either much lower or much higher than the Planck energy, classical general relativity becomes an arbitrarily accurate approximation!

But the intermediate regime which is not calculable by classical tools – because the influence of quantum mechanics on everything is of order 100 percent – is completely crucial because it has to solve a difficult task: it must smoothly interpolate between the two "favorite places of general relativity" where Einstein's 1915 theory loves to apply itself with an impressive accuracy. It must smoothly interpolate between the very low energies and the very high energies. Energies comparable to the Planck energy are where the bulk of the "quantum gravity phenomena" take place in their full glory.

It's important to notice that the fact that classical general relativity (with a nearly classical background metric) should apply in both extreme regimes – very low energies and very high energies – is a highly nontrivial constraint on any candidate theory of quantum gravity you could propose. For example, you could think that you may build a theory of quantum gravity by starting with a long-distance theory and extending it to shorter distances in some arbitrary way. However, that wouldn't give you a consistent theory because these laws would behave at very high, trans-Planckian energies according to the desires of this random theory instead of in the right way, which is the production of ever larger black holes of the right shape.

We may compare the right theory of quantum gravity to a gravitational slingshot. A spaceship coming from the Earth (=behavior of the theory at very low energies) is approaching Jupiter and we want to exploit the largest planet to redirect the spaceship so that it continues to Mars (=behavior of the theory at very high energies). Clearly, the initial velocity has to be very nicely adjusted for the slingshot to end up with the desired outcome. Quantum gravity is analogous. Things at low energies must be very special so that if you extend them to much higher energies, they start to behave in the pre-guaranteed black-hole way rather than an arbitrary wrong way.

When we say "quantum gravity", we usually mean a description of quantum gravity that remains ambiguous or unknown exactly in the transitional regime near the Planck energy. However, whenever we use the otherwise equivalent notion "string/M-theory", we are talking about a much more specific description that can tell us not only what happens at very low energies (negligible gravity etc.) and very high energies (black hole production) but also the Planck-sized energies in between.

For example, M(atrix) theory and the AdS/CFT correspondence provide us with equations that allow us, at least in principle, to calculate the spectrum of black hole microstates for the smallest, Planck-sized black holes exactly when they're in the process of becoming black holes worthy of this name coined by Wheeler.

Perturbative string theory is a theory in which weakly coupled strings interact via interactions suppressed by powers of the string coupling constant \(g\equiv g_{\rm closed}\), which is taken to be much smaller than one (or, in other conventions, than an appropriate power of the string length) if the perturbative expansion is to be well-behaved. It's the first historically known consistent description of quantum gravity except that Newton's constant goes like\[

G \sim g_{\rm closed}^2 \cdot l_{\rm string}^8

\] in \(d=10\) which means that gravity is very weak in the stringy perturbative regime \(g_{\rm closed}\to 0\), too. While perturbative string theory allows us to see many new physical phenomena that are unfamiliar from ordinary non-gravitational quantum field theories, it's not quite true that we may use it to see the quantum foam or the extremely fluctuating curved geometry of near-Planckian black holes. Because the strength of gravity is suppressed by another small parameter \(g_{\rm closed}\ll 1\), the strings in weakly coupled string theory are rather large (and therefore distant from each other) objects whose gravity is rather weak.

Just to be sure, this hierarchy goes away for \(g_{\rm closed}\sim{\mathcal O}(1)\) and we have known how to understand string theory in this regime – at least in many vacua – for more than a decade, too.

However, even if you stay in the \(g_{\rm closed}\ll 1\) regime where gravity is weak, string theory offers you lots of phenomena that emulate the general relativity's interpolation between very low and very high energies. There is a difference, however: the intermediate distance scale is \(L_{\rm string}\) which is much longer than the Planck length. They differ by a coefficient such as\[

L_{\rm Planck}\sim L_{\rm string}\cdot g_{\rm closed}^\alpha

\] where the exponent \(\alpha\) is positive, which makes the Planck length much shorter than the string length. Weakly coupled (\(g_{\rm closed}\ll 1\)) string theory "protects" us against the mysterious phenomena near the extreme \(L\sim L_{\rm Planck}\) distance scale by surrounding this very short distance scale with a longer one, \(L_{\rm string}\), and by producing lots of new phenomena at this longer distance scale, the string length.
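
In fact, you may read off the exponent from the formula for Newton's constant quoted above. In \(d=10\) (and \(\hbar=c=1\) units), \(G\) has units of \({\rm length}^{8}\), i.e. \(G\sim L_{\rm Planck}^{8}\), so\[

L_{\rm Planck}^8 \sim g_{\rm closed}^2\, L_{\rm string}^8 \quad\Rightarrow\quad L_{\rm Planck} \sim g_{\rm closed}^{1/4}\, L_{\rm string},

\] i.e. \(\alpha=1/4\) in ten dimensions. The precise value of \(\alpha\) depends on the spacetime dimension but it stays positive.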

And they're very interesting and rich phenomena, indeed. Note that for \(g_{\rm closed}\sim 1\), the transition near \(L\sim L_{\rm string}\) is really the same thing as the quantum gravitational transition near \(L\sim L_{\rm Planck}\). However, let's assume that \(g_{\rm closed}\ll 1\) (weakly coupled string theory) and look at the string theory's genes that are responsible for its ability to interpolate between the very low and very high energies (relatively to the string scale).

One of these long-short relationships that are often discussed in the popular literature is T-duality. If you compactify one of the spacetime dimensions in perturbative string theory on a circle of radius \(R\), you get pretty much the same theory as the one where the dimension is compactified on a circle of radius\[

\tilde R = \frac{ L_{\rm string}^2 }{ R}.

\] If you use the stringy units (for distance and other things) where \(L_{\rm string}=1\) (you can't simultaneously set \(L_{\rm Planck}=1\) because they're different distances, so you may use string units or Planck units but not both!), then you have simply \(\tilde R = 1/R\). Let's work in string units in the text below.

(I am neglecting the technicality that by switching to the inverse radius, you must choose a "different kind of string theory" in general. For example, type IIA string theory on a circle of radius \(R\) is equivalent to type IIB string theory on a circle of radius \(\tilde R\).)

Why are the theories with a large radius \(R\gg L_{\rm string}\) and a small radius \(\tilde R=1/R\ll L_{\rm string}\) exactly equivalent to each other? Well, strings may have a momentum along the circular dimension of the spacetime, much like point-like particles. The momentum is quantized (for the same reason why \(j_z\) is quantized etc.: the wave function has to be single-valued). You have \(P=n/R\), just like if the strings were point-like particles.

However, strings may also wind around the circle \(w\) times. This gives them an extra (or minimum) length \(2\pi R\cdot w\) which also gives an extra/minimum mass to the string (much like the momentum does) and which is directly proportional to \(R\), unlike the momentum \(P=n/R\) which is inversely proportional to it. Well, the interchange of the integer labels \(n,w\) for the momentum and the winding, composed with the inversion of the radius \(R\to 1/R\), pretty much gives you the same spectrum (the list of allowed masses) back. The interactions are invariant under this map, too.
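
Here is a minimal numerical check of that claim for the momentum-plus-winding part of the mass formula in string units, \(M^2=(n/R)^2+(wR)^2\); the oscillator contributions, which don't feel the swap, are omitted. The spectrum computed at radius \(R\) coincides with the spectrum computed at radius \(1/R\) with \(n\) and \(w\) interchanged:

```python
def mass_squared(n, w, R):
    """Momentum + winding contribution to M^2, in string units (L_string = 1)."""
    return (n / R) ** 2 + (w * R) ** 2

R = 3.7                      # any radius, in units of the string length
charges = range(-4, 5)       # a window of momentum / winding integers

spectrum_at_R     = sorted(mass_squared(n, w, R)
                           for n in charges for w in charges)
spectrum_at_inv_R = sorted(mass_squared(w, n, 1.0 / R)   # n and w swapped
                           for n in charges for w in charges)

# T-duality: the two spectra agree (up to floating-point noise)
print(all(abs(a - b) < 1e-12 for a, b in zip(spectrum_at_R, spectrum_at_inv_R)))
```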

When you try to visualize things, the momentum \(n\) (how much the string moves) looks like something entirely different than the winding number \(w\) (how many times the string is encircling the spacetime circle). However, the best picture is an equation. And the equations of string theory imply that regardless of your inner fantasies, the physical phenomena will not depend on whether you decide that \(n\) is the momentum and \(w\) is the winding number or vice versa!

This remarkable map means that if you try to shrink the radius of circular spacetime dimensions beneath the string length, you won't get any new choices. Every \(R\lt 1\) may be inverted to get an equivalent radius \(1/R\gt 1\). A "dualities vs singularities" discussion shows that this is true for all extreme regimes in which the independent radii of tori are scaled to zero or infinity at arbitrary rates. All such extreme tori may be mapped to equivalent tori. It's always more natural to pick the equivalent description in which all the radii are large – because dimensions much larger than the string length behave much like what we're used to from point-like-particle quantum field theories.

But T-duality is far from being the only example of a phenomenon by which string theory relates sub-stringy distances to ordinary longer-than-stringy distances. An even more characteristic example is the relationship between the light and heavy states running in quantum loops. That relationship is, among other things, the explanation of why string theory has no ultraviolet (short-distance) divergences.

In string theory, Feynman diagrams – which may be interpreted as a spacetime history of splitting and joining world lines of point-like particles propagating in the spacetime – are replaced by two-dimensional world sheets, tubes that split or merge much more smoothly. The simplest "loop diagram" in string theory (with closed strings only) is a torus. In a particle-like limit, the torus becomes a thin ring and then a circle – the same circle you may find in 1-loop Feynman diagrams for quantum field theories.



However, the stringy loop diagram, a torus, may be drawn as a rectangle (or a square) with the horizontal (red) edges identified with one another; and the vertical (blue) edges identified with one another. Just like in the Pac-Man video game, if you cross the upper boundary, you reappear at the bottom and vice versa. If you cross the left boundary, you reappear on the right side and vice versa. As the Czech fans of Pepa Nos know, the Earth is round (and periodic) in all these respects. ;-)

If you want the torus to resemble a history of a point-like particle in quantum field theories, it had better be a thin torus: the much shorter edge of this torus should be interpreted as the (negligible) spatial dimension of the string while the longer dimension should be interpreted as the (temporal) direction of the Euclideanized time.

Now, there may also be spacetime tori (histories of strings, i.e. world sheets) that are extremely thick, in the opposite limit. But you may draw them as rectangles, rotate the rectangle by 90 degrees, and exchange the interpretations of the two sides of the rectangle! A very thick torus would be the part of the path integral that could contribute UV divergences, but thanks to this 90-degree rotation, you may reinterpret all of them as infrared divergences (which may always be there a priori but they either cancel or have an innocent physical interpretation).

Because string theory contains no new integral over the "ultraviolet" shape of the tori – because the very thick tori are really very thin, just with a rotation by 90 degrees used to reinterpret the history – string theory is automatically free of short-distance divergences. My proof may look sloppy or heuristic to you and it arguably is both sloppy and heuristic. However, it's a description of something you may actually prove rigorously – and it has been proven rigorously.

Due to the fact that the 90-degree rotation is a symmetry of the world sheet (a gauge symmetry – a "large diffeomorphism" – of the world sheet theory, in fact), the path integral over toroidal world sheets which is a trace (thermal partition sum) may be interpreted in two ways that have to be equal:\[

{\rm Tr} \exp(-\beta H) = {\rm Tr} \exp(-H/\beta).

\] This really means that if you invert the temperature \(T\) or inverse temperature \(\beta\), you get back to the same partition sum (or, more generally, partition sum of another string theory). So just like the radii shorter than the string length aren't new, the world sheet temperatures above a self-dual value are copies of the very low ones, too. An extremely high temperature is the same thing as an extremely low temperature.

(These are not actual temperatures relevant for the physics in spacetime; these are temperatures for world sheet physics; the inverse temperature here should really be denoted by letters such as \(\tau\) and not \(\beta\). I don't want to go into these technicalities here.)
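
The statement may be made completely precise using the Dedekind eta function \(\eta(\tau)=q^{1/24}\prod_{n\geq 1}(1-q^n)\) with \(q=e^{2\pi i\tau}\), the standard building block of these toroidal partition sums: it satisfies \(\eta(-1/\tau)=\sqrt{-i\tau}\,\eta(\tau)\), and the substitution \(\tau\to -1/\tau\) is exactly the 90-degree rotation of the rectangle. A quick numerical check (the truncation order of the infinite product is arbitrary):

```python
import cmath

def eta(tau, terms=200):
    """Dedekind eta function eta(tau) = q^(1/24) * prod_{n>=1} (1 - q^n),
    q = exp(2 pi i tau), with the product truncated after `terms` factors."""
    q = cmath.exp(2j * cmath.pi * tau)
    result = cmath.exp(2j * cmath.pi * tau / 24.0)
    for n in range(1, terms + 1):
        result *= 1 - q ** n
    return result

tau = 0.3 + 0.8j                       # any point in the upper half-plane
lhs = eta(-1.0 / tau)
rhs = cmath.sqrt(-1j * tau) * eta(tau)
print(abs(lhs - rhs))                  # numerically zero: tau -> -1/tau is a symmetry
```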

A funny fact is that if \(\beta\) is very high, only the states with a small value of \(H\) (very light string states) contribute to the trace on the left hand side above because all other states are heavily exponentially suppressed. On the other hand, \(1/\beta\) is very low in the same case and the right hand side has contributions from lots of excited, high-energy string modes. It primarily counts their number or "entropy".

So the seemingly trivial fact that the rectangle may be rotated by 90 degrees has a remarkable implication for any perturbative string theory: the density of high-mass states in the string spectrum is actually fully determined by the energy and other properties of a few low-mass excitations of the string!

(Similar relationships exist for the cylindrical world sheet. In that case, the two related sides of the story have a different interpretation.)

This fact – the symmetry of the toroidal path integral with respect to the 90-degree rotation – is known as "modular invariance". Classically, the invariance of the path integral over the torus "automatically holds". However, it may have quantum anomalies. They must cancel in a consistent string theory. This fact constrains the spacetime dimension; it dictates that winding modes (and twisted strings and other sectors of boundary conditions) must automatically arise whenever you try to compactify some dimensions (or quotient the spectrum or impose a GSO-like projection); and it implies that the tori on which the heterotic strings' chiral dimensions are compactified must be derived from even self-dual lattices, and so on. It's a very powerful constraint.

Clearly, I got too technical and you need to study string theory properly – e.g. with a textbook – to really understand these issues. The basic toroidal modular invariance is actually enough to prove similar relationships at all loops.

However, I want you not to overlook the fact that these relationships between the light string states and heavy string states and similar UV/IR relationships in string theory are a toy model – a very explicitly understood cousin – of the relationship between the low-energy/high-energy behavior of the theory of quantum gravity; recall the black hole production discussion at the beginning. (A similar UV/IR connection has been observed in quantum field theories on non-commutative geometries, too.)

If you try to imagine that "quantum gravity" is a concept that is much more general than string/M-theory, it must still obey the UV/IR connections in some way except that the explicit "toroidal path integral" of perturbative string theory is replaced by something much more general and perhaps "much less two-dimensional", "much less constructive", and "much less geometric". But it's still true that quantum gravity is a very rare, heavily constrained animal.

And that's the memo.
