
#### From Perturbative to Constructive Renormalization

**By Vincent Rivasseau**

**PRINCETON UNIVERSITY PRESS**

**Copyright © 1991 Princeton University Press**

All rights reserved.

ISBN: 978-0-691-08530-2


CHAPTER 1

**The Ultraviolet Problem**

* * *

Quantum field theory is an attempt to describe the properties of elementary "point-like" particles in terms of relativistic quantum fields. It is now widely believed to offer a coherent mathematical framework for relativistic models (like the "standard *U*(1) × *SU*(2) × *SU*(3) model"). These models include all the particles and interactions observed up to now except gravity. Therefore, together with general relativity, field theory is the backbone of our current understanding of the physical world. In the future a new, more unifying framework may be adopted, like the currently promising superstring theory, which is a relativistic and quantum description of extended one-dimensional objects instead of point-like particles; nevertheless even in this case it is extremely likely that field theory will remain important in many situations, just as classical mechanics is still today.

This situation is relatively recent. Until the 70's the very statement that quantum field theory might provide a coherent mathematical framework at all was not widely accepted. The main doubts on the mathematical consistency of quantum field theory were due to the persistence of ultraviolet problems (and to the lack of successful models for strong interactions: QCD, the present field theory of strong interactions, did not exist). Let us sketch what these ultraviolet problems are and why they are important.

An ultraviolet problem is one which is due to the existence of arbitrarily small length scales, or equivalently of arbitrarily large frequencies in the Fourier analysis of a theory. Such problems are inherent to the formalism of quantum field theory, because it is a crucial assumption that the fields live on a continuous space time. One might wonder whether this continuity condition has anything to do with physics and whether the whole problem is not a mathematical artifact. After all it is reasonable to expect that space time will conserve its smoothness only until the Planck scale, where quantum aspects of gravitation might distort it significantly. This Planck scale might provide a physical ultraviolet cutoff; this is what seems to occur in superstring theories where it is conjectured that at least in perturbation theory, there are no ultraviolet divergences. However the Planck scale is much higher than the typical scales that field theory tries to describe, and is completely inaccessible to direct experiments.

In fact the most compelling reason for which we are interested in the continuous formulation of field theory is the same for which we are interested in the thermodynamic limit of statistical systems. In statistical mechanics this limit corresponds to systems of infinite volume. We know that in nature macroscopic systems are in fact finite, not infinite, but they are huge with respect to the atomic scale. The thermodynamic limit is an adequate simplification in this case, since it allows one to give a precise mathematical content to the physically relevant questions (like dependence of the limit on boundary conditions, existence of phase transitions, etc.). Since a limit has been taken, the power of classical analysis may be applied to these questions. It would be much harder and less natural to try to define the analogous notions for a large finite system, just as it is difficult and often inappropriate to make discrete approximations to some typically continuous mathematics like topology.

From this point of view the ultraviolet problem appears central and inescapable in field theory. A limit has to be performed; if this limit were not to exist, the corresponding mathematical formalism would be of little interest.

Historically quantum field theory was plagued by two successive ultraviolet "diseases" which raised doubts on the existence or consistency of the ultraviolet limit. In both cases the situation looked bad for many years until a way out of the crisis was found. The first and most famous ultraviolet disease has been recognized almost since the birth of quantum field theory. It is the presence of divergences due to the integration over high momenta in the loops of Feynman integrals. In the $\varphi^4_4$ theory which will be discussed soon, one of the simplest of these divergences is the divergence of the second-order graph which we call the "bubble" (see Fig. 1.1.1).

By momentum conservation the amplitude for this graph is only a function of $k = k_1 + k_2$. In Euclidean space, this amplitude (apart from combinatoric coefficients discussed later) is given by the integral:

$$\int \frac{d^4p}{(p^2+m^2)\,\bigl((p-k)^2+m^2\bigr)}\,, \qquad (1.1.1)$$

which diverges logarithmically for large values of $p$. (Similar divergences occur, of course, in the more physical theory of quantum electrodynamics.)
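The logarithmic character of this divergence is easy to check numerically. The following sketch (Python, in units where $m = 1$; the cutoff values are illustrative) integrates the bubble at zero external momentum up to a momentum cutoff and verifies that each tenfold increase of the cutoff adds the same constant, $2\pi^2 \log 10$:

```python
import math

def bubble_amplitude(cutoff, m=1.0, steps=200_000):
    """Radial part of the Euclidean bubble at zero external momentum:
    2*pi^2 * integral_0^cutoff of p^3 dp / (p^2 + m^2)^2,
    where 2*pi^2 is the area of the unit 3-sphere in 4 dimensions."""
    dp = cutoff / steps
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) * dp            # midpoint rule
        total += p**3 / (p**2 + m**2)**2 * dp
    return 2 * math.pi**2 * total

# Each decade of cutoff contributes the same constant: the integral
# grows like 2*pi^2 * log(cutoff), i.e., it diverges logarithmically.
gaps = [bubble_amplitude(10.0**(j + 1)) - bubble_amplitude(10.0**j)
        for j in (1, 2, 3)]
print(gaps)   # each close to 2*pi^2*log(10)
```

The gap per decade converges rapidly to $2\pi^2\log 10 \approx 45.46$ once the cutoff is far above the mass, which is exactly the statement that the amplitude depends on the cutoff only through its logarithm.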

Around 1950, this disease was cured by the invention of perturbative renormalization by Feynman, Schwinger, Tomonaga, Dyson and others (see, e.g., [Dyl]). Basically it amounts to a redefinition of the physically observable parameters of the theory which pushes the infinities into unobservable "bare" parameters. It took more than a decade to put this perturbative theory of renormalization on a completely firm mathematical basis. Roughly speaking, the main result states that theories which are renormalizable from the naive power counting point of view can indeed be renormalized without changing the formal structure of the Lagrangian. More precisely one can replace the bare parameters of the Lagrangian by formal power series in the renormalized parameters (usually the coupling constant), so that the resulting perturbative expansion in the renormalized coupling is finite to all orders, as a formal power series. We call this important theorem the BPH theorem (Bogoliubov–Parasiuk–Hepp [BP][Hel]) although as usual it incorporates a lot of former work and was followed by important extensions and refinements. It was somehow a surprise to discover that this theorem, developed for quantum electrodynamics or the $\varphi^4_4$ model, also remained true for non-Abelian gauge theories ['tH1][LZ], in which case it is highly nontrivial to check that the counterterms required do not break gauge invariance, i.e., can be absorbed in a redefinition of the field strength and of the coupling constant. By power counting analysis alone, this would not be true: one has to incorporate additional information coming from Slavnov identities or BRS invariance [Tay][Sla][BRS]. The traditional proof (see [IZ]) relies on a dimensional regularization in which gauge invariance is maintained, but this approach has some drawbacks: the dimensional regularization is complicated and cannot be used up to now in a constructive (non-perturbative) program.

But quantum electrodynamics, the only firmly established field theory, has been plagued by another ultraviolet problem, raised in particular by L. Landau and other physicists of the Russian school. We call it the "renormalon" problem, although this name was introduced much later. It no longer occurs at the level of individual Feynman graphs, but it affects the perturbative series as a whole. In $\varphi^4_4$ (which is similar to electrodynamics in this respect), there are several ways to discover the problem. One of them is to consider the leading-log behavior of the renormalized 4-point function at $n$-th order, $S^n_{4,R}$, at large external momenta. Let $m$ be the mass of the particles. One finds (in Euclidean space):

$$S^n_{4,R}(k) \;\simeq_{k\to\infty}\; -\,g_R^{\,n}\,\beta_2^{\,n-1}\,\log^{n-1}(k^2/m^2)\,, \qquad (1.1.2)$$

where $g_R$ is the renormalized coupling constant and $\beta_2$ is a numerical coefficient (for the single-component $\varphi^4_4$ theory it is $9/2\pi^2$, or $3/16\pi^2$ if one writes, as usual, the interaction as $\varphi^4/4!$). Various ways of playing with formula (1.1.2) give rise to various troubles, all related.

When the 4-point function is inserted into a convergent loop like the triangular 6-point graph of Fig. 1.1.2, at zero external momenta, one obtains contributions to the $n$-th order of perturbation theory for the 6-point function proportional to

$$\int d^4k\; \frac{g_R^{\,n}\,\beta_2^{\,n-1}\,\log^{n-1}(k^2/m^2)}{(k^2+m^2)^3} \;\sim\; c\;g_R^{\,n}\,\beta_2^{\,n-1}\,(n-1)!\,. \qquad (1.1.3)$$

These contributions are not summable over $n$ (since they add up with the same sign they are also not Borel summable). Therefore the renormalon problem appears as a difficulty in summing perturbation theory. However one might also consider the asymptotic behavior in $k$ of the bare (unrenormalized) 4-point function:

$$S_{4,B}(k) \;\simeq_{k\to\infty}\; \beta_2\,g_B^{\,2}\,\log(k^2/m^2)\,, \qquad (1.1.4)$$

where $g_B$ is the bare coupling constant. Since in the theory of perturbative renormalization this function should be the counterterm of the theory, i.e., the difference between the renormalized coupling $g_R$ and the bare coupling $g_B$, one gets the formula:

$$g_R = \frac{g_B}{1 + \beta_2\,g_B\,\log(k^2/m^2)}\,. \qquad (1.1.5)$$

Therefore if $k \to \infty$, then $g_R \to 0$ (no matter how $g_B$ is chosen as a function of $k$). This is the phenomenon of "charge screening," also called the triviality problem for $\varphi^4_4$ or QED. In the ultraviolet limit, only the trivial noninteracting theory seems to exist. Still another possibility is to invert formula (1.1.5) to obtain

$$g_B = \frac{g_R}{1 - \beta_2\,g_R\,\log(k^2/m^2)}\,. \qquad (1.1.6)$$

Keeping $g_R$ fixed, $g_B$ diverges and becomes negative at an energy of order $m\,e^{1/(2\beta_2 g_R)}$; therefore the theory should become unstable or inconsistent at this "Landau" energy scale. The singularity in (1.1.6) is sometimes called a Landau ghost. But of course one should not really trust (1.1.6). One should rather remark that as $k \to \infty$, $g_B$ increases and becomes of order unity around the Landau energy, after which perturbation theory itself, and in particular (1.1.2)–(1.1.6), should no longer be valid. The behavior of $g_B$ is turned into a strong coupling problem, intractable up to now, except perhaps by Monte Carlo simulations.
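The Landau scale is explicit in these leading-log formulas. A small numeric sketch (Python; the value of $g_R$ is illustrative, $\beta_2 = 3/16\pi^2$ in the $\varphi^4/4!$ convention, and $m = 1$) evaluates the running bare coupling of (1.1.6) and locates the scale where it blows up:

```python
import math

beta2 = 3 / (16 * math.pi**2)   # one-loop coefficient, phi^4/4! convention
g_R = 0.1                       # illustrative renormalized coupling
m = 1.0

def g_bare(k):
    """Bare coupling required at scale k to keep g_R fixed
    (leading logs): g_B = g_R / (1 - beta2 * g_R * log(k^2/m^2))."""
    return g_R / (1 - beta2 * g_R * math.log(k**2 / m**2))

# The denominator vanishes at the "Landau" scale:
landau = m * math.exp(1 / (2 * beta2 * g_R))
print(landau)                    # astronomically large for small g_R
for k in (1e1, 1e3, 0.99 * landau):
    print(k, g_bare(k))          # g_B grows, then explodes near the Landau scale
```

For a small coupling the Landau scale is astronomically large, which is why the problem stayed academic for practical computations; but its mere existence is what casts doubt on the ultraviolet limit.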

Let us summarize the characteristic features of the renormalon problem. As remarked already, it affects the summability of perturbation theory as a whole (therefore physicists might call it a non-perturbative problem, although we would prefer to consider it as a "strong coupling" problem). It is truly an ultraviolet disease, because it does not occur in the theory with fixed ultraviolet cutoff, no matter how large. It is easy for instance to check that the $n!$ behavior of (1.1.3) appears neither in the bare nor in the renormalized perturbative series with fixed ultraviolet cutoff. Finally the name "renormalon problem," although perhaps awkward, is justified because the disease arises from the introduction of counterterms, hence from the use of perturbative renormalization. As will indeed be discussed at length in this book, the perturbative theory of renormalization cures the first problem of infinities too well. It introduces some pieces of counterterms which we call "useful" because they make the renormalized Feynman amplitudes finite, but also some pieces of counterterms which we call "useless." These useless counterterms do not cure any divergence. Furthermore they are the ones which are responsible for the renormalon problem!
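The $n!$ growth behind this non-summability is elementary to exhibit. In the following sketch (Python; the coupling value is illustrative and the overall constant of (1.1.3) is set to 1) the $n$-th order contribution is modeled as $g_R^n\,\beta_2^{n-1}\,(n-1)!$; the terms first shrink, then the factorial overwhelms the powers of the coupling:

```python
import math

beta2 = 9 / (2 * math.pi**2)    # one-loop coefficient, interaction written g*phi^4
g_R = 0.5                       # illustrative renormalized coupling

def renormalon_term(n):
    """Model of the n-th order renormalon contribution,
    g_R**n * beta2**(n-1) * (n-1)!, with the overall constant set to 1."""
    return g_R**n * beta2**(n - 1) * math.factorial(n - 1)

terms = [renormalon_term(n) for n in range(1, 31)]
# The ratio of successive terms is g_R * beta2 * n, so the terms
# decrease only while n < 1/(g_R * beta2), then grow without bound:
n_min = min(range(len(terms)), key=terms.__getitem__) + 1
print(n_min, terms[n_min - 1], terms[-1])
```

The ratio of successive terms is $g_R\beta_2 n$, so no matter how small the coupling, beyond order $n \sim 1/(g_R\beta_2)$ the series diverges term by term; truncating at that minimal term is the best one can do by ordinary summation.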

To observe the distinction between useful and useless counterterms, one needs some detailed Fourier analysis such as the one provided by phase space decomposition. Then it appears that the sole reason for introducing useless counterterms is the locality requirement and the insistence upon writing the renormalized series in terms of fixed renormalized constants which do not depend on the energy scale. In this book we will argue that one should drop this restriction, and adopt a more effective perturbation theory with an infinite number of scale dependent "effective constants." In this way we rediscover that the renormalization group point of view is the right way to investigate the renormalon problem.

Historically the renormalon disease and its treatment through the invention of the renormalization group were not discussed exactly in these terms; the emphasis was on the invariance of the theory under changes of the (arbitrary) subtraction scale (from this invariance came the very name "renormalization group") and on high energy asymptotic behavior, not on $n!$ behavior at large order and the problem of summing up perturbation theory. Of course both points of view are closely related. In any case the problem was serious enough to again raise doubts on the consistency of quantum field theory through the 60's.

As a reaction the constructive program was launched, and in the early 70's a major milestone was reached when superrenormalizable theories of various types were built and checked to be free of inconsistencies. Reviews or books on this first period of constructive theory are [Erl][Sil][GJ2]. Although the results and techniques used have proved very influential in many areas of physics and mathematics, this success was nevertheless not of specific relevance to the renormalon problem. For theoretical physicists, a convincing way to escape this problem was really found with the major discovery of asymptotic freedom in non-Abelian gauge theories [Po][GW]. This occurred just at the right time to complete the spectacular rebirth of quantum field theory: non-Abelian gauge theories had been developed as realistic models for the electroweak interaction and had been shown to be perturbatively renormalizable. It gave in turn a major impetus to adopt them to describe strong interactions as well.

Asymptotic freedom occurs when the coefficient $\beta_2$ in (1.1.2) is negative. As a result the roles of equations (1.1.5)–(1.1.6) are inverted. It is now the bare charge which is screened. At large energies the particles behave like free point-like objects (hence the name asymptotic freedom). Looking at (1.1.3) and changing the sign of $\beta_2$ we see that the renormalon problem still prevents "ordinary" summation, but the corresponding contributions now alternate in sign; therefore another type of summability, like Borel summability, becomes possible.
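The difference between the two signs can be made concrete with the Borel transform. In this sketch (Python; $a = |\beta_2| g_R = 0.2$ is an illustrative value) the alternating series $\sum_{n\ge 1} (-1)^{n-1}(n-1)!\,a^n$ has Borel transform $a/(1+at)$, and its Borel sum $\int_0^\infty e^{-t}\,a/(1+at)\,dt$ is finite, whereas for same-sign terms the transform $a/(1-at)$ has a pole at $t = 1/a$ on the integration path:

```python
import math

a = 0.2   # stands for |beta_2| * g_R (illustrative value)

def borel_sum(a, t_max=200.0, steps=400_000):
    """Borel sum of sum_{n>=1} (-1)**(n-1) * (n-1)! * a**n:
    the Borel transform a/(1 + a*t) integrated against exp(-t)."""
    dt = t_max / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt            # midpoint rule
        total += math.exp(-t) * a / (1 + a * t) * dt
    return total

def truncated(a, N):
    """Partial sum of the divergent alternating series up to order N."""
    return sum((-1)**(n - 1) * math.factorial(n - 1) * a**n
               for n in range(1, N + 1))

s = borel_sum(a)
# Near the optimal truncation order N ~ 1/a, successive partial
# sums bracket the Borel sum:
print(s, truncated(a, 5), truncated(a, 8))
```

The partial sums oscillate around the Borel sum with an accuracy that first improves and then degrades, while the Borel integral assigns one finite value to the whole series; with same-sign terms no such assignment is possible along the positive axis.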

The discovery of asymptotic freedom in non-Abelian gauge theories convinced the theoretical physics community that these quantum field theories are indeed mathematically consistent. Many physicists believe that there is no longer any surprise to be expected in the ultraviolet problem for gauge theories (see however ['tH4] for an exception). But this belief has yet to be substantiated by a non-perturbative, mathematically rigorous analysis.

To understand rigorously the concept of asymptotic freedom beyond perturbation theory, one way is to construct first some consistent models of renormalizable theories with such a behavior. There exist models of this kind simpler than the non-Abelian gauge theories in 4 dimensions, namely fermionic models in two dimensions with many components and a quartic interaction [MW][GrNe]. Also the $\varphi^4_4$ model, although not asymptotically free in the ultraviolet direction, is asymptotically free in the infrared direction. Although the corresponding constructive problem is a problem of statistical mechanics rather than field theory (the ultraviolet cutoff is not removed), it is very similar in mathematical structure. This road has been followed by K. Gawedzki and A. Kupiainen [GK2-3-4] and by J. Feldman, J. Magnen, R. Seneor and the author [FMRS3-4-5], who, with somewhat different technical tools, succeeded in building these models. In fact it is the main goal of this book to present in a more systematic and accessible form than the original papers the technique of multiscale or phase space expansion, as developed and applied in our collaboration with J. Feldman, J. Magnen and R. Seneor. This technique originated in the constructive work of Glimm and Jaffe [GJ1].

To extend our rigorous understanding of asymptotic freedom to non-Abelian gauge theories in a finite volume is the next natural challenge. (The large volume problem is indeed a different one, where one has to deal with a strong coupling problem and physical issues like quark confinement which still seem much farther from a rigorous analysis.) This ultraviolet consistency of non-Abelian gauge theories on compact manifolds is not only a key issue in theoretical physics, but is also becoming one in geometry, in particular since E. Witten related Donaldson's invariants to (still formal) functional integrals of some (supersymmetric) Yang-Mills theory [Wit]. Various versions of Yang-Mills theories now seem to be the link between the most fascinating problems of geometry (homotopy and knots in three dimensions, symplectic geometry and conformal theories in two dimensions and differential geometry in four dimensions). A constructive understanding of the corresponding functional integrals would therefore be major progress in pure mathematics.

The problem was first attacked by T. Balaban [Ba2-9]. In an impressive sequence of papers, completed recently, he establishes an ultraviolet stability bound for the effective action of a lattice gauge theory after iterating a large number of clever block-spin transformations. From this result it is expected that the continuum limit of gauge invariant observables like Wilson loops can be constructed. Hence at least these observables should be free from inconsistencies, and this is a very important result. This work is now followed by a related program of P. Federbush [Fe2-7][FedW], still in progress, which also uses a lattice regularization and phase space cells.

*(Continues...)*

Excerpted from *From Perturbative to Constructive Renormalization* by Vincent Rivasseau. Copyright © 1991 Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS.
