Universal Constraints on Conformal Operator Dimensions
Vyacheslav S. Rychkov and Alessandro Vichi
Scuola Normale Superiore and INFN, Pisa, Italy
Institut de Théorie des Phénomènes Physiques, EPFL, Lausanne, Switzerland
1 Introduction and formulation of the problem
Our knowledge about nonsupersymmetric Conformal Field Theories (CFTs) in four dimensions (4D) is still quite incomplete. Suffice it to say that not a single nontrivial example is known which would be solvable to the same extent as, say, the 2D Ising model. However, we do not doubt that CFTs must be ubiquitous. For example, nonsupersymmetric gauge theories with N_c colors and N_f flavors are widely believed to have "conformal windows" in which the theory has a conformal fixed point in the IR, with evidence from large-N analysis [1], supersymmetric analogues [2], and lattice simulations [3]. Since these fixed points are typically strongly coupled, we do not have much control over them. In this situation, general, model-independent properties become particularly important.
One example of such a property is the famous unitarity bound [4] on the dimension Δ of a spin-l conformal primary operator (here we quote only the case of symmetric traceless tensor operators):
(1.1)  Δ ≥ 1 (l = 0),  Δ ≥ l + 2 (l ≥ 1).
These bounds are derived by imposing that the two-point function of the operator have a positive spectral density.
As is well known, 3-point functions in CFT are fixed by conformal symmetry up to a few arbitrary constants (the Operator Product Expansion (OPE) coefficients). The next nontrivial constraint thus appears at the 4-point function level, and is known as the conformal bootstrap equation. It says that the OPE applied in the direct and crossed channels should give the same result (see Fig. 1).
The bootstrap equation goes back to the early days of CFT [5]. However, until recently, not much useful general information had been extracted from it (except in 2D, in theories with finitely many primary fields and in the Liouville theory [6]; we will comment on the 2D case in Sections 4.1 and 5 below). All spins and dimensions can a priori enter the bootstrap on an equal footing, and this seems to lead to insurmountable difficulties.
Recently, however, tangible progress in the analysis of bootstrap equations was achieved in [7]. Namely, it was found that, in unitary theories, the functions entering the bootstrap equations (conformal blocks) satisfy certain positivity properties which lead to general necessary conditions for the existence of solutions.
The concrete problem considered in [7], and which we will continue to discuss here, was as follows. In an arbitrary unitary CFT, a Hermitian scalar primary φ of dimension d was singled out. The conformal bootstrap equation for its 4-point function was studied under the sole assumption that all scalars in the OPE φ × φ have dimension above a certain number, call it Δ_min:
(1.2)  φ × φ = 1 + Σ_{Δ ≥ Δ_min} O_Δ (scalars) + (fields with spin).
It was shown that the conformal bootstrap does not allow for a solution unless
(1.3)  Δ_min ≤ f(d),
where f(d) is a certain continuous function, computed numerically. We stress that this conclusion was reached without making any assumptions about the dimensions or spins of other operators appearing in the OPE, beyond those implied by the unitarity bounds. Nor were any assumptions about the OPE coefficients made (apart from their reality, which is again implied by unitarity).
In other words, in any unitary 4D CFT, the OPE of any scalar primary φ with itself must contain at least one scalar field with dimension not larger than f(d).
Incidentally, the function was found to satisfy f(1) = 2, which is quite natural since d = 1 corresponds to the free field φ, whose OPE contains the operator :φ²: of dimension 2.
What makes a result like (1.3) possible? The basic reason is that, in any theory, the crossing symmetry relation of Fig. 1 cannot be satisfied term by term, but only by cancellations among various terms. The guaranteed presence of the unit operator in the OPE (1.2) creates a certain "crossing symmetry deficit", which has to be balanced by the other fields. The idea is to show that this cannot happen unless at least one scalar of sufficiently low dimension is present.

We Taylor-expand the conformal bootstrap equation near the "self-dual point", the configuration having equal conformal cross-ratios u = v. The expansion is truncated to a certain finite derivative order N.

We systematically search for positivity properties satisfied by linear combinations of Taylor coefficients of the conformal blocks, for fields appearing in the RHS of the OPE (1.2). A found positivity property implies that the "crossing symmetry deficit" cannot be balanced, and rules out a CFT with the given d and Δ_min.

For fixed d, the bound f(d) is then computed as the point separating those Δ_min for which a positivity property exists from those for which it does not (Fig. 2).
The nature of the method is such that increasing N can only make the bound stronger. The optimal bound should in principle be recoverable in the limit N → ∞. In practice, the value of N is determined by the available computer resources and algorithmic efficiency. The best bound found in [7], plotted in Fig. 3, corresponds to the largest N affordable there.
The purpose of this paper is to present an improvement of the bound (1.3), obtained by using the method of [7] with larger values of N. The new results are interesting in two ways. First, the pure numerical improvement turns out to be significant. Second, N is now large enough that we start observing saturation of the bound. We thus believe our current results are close to the optimal ones achievable with this method.
The paper is organized as follows. In Section 2 we review the conformal bootstrap equations. In Section 3 we review the connection of the bound (1.3) with positivity properties satisfied by the conformal block expansion coefficients. In Section 4 we present and discuss our results. We also mention accompanying results which we obtain for an analogous problem in 2D. In Section 5 we propose several future applications and extensions of our method, with emphasis on connections to phenomenology and string theory. In Section 6 we summarize and conclude. In Appendix A we collect some details about our numerical algorithms. In Appendix B we include the tables on which the plots in Section 4 are based.
2 Review of conformal bootstrap
We will review the conformal bootstrap equation in its simplest form, as applied to the 4-point function ⟨φφφφ⟩ of identical scalars. We largely follow [7], where a more detailed discussion and references can be found.
2.1 Conformal block decomposition
Let φ be a Hermitian scalar primary operator (a field is called primary if it transforms homogeneously under the conformal group). The operator product expansion (OPE) φ × φ contains, in general, infinitely many primary fields O of arbitrarily high spins l and dimensions Δ (if there are several primaries with the same Δ and l, they all have to be included in this sum with independent coefficients):
(2.1)  φ × φ = Σ_O λ_O O.
Here the λ_O are the OPE coefficients.
We assume that the OPE converges in the following weak sense: it gives a convergent power series expansion for any correlation function ⟨φ(x)φ(0)…⟩ with further local field insertions at points x_i, provided that |x| < min_i |x_i|, i.e. φ(x) is closer to the origin than any other local field insertion (see Fig. 4). This assumption can be justified by using radial quantization ([8], Sect. 2.9), and checked explicitly in free field theory. For rigorous mathematical results about OPE convergence see [9].
The OPE (2.1) can be used to obtain the conformal block decomposition of the 4-point function ⟨φ(x₁)φ(x₂)φ(x₃)φ(x₄)⟩:
(2.2)  ⟨φ(x₁)φ(x₂)φ(x₃)φ(x₄)⟩ = G(u, v) / (x₁₂² x₃₄²)^d,
(2.3)  G(u, v) = 1 + Σ_{Δ,l} p_{Δ,l} g_{Δ,l}(u, v),  p_{Δ,l} = λ_O² ≥ 0,
where u = x₁₂² x₃₄² / (x₁₃² x₂₄²) and v = x₁₄² x₂₃² / (x₁₃² x₂₄²) are the conformal cross-ratios. This representation is obtained by using the OPE in the 12 and 34 channels. The conformal blocks g_{Δ,l} sum up the contributions of the primary O and all its descendants. Their explicit expressions were found by Dolan and Osborn [10]:
(2.4)  g_{Δ,l}(u, v) = (z z̄)/(z − z̄) [ k_{Δ+l}(z) k_{Δ−l−2}(z̄) − (z ↔ z̄) ],  k_β(x) ≡ x^{β/2} ₂F₁(β/2, β/2; β; x).
Notice the judicious introduction of the auxiliary variables z and z̄, defined by u = z z̄, v = (1 − z)(1 − z̄). When the theory is formulated in Euclidean space, these variables are complex conjugates of each other. To understand their meaning, it is convenient to use the conformal group freedom to send x₄ → ∞ and to put the other three points in a plane, as in Fig. 5. Then it's easy to show that
(2.5)  z = x + i y,
where (x, y) are the coordinates of x₂ in the plane, chosen so that z = 1/2 corresponds to x₂ halfway between x₁ and x₃. This "self-dual" configuration, for which u = v, will play an important role below. We can see that the variable z is a natural extension of the usual complex coordinate of 2D CFT to the 4D case.
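For orientation, the Dolan-Osborn blocks are straightforward to evaluate numerically. The sketch below is our own illustration, not code from [7]: it assumes the standard hypergeometric representation of the 4D blocks, with k_β(x) = x^(β/2) ₂F₁(β/2, β/2; β; x), and all function names are ours. The check exploits the expected leading small-z behavior g ~ u^(Δ/2).

```python
# Sketch: numerical evaluation of the 4D conformal blocks of Dolan-Osborn,
# assuming the standard hypergeometric conventions (normalization such that
# g ~ u^(Delta/2) as u -> 0). Names are ours, not from the paper.
from scipy.special import hyp2f1

def k(beta, x):
    """k_beta(x) = x^(beta/2) 2F1(beta/2, beta/2; beta; x)."""
    return x ** (beta / 2) * hyp2f1(beta / 2, beta / 2, beta, x)

def block_4d(delta, l, z, zb):
    """g_{Delta,l}(z, zbar); requires z != zbar (the z = zbar limit is removable)."""
    pref = z * zb / (z - zb)
    return pref * (k(delta + l, z) * k(delta - l - 2, zb)
                   - k(delta + l, zb) * k(delta - l - 2, z))
```

By construction the result is symmetric under z ↔ z̄, and for z, z̄ → 0 it scales homogeneously with the total power Δ, which provides a quick sanity check of any implementation.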
According to the above discussion, the OPE is expected to converge for |z| < 1. The conformal block decomposition is a partial resummation of the OPE and thus also converges at least in this range. In fact, below we will only use convergence around the self-dual point z = z̄ = 1/2. However, the conformal blocks, as given by (2.4), are regular (real-analytic) in a larger region, namely in the z plane with a cut along the real axis z ∈ [1, +∞) (see Fig. 5). The conformal block decomposition is thus expected to converge in this larger region. One can check that this indeed happens in the free scalar theory.
One can intuitively understand the reason for this extended region of regularity. The condition for OPE convergence, as stated above, does not treat the points x₁ and x₂ symmetrically. On the other hand, the conformal blocks are completely symmetric in x₁ and x₂, and so must be the condition for their regularity. The appropriate condition is as follows: the conformal block decomposition in the 12-34 channel is regular and convergent if there is a sphere separating the points x₁, x₂ from the points x₃, x₄. For the configuration of Fig. 5, such a sphere exists as long as z is away from the cut.
2.2 Conformal bootstrap and the sum rule
The 4-point function in (2.2) must be symmetric under the interchange of any two xᵢ, and its conformal block decomposition (2.3) has to respect this symmetry. The symmetry with respect to x₁ ↔ x₂ or x₃ ↔ x₄ is already built in, since only even spins are exchanged [10]. On the contrary, the symmetry with respect to x₂ ↔ x₄, which interchanges u and v, gives a condition
(2.6)  v^d G(u, v) = u^d G(v, u),
which is not automatically satisfied for G(u, v) given by (2.3). This nontrivial constraint on the dimensions, spins, and OPE coefficients of all operators appearing in the OPE is known as the conformal bootstrap equation. Physically it means that the OPE applied in the (12)(34) and (14)(23) channels should give the same result (Fig. 1).
In the z plane of Section 2.1, the LHS of (2.6) has a cut along z ∈ [1, +∞), while the RHS has a cut along z ∈ (−∞, 0]. Thus, if (2.6) is satisfied, the cuts have to cancel, and the resulting function is real-analytic everywhere except at z = 0, 1, ∞.
In [7], we found it useful to rewrite (2.6) by separating the unit operator contribution, which gives
(2.7)  v^d − u^d = Σ_{Δ,l} p_{Δ,l} [ u^d g_{Δ,l}(v, u) − v^d g_{Δ,l}(u, v) ].
The LHS of this equation is the "crossing symmetry deficit" created by the presence of the unit operator in the OPE. This deficit has to be balanced by the contributions of the other fields in the RHS.
In practice it is convenient to normalize (2.7) by dividing both sides by v^d − u^d. The resulting sum rule takes the form:
(2.8)  1 = Σ_{Δ,l} p_{Δ,l} F_{d;Δ,l}(z, z̄),  F_{d;Δ,l} ≡ [ v^d g_{Δ,l}(u, v) − u^d g_{Δ,l}(v, u) ] / (u^d − v^d).
The "F-functions" F_{d;Δ,l} are real and regular in the full z plane cut along (−∞, 0] ∪ [1, +∞). In particular, their behavior at the self-dual point z = z̄ = 1/2 is regular.
All F-functions vanish near the points z = 0 and z = 1. Thus the sum rule can never be satisfied near these points if only finitely many terms are present in the RHS. OPEs containing finitely many primaries are therefore ruled out.
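The vanishing of the F-functions toward z = 0 (and, by u ↔ v symmetry, toward z = 1) can be checked numerically. The sketch below is our own code, assuming the standard hypergeometric conventions for the 4D blocks and the definition of the F-functions from the sum rule; all names are ours.

```python
# Sketch: the F-functions of the sum rule, built from the 4D blocks under the
# standard hypergeometric conventions. Our own illustration, not code from [7].
from scipy.special import hyp2f1

def k(beta, x):
    return x ** (beta / 2) * hyp2f1(beta / 2, beta / 2, beta, x)

def g(delta, l, z, zb):
    """4D conformal block; requires z != zbar."""
    return z * zb / (z - zb) * (k(delta + l, z) * k(delta - l - 2, zb)
                                - k(delta + l, zb) * k(delta - l - 2, z))

def F(d, delta, l, z, zb):
    """F_{d;Delta,l}; swapping u <-> v amounts to z -> 1 - z, zbar -> 1 - zbar."""
    u = z * zb
    v = (1 - z) * (1 - zb)
    return (v**d * g(delta, l, z, zb)
            - u**d * g(delta, l, 1 - z, 1 - zb)) / (u**d - v**d)
```

The symmetry of F under u ↔ v (which sends the point (z, z̄) to (1 − z̄, 1 − z)) is an exact algebraic identity and is the origin of the evenness about the self-dual point used in the next section.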
3 Positivity argument
The main idea of [7] was very simple, and can be described as follows. Suppose that for a given spectrum of operator dimensions and spins the sum rule (2.8), viewed as an equation for the coefficients p_{Δ,l} ≥ 0, has no solution. Then of course such a spectrum would be ruled out.
Any concrete realization of this idea needs a practical criterion to show that there is no solution. For a prototypical example of such a criterion, imagine that a certain derivative, e.g. ∂_x² (see (2.5)), when applied to every F_{d;Δ,l} and evaluated at a certain point, is strictly positive (a "positivity property"). Since the same derivative applied to the LHS of (2.8) gives identically zero, a solution where all coefficients are nonnegative would clearly be impossible. We refer to this simple reasoning as the "positivity argument".
One can imagine more general criteria using different differential operators, and applying them at different points. In [7], we found it convenient to apply differential operators precisely at the self-dual point z = z̄ = 1/2. One can show that the F-functions are even with respect to this point, both in the x and y directions. Thus all odd-order derivatives vanish there, and a sufficiently general differential operator ("linear functional") takes the form:
(3.1)  Λ[F] = Σ_{m,n ≥ 0, (m,n) ≠ (0,0)} λ_{mn} ∂_x^{2m} ∂_y^{2n} F |_{z = z̄ = 1/2},  2m + 2n ≤ N,
where N is some fixed finite number, and the λ_{mn} are fixed real coefficients. (In [7], we analytically continued to Minkowski space by Wick-rotating y into a real variable t. In this picture z and z̄ are both real and independent, and the conformal blocks are real regular functions in the region 0 < z, z̄ < 1. For our purposes the Minkowski and Euclidean pictures are exactly equivalent; in particular, derivatives of the F-functions in y and in t are trivially proportional to each other.) Notice the exclusion of the constant term (m, n) = (0, 0), in order to have Λ[1] = 0.
Assume that for certain fixed d and Δ_min we manage to find a linear functional of this form such that ("positivity property")
(3.2)  Λ[F_{d;Δ,l=0}] ≥ 0 for all Δ ≥ Δ_min,
and  Λ[F_{d;Δ,l}] ≥ 0 for all even l ≥ 2 and all Δ ≥ l + 2.
Moreover, assume that all but a finite number of these inequalities are actually strict: Λ[F] > 0. Then the sum rule cannot be satisfied, and such a spectrum, corresponding to the putative OPE (1.2), is ruled out.
The proof uses the above "positivity argument". Since Λ[1] = 0, the positivity property implies that only those primaries for which Λ[F_{d;Δ,l}] = 0 would be allowed to appear in the RHS of the sum rule with nonzero coefficients. By assumption, there are at most a finite number of such primaries. However, as noted in Section 2.2, finitely many terms can never satisfy the sum rule globally, because of the behavior near z = 0, 1. Q.E.D.
While the above formal reasoning is quite sufficient to understand our results, in [7] the sum rule was also given an alternative interpretation in terms of convex geometry. In this more visual picture, linear combinations of F-functions with arbitrary positive coefficients form a convex cone in the space of two-variable functions. One can consider the full function space or its finite-dimensional subspace corresponding to Taylor-expanding up to order N. The positivity property (3.2) means that there is a hyperplane separating the function 1 from the convex cone. Thus it implies that the sum rule cannot be satisfied. The converse is "almost true", modulo questions of convergence.
Clearly, the language of linear functionals provides an equivalent, dual formulation of the problem. This formulation is also especially convenient from the point of view of checking our results independently. It's not so important how we find the functionals: as long as we publish the functional coefficients λ_{mn}, anyone can verify that the inequalities (3.2) are satisfied.
4 Results, discussion, and 2D analogue
As discussed in Section 1, we are interested in computing an upper bound (1.3) on the dimension of the leading scalar in the OPE φ × φ, universal for all unitary 4D CFTs. In [7], we computed such a bound in an interval of d above 1, using the sum rule of Section 2.2 truncated at a fixed derivative order. That bound is reproduced in Fig. 3.
We now present the results of our latest study, obtained for larger values of N. These results (see Appendix B for the same results in tabular form) are plotted in Fig. 6 as a collection of curves f_N(d), where the index N denotes the number of derivatives used to obtain the bound. The bound naturally gets stronger as N increases (see below), and thus the lowest curve is the strongest bound to date. In the considered interval this bound is well approximated by a smooth fit of the form
(4.1)  f_N(d) ≈ 2 + c (d − 1)^γ,  with fitted constants c, γ.
To obtain the bounds of Fig. 6, we used the positivity argument from [7], as reviewed in Section 3. Namely, for points (d, Δ_min) lying on the curves, we are able to find a linear functional of the form (3.1) satisfying the positivity property (3.2). (Thus the bound is actually strict: Δ_min < f_N(d), except at d = 1.) The numerical procedure that we use to find these "positive functionals" is described in some detail in Appendix A.
Several comments are in order here.

We have actually computed the bound only for a discrete set of d values, shown as points in Fig. 6. The tables of these computed values are given in Appendix B. The behavior for d → 1 can be better appreciated from the logarithmic-scale plot in Fig. 7.
We do not see any significant indication which could suggest that the curves do not interpolate smoothly between the computed points. Small irregularities in the slope are however visible at several points in Figs. 6, 7. These irregularities are understood; they originate from the necessity to discretize the infinite system of inequalities (3.2); see Appendix A for a discussion. In our computations the discretization step was chosen so that these irregularities are typically much smaller than the improvement of the bound obtained by increasing N.

For each N the bound is near-optimal, in the sense that no positive functional involving derivatives up to order N exists for Δ_min appreciably below f_N(d).
We estimate the residual margin from the analysis of residuals in the fit of f_N(d) by a smooth curve like in (4.1).
On the other hand, by increasing N we are allowing more general functionals, and thus the bound can and does get stronger. This is intuitively clear, since for larger N the Taylor-expanded sum rule includes more and more constraints.
Compared to the results of [7], the bound on the anomalous dimension is improved significantly in the whole range of d that we explored.

We have pushed our analysis to such large values of N in the hope of seeing the bound saturate as N → ∞. Indeed, we do observe signs of convergence in Figs. 6, 7, especially at the lower end of the considered range of d. In fact, we have observed that the bounds f_N follow rather closely an asymptotic behavior of the form f_N(d) ≈ f_∞(d) + c(d)/N.
An approximation to the optimal bound f_∞(d) can thus be found by performing, for each d, a fit to this formula. This approximation is shown by a dashed line in Fig. 6. From this rough analysis we conclude that the optimal bound on the anomalous dimension is probably quite close to our current bound.
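The extrapolation step can be sketched as follows. This is a toy illustration with synthetic numbers, not our actual data, and it assumes for illustration a fit with a leading 1/N correction; all names and values are made up.

```python
# Toy illustration (synthetic numbers, not the paper's data): extrapolating the
# bounds f_N at fixed d to N -> infinity, assuming f_N ~ f_inf + c / N.
import numpy as np

Ns = np.array([6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
f_inf_true, c_true = 3.0, 1.2            # made-up values for the illustration
fN = f_inf_true + c_true / Ns            # stands in for the measured bounds

# linear least-squares fit of f_N against 1/N: slope = c, intercept = f_inf
c_fit, f_inf_fit = np.polyfit(1.0 / Ns, fN, 1)
```

The recovered intercept plays the role of the extrapolated optimal bound at the given d; the residuals of such fits are what quantifies how far the finite-N bounds are from saturation.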

We have f_N(d) → 2 continuously as d → 1. The point d = 1 corresponds to the free scalar theory.
We don't know of any unitary CFTs that saturate our bound at d > 1; see the discussion in Section 6 of [7]. We know, however, a family of unitary 4D CFTs in which Δ_min = 2d, and which are consistent with our bound (the red dotted line in Fig. 6). This "generalized free scalar" theory is defined for a fixed d by specifying the 2-point function
⟨φ(x) φ(y)⟩ = |x − y|^{−2d},
and defining all other correlators of φ via Wick's theorem. This simple procedure gives a well-defined CFT, unitary as long as d ≥ 1, which can be described by a nonlocal quadratic action.
The full operator content of this theory can be recovered by studying the OPE φ × φ. In particular, the leading scalar in this OPE, :φ²:, has dimension 2d. (This theory can also be realized holographically by considering a free scalar field of a particular d-dependent mass in the AdS₅ geometry and taking the limit in which 5D gravity is decoupled. We are grateful to Kyriakos Papadodimas for discussions about the generalized free scalar CFT.)
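The Wick-theorem construction is simple enough to sketch directly. Below is our own illustration (names ours), assuming the power-law 2-point function ⟨φ(x)φ(y)⟩ = |x − y|^(−2d): the 4-point function is the sum over the three pairings, so full crossing symmetry is automatic.

```python
# Sketch of the generalized free scalar: all correlators from Wick's theorem.
# Our own illustration; the 2-point function is the assumed power law.
import numpy as np

def two_pt(x, y, d):
    """<phi(x) phi(y)> = |x - y|^(-2d)."""
    return np.linalg.norm(np.asarray(x) - np.asarray(y)) ** (-2.0 * d)

def four_pt(x1, x2, x3, x4, d):
    """Sum over the three Wick pairings of four phi insertions."""
    return (two_pt(x1, x2, d) * two_pt(x3, x4, d)
            + two_pt(x1, x3, d) * two_pt(x2, x4, d)
            + two_pt(x1, x4, d) * two_pt(x2, x3, d))
```

Because the three pairings are permuted among themselves by any relabeling of the points, the 4-point function is manifestly crossing-symmetric, which is what makes this family a useful consistency check on the bound.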
4.1 2D analogue
Although our main interest is in 4D CFTs, our methods allow a parallel treatment of the 2D case. The main characteristics of the 2D situation were described in Section 6.1 of [7]; here we will briefly review them.

At present we can only take advantage of the finite-dimensional conformal symmetry, and not of the full Virasoro algebra of 2D CFTs. In particular, our results are independent of the 2D central charge c.

The unitarity bounds for primaries (known as quasi-primaries in the 2D CFT literature) in 2D have the form
Δ ≥ |l|,
where l is the Lorentz spin.

The conformal blocks in 2D are known explicitly [10]:
(4.2)  g_{Δ,l}(z, z̄) = k_{Δ+l}(z) k_{Δ−l}(z̄) + (z ↔ z̄)  (for l > 0; for l = 0 only one term is kept),  k_β(x) ≡ x^{β/2} ₂F₁(β/2, β/2; β; x).
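The 2D blocks are even simpler to evaluate than their 4D counterparts. The sketch below is ours; normalization conventions for (4.2) vary in the literature, and we adopt one choice here for illustration (a single product for l = 0, a symmetrized sum otherwise).

```python
# Sketch of the 2D global conformal blocks, under one common normalization
# convention (conventions vary; this is our choice for illustration).
from scipy.special import hyp2f1

def k(beta, x):
    return x ** (beta / 2) * hyp2f1(beta / 2, beta / 2, beta, x)

def block_2d(delta, l, z, zb):
    if l == 0:
        return k(delta, z) * k(delta, zb)
    return (k(delta + l, z) * k(delta - l, zb)
            + k(delta + l, zb) * k(delta - l, z))
```

As in 4D, the block scales with total power Δ for z, z̄ → 0, and is symmetric under z ↔ z̄ by construction.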
Using the unitarity bounds, the known conformal blocks, and the sum rule (2.8), valid in any dimension, we can try to answer the same question as in 4D. Namely, for a scalar primary φ of dimension d, what is an upper bound on the dimension of the first scalar operator appearing in the OPE φ × φ? I.e., we want a 2D analogue of the bound (1.3). Since the free scalar is dimensionless in 2D, the region of interest is small d > 0.
Fig. 8 summarizes our current knowledge of this bound (see Appendix B for the results in tabular form):

The dotted line is the old bound presented in [7]. The solid line is the improved bound obtained by us, which is well described by a simple numerical fit. (We are grateful to Erik Tonni for providing us with the asymptotics of the 2D conformal block expansion coefficients, necessary to obtain this bound.)
Clearly, the improvement compared to [7] is significant.
It is interesting to note that in 2D we have observed a much faster convergence with increasing N than in 4D. In fact, already with moderate N it is possible to obtain a bound rather close to the one shown in Fig. 8, although with a slightly rounded "knee". We have also computed several points at larger N and have not seen much improvement.

The dashed line and scattered crosses correspond to various OPEs realized in explicit examples of exactly solvable unitary 2D CFTs (minimal models and the free scalar theory), see [7]. They all respect our bound.
It is instructive to compare this plot with its 4D counterpart, Fig. 6. While we do not know of any CFTs saturating the 4D bound, the 2D unitary minimal models M(m, m + 1), m = 3, 4, …, contain the OPEs
(4.3)  σ × σ = 1 + ε + … ,
which come quite close to saturating the 2D bound.
More precisely, our 2D bound starts at d = 0 tangentially to the line realized in the free scalar theory, then grows monotonically and passes remarkably closely above the Ising model point (d, Δ) = (1/8, 1). After a "knee" at the Ising point, the bound continues to grow linearly, passing in the vicinity of the higher minimal model points (4.3).
It is curious to note that, had we not known beforehand about the Ising model, we could have conjectured its field dimensions and the basic OPE (4.3) based on the singular behavior of the 2D bound at the Ising point.
On the other hand, nothing special happens with the 2D bound at the higher minimal model points; it just interpolates linearly in between. (The straight line fitting the bound would cross the dashed free theory line just above the accumulation point of the minimal models. For larger values of d we expect that the bound modifies its slope and eventually asymptotes to the free line.) Most likely, this does not mean that there exist other unitary CFTs with intermediate operator dimensions. Rather, this behavior suggests that the single conformal bootstrap equation used to derive the bound is not powerful enough to fully constrain a CFT.
In comparison, it is a bit unfortunate that the 4D bound does not exhibit any singular points which would immediately stand out as CFT candidates. Nevertheless, if we assume that the shape of the 4D bound results from an interpolation between existing CFTs (as is the case in 2D), we may conjecture that the upward convex behavior of the curves in Fig. 6 is due to the presence of a family of points satisfying the sum rule that could correspond to exact CFTs. This observation, though speculative, shows how the presented method can provide a guideline in the study of 4D CFTs.
5 Future research directions
The results of this paper and of [7] open up many interesting research directions, which we would like to list here.
First, there are several important problems in 4D Conformal Field Theory which can be analyzed by our method and its simple modifications. For example:

One should be able to derive a generalization of our bounds for the situation when the CFT has a global symmetry, and we are interested in the lowest-dimension singlet appearing in the OPE. This would have phenomenological implications, constraining the so-called conformal technicolor scenarios of electroweak symmetry breaking [11]. This connection was extensively discussed in [7].
Second, the method can also be used in 2D Conformal Field Theory, as was already demonstrated in Section 4.1. The main interest here lies in potential applications to string theory. We will now briefly describe two such applications.
Physical states of (super)string theory are in 1-1 correspondence with Virasoro primary operators of a 2D CFT living on the string worldsheet. The mass of a string state (in string units) is related to the corresponding primary operator dimension via a linear relation of the form m² ∝ Δ − 2.
We are considering closed string theory for concreteness. When strings propagate in flat space, the CFT is solvable and the full spectrum of operator dimensions is known. Realistic string constructions require compactifications of the extra dimensions. In some examples, such as toroidal compactifications, the CFT is still solvable. In others, such as superstring compactifications on a generic Calabi-Yau threefold, the CFT cannot be solved exactly. All that is generally known is the spectrum of the massless states, which can be obtained in the supergravity approximation. Of course we expect the massive string states to be always present, but just how heavy can they be? We know from the experience with toroidal compactifications that it is impossible to completely decouple the massive states: as the compactification radius R → 0, the Kaluza-Klein states become more massive, but the winding modes come down. Clearly, massive string states are crucial for the consistency of the theory. What exactly are they doing? A partial answer may be that without their presence, 4-point functions of the massless state vertex operators would not be crossing-symmetric. If this intuition is right, it could be used to obtain model-independent bounds on the lightest massive states in string compactifications, generalizing the well-known bounds valid for toroidal compactifications. A similar in spirit general prediction of string gravity, although in a different context and by using different methods, was obtained recently in [14].
When working towards results of this kind, it may be necessary to generalize our methods so that information about the 2D CFT central charge, which is fixed in string theory, can be taken into account. In practice, one needs an efficient method to evaluate the full Virasoro conformal blocks. While no closed-form expression as simple as Eq. (4.2) is known, Zamolodchikov's expansion (see [6]) can probably be applied.
Finally, as mentioned above in the 4D context, it should be possible to derive model-independent bounds on the OPE coefficients. Such results should be accessible via a simple modification of our method; in particular, the full Virasoro conformal blocks are not needed here. One can then apply such bounds to the dimension-2 operators corresponding to the massless string states (in an arbitrary compactification). Via the usual dictionary, this would then translate into general bounds on the tree-level coupling constants in the low-energy string effective actions.
6 Summary
First principles of Conformal Field Theory, such as unitarity, the OPE, and the conformal block decomposition, imply the existence of an upper bound f(d) on the dimension of the leading scalar operator in the OPE φ × φ, which depends only on φ's dimension d.
Moreover, there is an efficient method which allows numerical determination of f(d) with arbitrary desired accuracy. The method is based on the sum rule, a function-space identity satisfied by the conformal block decomposition of the 4-point function ⟨φφφφ⟩, which follows from the crossing symmetry constraints. In practical applications of the method the sum rule is Taylor-expanded: replaced by finitely many equations for the derivatives up to a certain order N. The bound improves monotonically as more and more derivatives are included. In [7], where the above paradigm was first developed, we numerically computed the bound for a moderate value of N.
The present paper extended the study of [7] to higher N. The goals were to improve the bound, and perhaps to approach the best-possible bound, should convergence of the bound be observed.
Our analysis went up to substantially larger N (see Fig. 6), and we have achieved both goals. First, in the range of d that we explored, the bound on the anomalous dimension is significantly improved compared to the results of [7]. Second, we do observe signs of convergence of the bound. We believe that our current results are close to the best ones achievable with this method.
7 Acknowledgements
We are grateful to J. Maldacena, A.M. Polyakov and N. Seiberg for discussions of possible applications of our methods in string theory, to A.M. Polyakov for bringing the subtleties of the analytic structure of the conformal blocks to our attention, and to K. Papadodimas for discussions of the generalized free scalar theory. We are especially grateful to our collaborators R. Rattazzi and E. Tonni for many discussions related to this project, and in particular to E. Tonni for providing us with the asymptotics of the 2D conformal block expansion coefficients. This work is partially supported by the EU under RTN contract MRTN-CT-2004-503369, by MIUR under contract PRIN-2006022501, and by the Swiss National Science Foundation under contract No. 200021-116372. V.R. thanks the Laboratoire de Physique Théorique de l'Ecole Normale Supérieure for hospitality.
Appendix A Details about numerical algorithms
We now discuss in more detail the issues introduced in Section 3, namely how one can find in practice a linear functional of the form (3.1) satisfying the positivity property (3.2). We will first describe the general procedure and how it can be implemented in computer code, and then mention possible algorithmic improvements and shortcuts that we found useful in our analysis.
Given the complexity of the functions F_{d;Δ,l}, the search for a positive functional is too hard a task to attack analytically. As already mentioned, we reduce the complexity of the problem by looking for a functional which is a linear combination of derivatives up to a given order N. The derivatives are taken at the self-dual point z = z̄ = 1/2, since the sum rule is expected to converge fastest around this point and, in addition, the F-functions are even in both arguments there. The choice of the functional (3.1) simplifies our task enormously, since we can now work in a finite-dimensional space, and the only information concerning the F-functions that we need is their derivatives up to a certain order. Put another way, the F-functions are now considered as elements not of a function space but of a finite-dimensional vector space.
The sum rule (2.8) in this picture represents a constraint on these vectors: in any CFT, a combination of them with positive coefficients must reproduce the vector of Taylor coefficients of the function 1. This interpretation is discussed in detail in [7]. Here we adopt an equivalent point of view in terms of the dual space of linear functionals, since we find this perspective closer to the method used to obtain f(d) numerically.
Let us fix the notation. We define the D-dimensional vector of Taylor coefficients:
(A.1)  v_{Δ,l} = (Taylor coefficients of F_{d;Δ,l} around the self-dual point, for the even derivative orders kept in (3.1)),
and the same vector normalized to unit length:
(A.2)  u_{Δ,l} = v_{Δ,l} / ‖v_{Δ,l}‖,
where ‖·‖ is the usual Euclidean length of a vector.
We form the vector out of the Taylor coefficients of the function, rather than of its plain derivatives, because this way all elements turn out to have approximately the same order of magnitude, which is preferable in the subsequent numerical computation. The definition of the normalized vector serves the same purpose. Indeed, as explained in the following, our numerical analysis consists in finding a solution of a system of linear inequalities whose coefficients are given by the elements of u_{Δ,l}. The solution is more accurate and easier to extract if all the coefficients are of the same order of magnitude. Since the existence of the functional is not affected by these rescalings, we opted for the definitions (A.1), (A.2).
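The construction of such a vector can be sketched generically. The code below is our own illustration, not the paper's: it extracts the even Taylor coefficients of a two-variable function around a point by repeated central finite differences, normalizes the resulting vector as in (A.2), and is run here on a toy function in place of the actual F_{d;Δ,l}.

```python
# Sketch: the normalized vector of even Taylor coefficients of a two-variable
# function around a point, via central finite differences. Toy illustration.
import numpy as np
from math import factorial

def taylor_vector(f, x0, y0, order, h=1e-2):
    """Coefficients f^{(m,n)}(x0,y0)/(m! n!) for even m, n with 0 < m+n <= order."""
    npts = order + 1
    idx = np.arange(-npts, npts + 1)
    grid = np.array([[f(x0 + i * h, y0 + j * h) for j in idx] for i in idx])

    def deriv(g, kk, axis):
        # kk-fold central difference approximation of the kk-th derivative
        for _ in range(kk):
            g = (np.roll(g, -1, axis) - np.roll(g, 1, axis)) / (2 * h)
        return g

    coeffs = []
    for m in range(0, order + 1, 2):
        for n in range(0, order + 1 - m, 2):
            if m == 0 and n == 0:
                continue  # the constant term is excluded, cf. (3.1)
            d = deriv(deriv(grid, m, 0), n, 1)[npts, npts]
            coeffs.append(d / (factorial(m) * factorial(n)))
    v = np.array(coeffs)
    return v / np.linalg.norm(v)
```

In the real problem one would of course use high-precision derivatives of the F-functions rather than low-order finite differences; the sketch only illustrates the bookkeeping and the normalization.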
According to the positivity property, we look for a functional which is strictly positive on all but finitely many vectors u_{Δ,l}. Let us fix the dimension d of the scalar φ. Then each pair (Δ, l) identifies the half-space of functionals that are positive on the vector u_{Δ,l}; let us call these open sets S_{Δ,l}. With this notation the positivity property (3.2) can be restated in the following way: if for fixed d and Δ_min
(A.3)  ∩_{(Δ,l)} S_{Δ,l} ≠ ∅,
then the sum rule cannot be satisfied. The issue is thus to be able to check whether the intersection (A.3) is nonempty, and to compute the smallest Δ_min for which this is the case.
Clearly it is neither possible nor needed to check all the values of (Δ, l) as required by the condition (A.3). We can instead consider only a finite number of them and check whether they admit the existence of a functional. This is achieved with a double simplification. First, we consider values of Δ and l only up to a given maximum value ("truncation"), and second, we discretize the kept range of Δ ("discretization"). The truncation does not produce a loss of information, since we take into account the large Δ and l contributions using the asymptotic expressions computed in Appendix D of [7]. The discretization step requires special care; see below.
We used Mathematica 7 to perform the computations. The algorithm to extract the smallest value of proceeds in several steps:

Setting up an efficient procedure to compute vectors

Selection of the ’s and ’s to be used in checking the positivity property (A.3). For concreteness we report here the range of that we included:^{13}^{13}13In some cases was needed to obtain a functional which would later pass the positivity check on the non-included values of , see below.
(A.4) For each the range of was discretized, and a discrete set of points was chosen, called below. The derivatives of the F-functions approach zero as and reach their asymptotic behavior for sufficiently large values. We take a finer discretization where the functions are significantly varying, while we can increase the step in the asymptotic region. More details are given below.

Reduction to a Linear Programming problem. With only a finite number of inequalities to check, the determination of the intersection of the becomes a standard Linear Programming problem, which can be solved in a finite amount of time. Hence we look for a solution of the linear system of inequalities
(A.5) Clearly, the coefficients are related to those appearing in (3.1) by a trivial rescaling depending on :
Further, the asymptotic behavior of the Ffunctions (see below) tells us that for large the inequality is dominated by the derivative
(A.6) hence needs to be positive. By an overall rescaling of we can always achieve
(A.7) which we choose as a normalization condition.

Extraction of the smallest for which a positive functional exists. We begin by selecting two points and (see (A.4)) such that we know a priori that in the first case a positive functional does not exist, while in the second case it does^{14}^{14}14We can choose these points blindly as , ; however, prior experience can suggest a choice closer to the final . Starting from these values we apply the bisection method to determine the critical up to the desired precision: we test if a functional exists for and we increase or decrease the endpoints of the interval depending on the outcome. The procedure we follow is such that in the end the critical is contained in an interval of relative width , i.e. we terminate if . The plots and the tables presented in this work correspond to the upper end of the final interval, i.e. to the end for which we have found a functional.
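The feasibility check of step 3 and the bisection of step 4 can be sketched in a few lines. The following Python fragment is only an illustration of the logic: the original computation used Mathematica's LinearProgramming, while here scipy stands in for it, and the function `build_vectors`, the toy constraint vectors, and all sizes are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def functional_exists(vectors):
    """Check feasibility of the linear system Lambda . u >= 0 for all
    tabulated vectors u, with the coefficient of one designated component
    of Lambda fixed to 1, mimicking the normalization (A.7).

    `vectors` is an (n_constraints, n_coeffs) array; column 0 plays the
    role of the asymptotically dominant derivative."""
    n = vectors.shape[1]
    # With Lambda[0] = 1 fixed, the remaining unknowns x must satisfy
    # vectors[:, 1:] @ x >= -vectors[:, 0]; linprog wants A_ub @ x <= b_ub.
    A_ub = -vectors[:, 1:]
    b_ub = vectors[:, 0]
    res = linprog(c=np.zeros(n - 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n - 1), method="highs")
    return res.status == 0          # status 0: a feasible point was found

def critical_delta(build_vectors, lo, hi, rel_tol=1e-3):
    """Bisect in the trial gap: `build_vectors(delta)` returns the
    constraint vectors for a spectrum with gap delta. Assumes no
    functional exists at `lo` while one exists at `hi`."""
    while (hi - lo) / hi > rel_tol:
        mid = 0.5 * (lo + hi)
        if functional_exists(build_vectors(mid)):
            hi = mid                # functional found: the bound moves down
        else:
            lo = mid
    return hi                       # upper end: a functional is in hand
```

As in the text, the returned value is the upper end of the final interval, i.e. the gap for which a functional has actually been found.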
Let us now come back to point 1. Although the computation of the derivatives can be carried out by brute-force Taylor-expanding the F-functions, we can save time by decomposing the computation into several blocks. From equation (2.8) we see the rather simple dependence on the parameter , which translates into a polynomial dependence once the function is Taylor-expanded in and . We therefore separately computed the dependence on once and for all as a matrix . To compute the Taylor coefficients of the F-functions, this matrix is contracted with two vectors containing the one-dimensional Taylor coefficients of the function , see (2.4). The latter derivatives are precomputed for several values of with a fine step and stored. For definiteness we report the interval we used:
(A.8) 
For larger we made use of the analytic expression of the asymptotics instead of computing the derivatives numerically (see below).
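The caching strategy of point 1 can be illustrated schematically. In the numpy sketch below, the tensor `T`, the sizes, and the stand-in coefficient function are all hypothetical, since the real objects come from Taylor-expanding (2.8) and (2.4); only the structure of the computation — a fixed tensor contracted with precomputed one-dimensional coefficient vectors — reflects the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_out, n_der = 5, 8   # hypothetical numbers of 2D and 1D Taylor coefficients

# Dimension-independent tensor, computed once and for all: a stand-in for
# the matrix of the text, encoding how two one-dimensional Taylor
# expansions combine into the two-dimensional F-function coefficients.
T = rng.standard_normal((n_out, n_der, n_der))

def coeffs_1d(delta):
    """Stand-in for the one-dimensional Taylor coefficients at a given
    dimension; in the actual computation these come from (2.4)."""
    return delta ** -np.arange(1, n_der + 1)

# Precompute the one-dimensional coefficients on a fine grid and store them.
table = {float(round(d, 2)): coeffs_1d(d) for d in np.arange(1.0, 4.01, 0.05)}

def f_coeffs(delta):
    """Taylor coefficients of the F-function: instead of a fresh
    two-variable expansion, a cheap bilinear contraction of the cached
    one-dimensional vectors with the fixed tensor T."""
    v = table[float(round(delta, 2))]
    return np.einsum('kij,i,j->k', T, v, v)
```

The point of the design is that the expensive tabulation happens once, while each evaluation inside the Linear Programming loop costs only one contraction.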
Finally, let us discuss the choice of the discretization and the truncation in and . This step is of fundamental importance for reducing the time needed to perform the computation.
In Appendix D of [7] it is shown that for large values of and the functions approach an asymptotic behavior. We have checked that outside the range of values (A.4) we can safely use the approximate expression
For large the vector is dominated by the components where assumes the highest allowed value . Hence we can take this large behavior into account by imposing additional constraints:
where we have dropped irrelevant positive constants not depending on .
Now comes the discretization: in the range of values (A.4), as well as in the interval , we allow to take only a discrete, finite set of points. For we take a fixed small step. However, for we try to concentrate the points in the region where the unit vector is significantly varying. A measure of this is given by the norm of its derivative w.r.t. :^{15}^{15}15In practice the derivative is evaluated by using the finite-difference approximation.
We discretize by taking the spacing between two consecutive values of equal to , where is a small fixed number ( was typically taken in our work). Clearly the discretization step is large where the unit vector is slowly varying, while it is refined where the vector changes rapidly and where presumably more information is encoded. Typically we get about a hundred values for each , but only a few dozen of those above .
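A minimal version of this adaptive discretization might look as follows. This is a Python sketch under stated assumptions: the map `u`, the step constant `c`, and the finite-difference increment `h` are illustrative stand-ins, not the values used in the paper.

```python
import numpy as np

def adaptive_grid(u, delta_min, delta_max, c=0.01, h=1e-3):
    """Discretize [delta_min, delta_max] with local step
    c / ||d u_hat / d delta||, where u_hat = u / ||u|| is the unit vector
    and the derivative is a finite difference (cf. footnote 15).
    `u` maps a dimension to a numpy vector; `c` plays the role of the
    small fixed number of the text."""
    pts = [delta_min]
    d = delta_min
    while d < delta_max:
        u0, u1 = u(d), u(d + h)
        du = u1 / np.linalg.norm(u1) - u0 / np.linalg.norm(u0)
        speed = np.linalg.norm(du) / h
        step = c / max(speed, 1e-12)   # large step where u_hat is slow
        d = min(d + step, delta_max)   # refined step where it varies fast
        pts.append(d)
    return np.array(pts)
```

By construction the grid is dense where the unit vector turns quickly and coarse in the asymptotic region, which is exactly the trade-off described above.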
The sets , one for each , of values of obtained in this way are the ones referred to in point 2 above. In constructing the linear system used in point 3 we consider additional intermediate points between two subsequent ’s. To understand why, assume that we have found a functional which is positive for all the values of contained in . Since we considered a discrete set of values, it may and actually does happen that for intermediate values of (which were not included in ) the functional becomes slightly negative. In [7] this issue was solved by looking for solutions of the form , so that for intermediate values this condition could be violated but the positivity was safe. In the current work we found it more convenient to build the linear system in the following way:

for each we evaluate the vector .

for any two consecutive points , we consider the first-order Taylor expansion of the vector around and evaluate it at half-spacing between and :
(A.9) and we add the constraints to the linear system (A.5).
These additional constraints are important for keeping the functional positive near the ’s for which the inequalities are close to saturation, while they are redundant away from those points. Indeed, assume that for some and the functional is exactly vanishing. Then at an intermediate point the functional could become strictly negative, which is not allowed. However, in the presence of the additional constraints this cannot happen, since is generically a convex function of near the minimum. See Figure 9 for an illustration. Thus we can be certain that the found functional will be positive also for those which were not included in .
This certainty has a price: the added constraints are somewhat stronger than needed, and the bigger the discretization parameter , the bigger the difference. As a result, the found critical value of will be somewhat above the optimal critical value corresponding to . This observation explains why the curves in Figs. 6, 7 have small irregularities in the slope. These irregularities could be reduced by decreasing the value of .
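The midpoint construction of the two bullet points above can be sketched as follows. In this Python fragment `u` is a hypothetical map from the operator dimension to the vector of Taylor coefficients, and the derivative is taken by finite differences as in footnote 15.

```python
import numpy as np

def constraint_vectors(u, grid, h=1e-4):
    """Build the constraint vectors for the linear system (A.5): the
    tabulated vectors at each grid point plus, for each pair of
    consecutive points, first-order Taylor extrapolations evaluated at
    half-spacing, in the spirit of (A.9). Names are illustrative."""
    def du(d):                       # finite-difference derivative
        return (u(d + h) - u(d)) / h
    rows = [u(d) for d in grid]
    for a, b in zip(grid[:-1], grid[1:]):
        m = 0.5 * (a + b)
        rows.append(u(a) + (m - a) * du(a))   # extrapolate from the left
        rows.append(u(b) + (m - b) * du(b))   # ... and from the right
    return np.array(rows)
```

Feeding these extra rows to the Linear Programming solver is what forces the functional to stay positive between grid points, at the cost of slightly over-constraining the problem as discussed above.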
Several comments concerning the numerical accuracy are in order. The components of the vector have been computed using standard double-precision arithmetic (16 digits). As a consequence all the numerical results must be rounded to this precision. In particular, quantities smaller than are considered zero.
In addition, the built-in Mathematica 7 function LinearProgramming, which we used, has an undocumented Tolerance parameter. Most of the computations were done with Tolerance equal to (the default value). However, for and , and for , we found that LinearProgramming terminated prematurely, concluding that no positive linear functional exists, even for some values of for which a positive functional for smaller had in fact been found. The problem disappeared once we set Tolerance to a lower value (). In our opinion, Tolerance is probably the so-called pivot tolerance: the minimal absolute value of a number in the pivot column of the Simplex Method to be considered nonzero. Recall that a nonzero (actually negative) pivot element is necessary in each step of the Simplex Method [15]. This interpretation explains why the above problem could occur, and why it could be overcome by lowering Tolerance.
As described above, our numerical procedure has been designed to be robust with respect to the effects of truncation and discretization. In addition, for each , we have tested the last found functional (i.e. for at the upper end of the final interval ) on the much bigger set of :
(A.10) 
and found that indeed , within the declared accuracy.
Finally, we have checked that in all cases the found functionals are such that the inequality is in fact strict: , for all but finitely many values of and . Thus they satisfy the requirements stated in Section 3.
Appendix B Tables
Table 1 contains the sequence of 4D bounds , , for a discrete set of points in the interval . Table 2 contains the 2D bound for a discrete set of points in the interval . Figs. 6, 7, 8 are based on these tables.
A text file with the unrounded versions of Tables 1,2 and the functionals used to derive these bounds is included in the source file of this arXiv submission. The Mathematica codes can be obtained from the authors upon request.
References

[1] A. A. Belavin and A. A. Migdal, “Calculation of anomalous dimensions in non-abelian gauge field theories,” Pisma Zh. Eksp. Teor. Fiz. 19, 317 (1974) [JETP Lett. 19, 181 (1974)]; T. Banks and A. Zaks, “On the Phase Structure of Vector-Like Gauge Theories with Massless Fermions,” Nucl. Phys. B 196, 189 (1982).
[2] N. Seiberg, “Electric-magnetic duality in supersymmetric non-Abelian gauge theories,” Nucl. Phys. B 435, 129 (1995) [arXiv:hep-th/9411149].

[3] Y. Iwasaki, K. Kanaya, S. Kaya, S. Sakai and T. Yoshie, “Phase structure of lattice QCD for general number of flavors,” Phys. Rev. D 69, 014507 (2004) [arXiv:hep-lat/0309159]; T. Appelquist, G. T. Fleming and E. T. Neil, “Lattice Study of the Conformal Window in QCD-like Theories,” Phys. Rev. Lett. 100, 171607 (2008) [arXiv:0712.0609]; T. Appelquist, G. T. Fleming and E. T. Neil, “Lattice Study of Conformal Behavior in SU(3) Yang-Mills Theories,” arXiv:0901.3766; A. Deuzeman, M. P. Lombardo and E. Pallante, “The physics of eight flavours,” Phys. Lett. B 670, 41 (2008) [arXiv:0804.2905]; A. Deuzeman, M. P. Lombardo and E. Pallante, “Evidence for a conformal phase in SU(N) gauge theories,” arXiv:0904.4662.
[4] G. Mack, “All Unitary Ray Representations of the Conformal Group SU(2,2) with Positive Energy,” Commun. Math. Phys. 55, 1 (1977).

[5] A. M. Polyakov, “Non-Hamiltonian approach to conformal quantum field theory,” Zh. Eksp. Teor. Fiz. 66, 23 (1974); A. A. Belavin, A. M. Polyakov and A. B. Zamolodchikov, “Infinite conformal symmetry in two-dimensional quantum field theory,” Nucl. Phys. B 241, 333 (1984).
[6] A. B. Zamolodchikov and A. B. Zamolodchikov, “Conformal field theory and 2D critical phenomena. 3. Conformal bootstrap and degenerate representations of conformal algebra,” ITEP-90-31; A. B. Zamolodchikov and A. B. Zamolodchikov, “Structure constants and conformal bootstrap in Liouville field theory,” Nucl. Phys. B 477, 577 (1996) [arXiv:hep-th/9506136].
[7] R. Rattazzi, V. S. Rychkov, E. Tonni and A. Vichi, “Bounding scalar operator dimensions in 4D CFT,” JHEP 0812, 031 (2008) [arXiv:0807.0004].
[8] J. Polchinski, “String theory. Vol. 1: An introduction to the bosonic string,” Cambridge, UK: Univ. Pr. (1998) 402 p.
 [9] G. Mack, “Convergence Of Operator Product Expansions On The Vacuum In Conformal Invariant Quantum Field Theory,” Commun. Math. Phys. 53, 155 (1977).

[10] F. A. Dolan and H. Osborn, “Conformal four point functions and the operator product expansion,” Nucl. Phys. B 599, 459 (2001) [arXiv:hep-th/0011040]; F. A. Dolan and H. Osborn, “Conformal partial waves and the operator product expansion,” Nucl. Phys. B 678, 491 (2004) [arXiv:hep-th/0309180].
[11] M. A. Luty and T. Okui, “Conformal technicolor,” JHEP 0609, 070 (2006) [arXiv:hep-ph/0409274]; M. A. Luty, “Strong Conformal Dynamics at the LHC and on the Lattice,” arXiv:0806.1235.
[12] J. L. Feng, A. Rajaraman and H. Tu, “Unparticle Self-Interactions and Their Collider Implications,” Phys. Rev. D 77, 075007 (2008) [arXiv:0801.1534 [hep-ph]]; H. Georgi and Y. Kats, “Unparticle self-interactions,” arXiv:0904.1962 [hep-ph].
[13] H. Georgi, “Unparticle Physics,” Phys. Rev. Lett. 98, 221601 (2007) [arXiv:hep-ph/0703260].
 [14] S. Hellerman, “A Universal Inequality for CFT and Quantum Gravity,” arXiv:0902.2790 [hepth].
[15] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, “Numerical Recipes: The Art of Scientific Computing,” 3rd ed., Cambridge University Press (2007).