Statistical limits of spiked tensor models
Abstract
We study the statistical limits of both detecting and estimating a rank-one deformation of a symmetric random Gaussian tensor. We establish upper and lower bounds on the critical signal-to-noise ratio, under a variety of priors for the planted vector: (i) a uniformly sampled unit vector, (ii) i.i.d. $\pm 1$ entries, and (iii) a sparse vector where a constant fraction $\rho$ of entries are i.i.d. $\pm 1$ and the rest are zero. For each of these cases, our upper and lower bounds match up to a $1 + o(1)$ factor as the order $d$ of the tensor becomes large. For sparse signals (iii), our bounds are also asymptotically tight in the sparse limit $\rho \to 0$ for any fixed $d$ (including the case $d = 2$ of sparse PCA). Our upper bounds for (i) demonstrate a phenomenon reminiscent of the work of Baik, Ben Arous and Péché: an 'eigenvalue' of a perturbed tensor emerges from the bulk at a strictly lower signal-to-noise ratio than when the perturbation itself exceeds the bulk; we quantify the size of this effect. We also provide some general results for larger classes of priors. In particular, the large-$d$ asymptotics of the threshold location differs between problems with discrete priors versus continuous priors. Finally, for priors (i) and (ii) we carry out the replica prediction from statistical physics, which is conjectured to give the exact information-theoretic threshold for any fixed $d$.
Of independent interest, we introduce a new improvement to the second moment method for contiguity, on which our lower bounds are based. Our technique conditions away from rare 'bad' events that depend on interactions between the signal and noise. This enables us to close $\sqrt{2}$-factor gaps present in several previous works.
1 Introduction
Among the central problems in random matrix theory is to determine the spectral properties of spiked or deformed random matrix ensembles. Introduced by Johnstone [Joh01], such matrices consist of a random matrix (e.g. Wigner or Wishart) with a low-rank perturbation. These distributions serve as models for data consisting of "signal plus noise," and thus results on these models form the basis of our theoretical understanding of principal component analysis (PCA) throughout the sciences.
Perhaps the most studied phenomenon in these spiked ensembles is the transition first examined by [BBP05] in the Wishart setting. We will be interested in the Wigner analogue: if $W$ is a Gaussian Wigner matrix ($W$ is symmetric with off-diagonal entries $\mathcal{N}(0, 1/n)$, diagonal entries $\mathcal{N}(0, 2/n)$, and all entries independent except for symmetry) and $v$ is a unit vector (the 'spike'), the spectrum of the spiked matrix $\lambda v v^\top + W$ undergoes a sharp phase transition at $\lambda = 1$ (see e.g. [Péc06, FP07, CDMF09, BGN11]). Namely, when $\lambda < 1$, many properties of the spectrum resemble those of a random ('unspiked') matrix: the empirical distribution of eigenvalues is semicircular and the top eigenvalue concentrates about $2$. When $\lambda > 1$, however, the spectrum becomes indicative of the spike: a single eigenvalue exceeds 2, exiting the semicircular bulk, and the associated eigenvector is correlated with the spike (and the precise correlation is known).
We emphasize that this 'BBP-style' transition [BBP05] exhibits a push-out effect: the top eigenvalue of a random Wigner matrix is $2 + o(1)$ (in the high-dimensional limit), but one only needs to add a planted signal of spectral norm $\lambda = 1$ before it becomes visible in the spectrum [FP07]. Once $\lambda > 1$, the planted signal aligns well enough with fluctuations of the random matrix in order to create an eigenvalue greater than $2$.
More recent work shows a second, statistical role of this threshold: not only does the top eigenvalue fail to distinguish the spiked and unspiked models for $\lambda < 1$, but in fact every hypothesis test fails with constant probability [OMH13, MRZ15, PWBM16]. Thus, this transition indicates the point at which the spiked and unspiked models become markedly different.
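This transition is easy to observe numerically. The following sketch (our illustration, not taken from the works cited above) simulates the spiked Wigner model in the normalization used here; the function and parameter names are ours.

```python
import numpy as np

# Illustrative simulation of the BBP-style transition for a spiked Wigner
# matrix M = lambda * v v^T + W, with off-diagonal entries of W distributed
# N(0, 1/n) and diagonal entries N(0, 2/n). For lambda <= 1 the top
# eigenvalue sticks to the bulk edge 2; for lambda > 1 it emerges near
# lambda + 1/lambda.

def spiked_wigner_top_eig(n, lam, rng):
    G = rng.normal(size=(n, n)) / np.sqrt(n)
    W = (G + G.T) / np.sqrt(2)      # symmetric; off-diag var 1/n, diag var 2/n
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)          # uniformly random unit spike
    M = lam * np.outer(v, v) + W
    return np.linalg.eigvalsh(M)[-1]

rng = np.random.default_rng(0)
for lam in [0.5, 1.5]:
    top = spiked_wigner_top_eig(1000, lam, rng)
    pred = 2.0 if lam <= 1 else lam + 1.0 / lam
    print(f"lambda = {lam}: top eigenvalue {top:.3f} (predicted {pred:.3f})")
```

At $n = 1000$ the empirical top eigenvalue is already close to its predicted limit in both regimes.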
It is natural to ask how the phenomena above generalize to tensors of higher order. Such tensors lack a well-behaved spectral theory, and many standard tools of random matrix theory (e.g. the method of resolvents) fail to cleanly generalize. However, there remain a number of interesting probabilistic questions to ask in this setting:

Is there a sharp transition point for $\lambda$, below which the spiked model resembles the unspiked model?
In particular, we compare the Wigner tensor model $T$ (entries of $T$ are Gaussian, i.i.d. apart from permutation symmetry; see Definition 1.1) to a spiked analogue $\lambda x^{\otimes d} + T$. Previous work of [MRZ15] provides a bound on $\lambda$ below which these two models are information-theoretically indistinguishable. On the other hand, [RM14] notes that once $\lambda$ exceeds the injective norm (defined below) of the noise, the spiked and unspiked models can be distinguished via the injective norm. There remained a $\sqrt{2}$ factor gap between these bounds as the order $d \to \infty$. In this paper, we improve both the lower and upper bounds, saving an asymptotic factor of $\sqrt{2}$ in the lower bound, and thus obtaining a $1 + o(1)$ factor gap as $d \to \infty$; see Figure 1.

By analogy with the top eigenvalue (spectral norm) of a random matrix, what is the injective norm of a spiked Wigner tensor?
For unspiked tensors, the value of the injective norm was predicted by [CS92] through nonrigorous methods from statistical physics, and was later rigorously proven [Tal06a, ABČ13, Sub15]. However, the spiked question has not been studied to our knowledge. Our statistical lower bound shows in particular that the injective norm of a spiked tensor remains identical to this value for all $\lambda$ below a critical value. However, starting slightly above this critical value, our work provides a strong lower bound on the injective norm of the spiked model, which exceeds the injective norm of the unspiked model.

Do tensors exhibit a BBP-style push-out effect, and if so, how large is it?
We show that the injective norm of a spiked tensor exceeds that of an unspiked tensor strictly before the injective norm of the spike exceeds that of the noise, much as the threshold $\lambda = 1$ in the matrix case strictly precedes $\lambda = 2$, the spectral norm of the noise. We identify the asymptotic size of this gap as the order $d$ becomes large, up to a small constant factor.
Much as random matrix theory provides a theoretical foundation for PCA, these questions probe the statistical limits of tensor PCA, the estimation of a low-rank spike from a spiked tensor. Such problems arise naturally in topic modeling [AGH14], in the study of hypergraphs [DBKP11], and more generally in moment-based approaches to estimation, when it may be desirable to detect low-rank structure in higher empirical moments. As many such problems involve extra structure such as sparsity in the signal, we allow the spike to be drawn from various priors, such as a distribution over sparse vectors. For each prior, we will investigate the detection problem of distinguishing the spiked and unspiked models, as well as the recovery problem of estimating the spike.
Our techniques and prior work.
We will prove information-theoretic lower bounds using the second moment method associated with the statistical notion of contiguity. By computing a particular second moment, one can show that the spiked and unspiked models are contiguous, implying that they cannot be reliably distinguished. This second moment method originated in the study of random graphs (see e.g. [Wor99, RW94, Jan95, MRRW97]) but has since been applied to various average-case computational problems such as the stochastic block model [MNS15, BMNN16], submatrix localization [BMV17], Gaussian mixture clustering [BMV17], synchronization problems over compact groups [PWBM16], and even spiked tensor PCA [MRZ15] and sparse PCA [BMV17]. However, many of the previous results are not tight in particular asymptotic regimes; curiously, in certain regimes they are loose by precisely a factor of $\sqrt{2}$ in the signal-to-noise ratio [MRZ15, BMNN16, BMVX16, PWBM16].
Our main technical contribution is a modification of the second moment method that closes (at least some of) these gaps. Specifically, we close the gap for spherically-spiked (i.e., with a uniformly random unit vector as spike) tensor PCA in the limit of large tensor order [MRZ15], and the gap for sparse PCA in the limit of low sparsity [BMVX16]. (A more recent update [BMV17] independently closes the asymptotic $\sqrt{2}$ factor gap from [BMVX16], using a different modification of the second moment method.) Our technique, which we call noise conditioning, is based on conditioning away from rare bad events that depend jointly on the signal and noise. We expect that this technique can also be used to close several other $\sqrt{2}$ factor gaps of the same nature. Another application in which we have found noise conditioning to be fruitful is in contiguity results for the Rademacher-spiked Wishart model [PWBM16]. For this problem the basic second moment method struggles quite badly because, due to a certain symmetry, it gives the same results for the positively- and negatively-spiked regimes, even though these two regimes have very different thresholds; the noise conditioning method is able to break this symmetry, yielding tight or almost-tight results in both regimes.
Our noise conditioning method is somewhat reminiscent of other modified second moment methods that have appeared in contexts such as branching Brownian motion [Bra78], branching random walks [Aïd13, BDZ14], the Gaussian free field [BDG01, BZ12, BDZ16], cover times for random walks [DPRZ04], and thresholds for random constraint satisfaction problems (e.g. colorability, $k$-SAT) [COZ12, COP12, COV13, COP13, COP16].
Our upper bound for the spherical prior, via a lower bound on the spiked injective norm, is based on a direct analysis of how vectors close to the spike can align constructively with fluctuations in the noise to produce a larger injective norm than the spike alone. In particular, we consider how such vectors align with submatrices of the given tensor, and leverage existing results from spiked matrix models. Upper bounds for structured priors are obtained through naïve union bounds on the maximum likelihood value.
A variety of other techniques originating from statistical physics have also been successful in tackling structured inference or optimization problems involving large random systems, including random matrices and tensors. For instance, random tensors are intimately connected to spin glasses with $p$-spin interactions [Gar85, CS92]. The so-called replica method gives extremely precise nonrigorous solutions to these types of problems (see [MM09] for an introduction). In some cases, such as the celebrated Parisi formula for the ground state of the Sherrington–Kirkpatrick spin glass (which can be thought of as the maximum value of $x^\top W x$ over $x \in \{\pm 1/\sqrt{n}\}^n$, where $W$ is a Gaussian Wigner matrix), the replica prediction has been rigorously proven to be correct [Tal06b]; we also rigorously know the injective norm of a random tensor of any order (in the high-dimensional limit), as well as various structural properties of the critical points of the associated maximization problem [Tal06a, ABČ13, Sub15]. Furthermore, for a variety of structured spiked matrix problems such as sparse PCA (with constant-fraction sparsity), the statistically-optimal mean squared error can be exactly characterized in the high-dimensional limit for any level of sparsity and any signal-to-noise ratio [KXZ16, BDM16, LM16]. Additionally, [KM09] implies bounds on the statistical threshold for tensor PCA with a $\pm 1$-valued spike (which we discuss in Appendix B). Some key techniques used in the above works include Guerra interpolation [Gue03], the Kac–Rice formula (see [ABČ13]), and the approximate message passing (AMP) framework [DMM09, DMM10, BM11, JM13] (see also [DM14, DAM16, MR16, BDM16]).
In comparison to these techniques from statistical physics, the second moment method for contiguity typically does not yield results that are as sharp, but it has the advantage of being quite simple and widely applicable. In particular, it can be applied to problems such as the sparse stochastic block model with constant average degree [MNS15, BMNN16], for which various techniques from statistical physics do not seem to apply (see e.g. [DAM16, LM16]). Another advantage of the second moment method is that it addresses the detection problem instead of only the recovery problem, and furthermore implies bounds on hypothesis testing power below the detection threshold [PWBM16]. (However, unlike e.g. AMP, the second moment method only tells us about the threshold for nontrivial recovery and not the optimal recovery error at each value of $\lambda$ above the threshold.)
We remark that our results on statistical indistinguishability have concrete implications for various probabilistic quantities. In particular, for any quantity that converges in probability to a constant under the spiked and unspiked models (such as the injective norm), the limiting values must agree throughout the subcritical region of the spiked model (signal strength below the detection threshold). (In fact, for tensors of order $d \ge 3$ we show that the unspiked and subcritical-spiked distributions differ by $o(1)$ in total variation distance in the high-dimensional limit, implying that any quantity with a limit in distribution must converge to the same distribution under both models.) For example, in the unspiked model, [ABČ13, Sub15] give a detailed description of the energy landscape, i.e. the number of critical points of $\langle T, x^{\otimes d}\rangle$ (over the unit sphere) of any given value and index; our results immediately imply that the same is true for subcritical spiked tensors. An interesting problem, which we largely do not address here, is to characterize the energy landscape above the detection threshold.
It is important to note that we are studying information-theoretic limits rather than computational ones. All of the upper bounds in this paper are achieved by inefficient algorithms such as exhaustive search over all possible spikes. There is good evidence, in the form of sum-of-squares lower bounds, that there is a significant gap between what is possible statistically and what is possible computationally for tensor PCA and related problems [HSS15, BHK16]. The case of matrices ($d = 2$) is of course an exception; there, spectral algorithms achieve the optimal detection and recovery threshold in many cases. Various efficient algorithms for spiked tensor PCA are considered by [RM14], but (as is believed to be necessary) these operate only in a regime that is quite far from the information-theoretic threshold.
1.1 Preliminaries
Definition 1.1. We define a Wigner tensor of order $d$ by the following sampling procedure: an asymmetric precursor $G \in (\mathbb{R}^n)^{\otimes d}$ is drawn with entries sampled i.i.d. from $\mathcal{N}(0, 2/n)$, and $G$ is then averaged over all permutations of its indices to form a symmetric tensor $T$. Thus, the distribution of $T$ has a Gaussian density on the space of symmetric tensors; we refer to it as the order-$d$ Wigner tensor distribution.
Note that a typical entry of $T$ (one with no repeated indices) is distributed as $\mathcal{N}\!\left(0, \frac{2}{d!\,n}\right)$, and for any unit vector $x$ we have $\langle T, x^{\otimes d}\rangle \sim \mathcal{N}(0, 2/n)$. This normalization agrees with that of [MRZ15], but other conventions exist in the literature.
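The symmetrization in Definition 1.1 can be sketched as follows; this is our illustration, with unit-variance precursor entries for simplicity (rescale to match whichever variance convention is in use).

```python
import itertools
from math import factorial

import numpy as np

# Sketch of the Wigner-tensor sampling procedure: draw an asymmetric Gaussian
# precursor G and average it over all d! permutations of its indices to
# obtain a symmetric tensor T. Precursor entries here are N(0, 1); the
# paper's normalization rescales the variance with n.

def wigner_tensor(n, d, rng):
    G = rng.normal(size=(n,) * d)
    T = np.zeros_like(G)
    for perm in itertools.permutations(range(d)):
        T += np.transpose(G, perm)
    return T / factorial(d)
```

The resulting tensor is invariant under any permutation of its indices, which the contraction $\langle T, x^{\otimes d}\rangle$ implicitly relies on.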
Let the prior $\mathcal{X} = \{\mathcal{X}_n\}$ be a family of distributions (one for each $n$) over unit vectors in $\mathbb{R}^n$. Define the spiked distribution to be the law of $T = \lambda x^{\otimes d} + W$, where $W$ is a Wigner tensor of order $d$ (as in Definition 1.1), $\lambda \ge 0$ is a signal-to-noise parameter, and $x$ is drawn from $\mathcal{X}_n$.
The injective norm of a tensor $T$ is defined as $\|T\| = \max \langle T, x^{\otimes d}\rangle$ over unit vectors $x$. For each $d$ it is known that the injective norm of a random tensor converges in probability to a particular value (see Theorem 3.3) [CS92, Tal06a, ABČ13, RM14, Sub15]. For $d = 2$ this value is $2$, the spectral norm of a Wigner matrix; explicit values are also known for each $d \ge 3$, as well as the growth rate as $d \to \infty$.
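Computing the injective norm exactly is intractable in general, but a heuristic lower bound is easy to obtain; the following sketch (ours, for illustration only) uses tensor power iteration with random restarts for $d = 3$, and it need not find the global maximum.

```python
import numpy as np

# Heuristic lower bound on the injective norm max_{|x|=1} <T, x (x) x (x) x>
# of a symmetric order-3 tensor, via power iteration with random restarts.
# Since the order is odd, replacing x by -x flips the sign of the value, so
# we may take absolute values.

def injective_norm_lb(T, restarts=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    best = 0.0
    for _ in range(restarts):
        x = rng.normal(size=n)
        x /= np.linalg.norm(x)
        for _ in range(iters):
            y = np.einsum('ijk,j,k->i', T, x, x)   # ascent direction T(., x, x)
            norm_y = np.linalg.norm(y)
            if norm_y == 0.0:
                break
            x = y / norm_y
        best = max(best, abs(np.einsum('ijk,i,j,k->', T, x, x, x)))
    return best
```

On a pure spike $T = v^{\otimes 3}$ with $\|v\| = 1$, the iteration converges to $v$ and returns a value of $1$.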
In the matrix case we have the following classical eigenvalue transition for spiked Wigner matrices. Theorem ([Péc06, FP07, CDMF09, BGN11]). Let $M = \lambda v v^\top + W$ where $v$ is a unit vector in $\mathbb{R}^n$ and $W$ is an $n \times n$ Gaussian Wigner matrix (symmetric with off-diagonal entries $\mathcal{N}(0, 1/n)$, diagonal entries $\mathcal{N}(0, 2/n)$, and all entries independent up to symmetry).

If $\lambda \le 1$, the top eigenvalue of $M$ converges almost surely to $2$ as $n \to \infty$, and the top (unit-norm) eigenvector $\hat{v}$ has trivial correlation with the spike: $\langle v, \hat{v}\rangle^2 \to 0$ almost surely.

If $\lambda > 1$, the top eigenvalue converges almost surely to $\lambda + 1/\lambda > 2$ and the top eigenvector $\hat{v}$ has nontrivial correlation with the spike: $\langle v, \hat{v}\rangle^2 \to 1 - \lambda^{-2}$ almost surely.
All of our results will consider the limit $n \to \infty$ (with the order $d$ fixed). We will be interested in the following detection and recovery problems. For the detection problems, we are given a single sample drawn from either the spiked distribution or the corresponding unspiked distribution (say each is chosen with probability $1/2$) and our goal is to decide which distribution the sample came from.

Strong detection: Distinguish between the spiked and unspiked distributions with success probability $1 - o(1)$ as $n \to \infty$.

Weak detection: Distinguish between the spiked and unspiked distributions with success probability at least $\frac{1}{2} + \epsilon$ as $n \to \infty$, for some $\epsilon > 0$ that does not depend on $n$.

Weak recovery: Given a sample from the spiked model, output a unit vector $\hat{x}$ with nontrivial correlation with the spike: $\mathbb{E}\,\langle \hat{x}, x\rangle^2 \ge \epsilon$ for some $\epsilon > 0$ that does not depend on $n$.
Note that the exponent of $2$ in $\langle \hat{x}, x\rangle^2$ captures the fact that when $d$ is even we can only hope to learn the spike up to a global sign flip.
The three problems above are related as follows. Clearly strong detection implies weak detection, but there is no formal connection between detection and recovery in general (see e.g. [BMV17] for a simple counterexample). Typically, however, strong detection and weak recovery tend to be equivalent in the sense that they exhibit the same threshold for $\lambda$ above which they are possible. For spiked tensors with $d \ge 3$, we will see that this also tends to coincide with the weak detection threshold. However, for matrices ($d = 2$), weak detection is actually possible below the strong detection threshold; in fact it is possible for any $\lambda > 0$, simply by inspecting the trace of the matrix.
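To see why the trace works for $d = 2$: in the Wigner normalization above, $\mathrm{tr}(W)$ is a sum of $n$ diagonal entries, each $\mathcal{N}(0, 2/n)$, so $\mathrm{tr}(W) \sim \mathcal{N}(0, 2)$, while the spike adds $\lambda$ to it. The following simulation is our illustration; the threshold $\lambda/2$ is just one convenient choice.

```python
import numpy as np

# Weak detection via the trace for d = 2: tr(W) ~ N(0, 2), while
# tr(lambda vv^T + W) ~ N(lambda, 2). The test "trace > lambda / 2" beats
# random guessing by a constant margin for any lambda > 0, but never
# achieves full power, since the two Gaussians overlap.

def trace_test_accuracy(lam, trials, rng):
    correct = 0
    for _ in range(trials):
        spiked = rng.random() < 0.5
        trace = rng.normal(0.0, np.sqrt(2.0)) + (lam if spiked else 0.0)
        correct += ((trace > lam / 2.0) == spiked)
    return correct / trials
```

The success probability here is $\Phi(\lambda / (2\sqrt{2}))$, strictly between $\frac12$ and $1$ for every fixed $\lambda > 0$, matching the definition of weak (but not strong) detection.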
1.2 Summary of results
Our results apply to a wide range of spike priors, but here we present specific results for three priors:

[label=()]

the spherical prior, in which $x$ is drawn uniformly from the unit sphere in $\mathbb{R}^n$,

the Rademacher prior, in which the entries of $x$ are drawn i.i.d. from $\pm 1/\sqrt{n}$ (uniformly),

the sparse Rademacher prior with density $\rho \in (0, 1]$, in which a random support of exactly $\rho n$ entries is chosen uniformly, and those entries of $x$ are drawn i.i.d. from $\pm 1/\sqrt{\rho n}$ (with all other entries $0$).
Although we will give bounds for every $d$, our results are most precisely stated in the limit $d \to \infty$, in which case the lower and upper bounds match asymptotically.
[Spherical prior, large $d$] Consider the spherical prior. There exist bounds $\lambda_d^-$ and $\lambda_d^+$, both behaving as $\sqrt{2 \log d}\,(1 + o(1))$ as $d \to \infty$, and with $\lambda_d^- < \lambda_d^+$ for each $d$, such that

if $\lambda < \lambda_d^-$ then weak detection and weak recovery are impossible,

if $\lambda > \lambda_d^+$ then strong detection and weak recovery are possible.
Recall that $\eta_d$ denotes the limiting value of the injective norm of the unspiked tensor; see Theorem 3.3 for a formal statement, including the value of $\eta_d$. Explicit formulas for the bounds are given by Theorem 3.2 (the lower bound $\lambda_d^-$) and Theorem 3.3 (the upper bound $\lambda_d^+$). We obtain more detailed asymptotic descriptions in Appendix A: in the limit $d \to \infty$, the quantities $\lambda_d^-$, $\lambda_d^+$, and $\eta_d$ all behave as $\sqrt{2 \log d}\,(1 + o(1))$. Our lower bound closes a $\sqrt{2}$ factor gap in [MRZ15]. A nonrigorous calculation via the replica method (Appendix B) suggests that the true statistical threshold matches the asymptotics of the upper bound. These various quantities are depicted in Figure 1.
It is clear that once $\lambda$ exceeds the injective norm of the noise tensor, strong detection is possible (by thresholding the injective norm). Our upper bound implies more: for each $d$, the spherically-spiked and unspiked tensor models can be distinguished as soon as $\lambda > \lambda_d^+$, a threshold strictly below the injective norm of the noise; indeed, we show (Theorem 3.3) that the injective norm of the spiked model exceeds that of the unspiked model for such $\lambda$. This mirrors the eigenvalue transition for $d = 2$ (see Theorem 1.1), in which an eigenvalue leaves the semicircular bulk once $\lambda > 1$, strictly before the spike exceeds the bulk in spectral norm at $\lambda = 2$. The results above determine the size of this 'BBP gap' up to a constant factor as $d \to \infty$.
We now present our results for discrete priors. One qualitative difference from the above is that for discrete priors, the statistical threshold tends to remain bounded as $d \to \infty$:
[Rademacher prior, large $d$] Consider the Rademacher prior. There exist bounds $\lambda_d^- \le \lambda_d^+$, both converging to $2\sqrt{\log 2}$ as $d \to \infty$, such that

if $\lambda < \lambda_d^-$ then weak detection and weak recovery are impossible,

if $\lambda > \lambda_d^+$ then strong detection and weak recovery are possible.
[Sparse Rademacher prior, large $d$] Consider the sparse Rademacher prior with $\rho \in (0, 1]$ fixed. There exist bounds $\lambda_d^- \le \lambda_d^+$, both converging to $2\sqrt{\rho \log 2 + H(\rho)}$ as $d \to \infty$, such that

if $\lambda < \lambda_d^-$ then weak detection and weak recovery are impossible,

if $\lambda > \lambda_d^+$ then strong detection and weak recovery are possible.
Here $H$ denotes the binary entropy: $H(\rho) = -\rho \log \rho - (1 - \rho)\log(1 - \rho)$, with natural logarithm.
See Theorems 4.3 and 4.4 for explicit formulas for the lower bounds. The upper bounds follow from Proposition 4.1. We remark that the hardest case for the sparse Rademacher prior is $\rho = 2/3$ (maximizing $\rho \log 2 + H(\rho)$), where the three types of entries ($+1/\sqrt{\rho n}$, $-1/\sqrt{\rho n}$, $0$) occur in equal proportion. In the above three cases (spherical, Rademacher, sparse Rademacher), the lower and upper bounds agree up to a $1 + o(1)$ factor as $d \to \infty$. The bounds for each $d$ are described as finite-dimensional optimization problems and are easy to compute numerically. See Figure 1 for a comparison of the bounds for various values of $d$.
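The claim that $\rho = 2/3$ is the hardest sparsity level can be checked numerically; the following sketch (ours) maximizes $\rho \log 2 + H(\rho)$ over a grid, using the natural-log binary entropy defined above.

```python
import numpy as np

# Numerical check that rho = 2/3 maximizes rho * log 2 + H(rho), with H the
# natural-log binary entropy. The maximum value is log 3, attained when the
# three entry types (+1/sqrt(rho n), -1/sqrt(rho n), 0) occur in equal
# proportion.

def binary_entropy(r):
    return -r * np.log(r) - (1.0 - r) * np.log(1.0 - r)

def exponent(r):
    return r * np.log(2.0) + binary_entropy(r)

rhos = np.linspace(1e-4, 1.0 - 1e-4, 100001)
best_rho = rhos[np.argmax(exponent(rhos))]
print(best_rho)   # close to 2/3
```

The identity $\frac{2}{3}\log 2 + H(2/3) = \log 3$ also follows by direct calculation.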
The special case $d = 2$ has been previously understood for the spherical and Rademacher priors [DAM16, MRZ15, PWBM16]. Namely, the threshold for both strong detection and weak recovery occurs precisely at $\lambda = 1$, matching the eigenvalue transition.
Another asymptotic regime that we consider is the sparse Rademacher prior with $d$ fixed and sparsity $\rho \to 0$, i.e. the limit of extremely sparse vectors.
[Sparse Rademacher prior, $\rho \to 0$] Fix $d$ and consider the sparse Rademacher prior in the limit $\rho \to 0$. There exists a bound $\lambda^*(\rho)$, with explicit asymptotics as $\rho \to 0$, such that

if $\lambda < \lambda^*(\rho)$ then strong detection is impossible,

if $\lambda > \lambda^*(\rho)$ then strong detection and weak recovery are possible.
For $d \ge 3$, the lower bound also rules out weak detection and weak recovery.
The case $d = 2$ of the above theorem was previously considered by [BMVX16], where they give a tight upper bound and a lower bound that is loose by a factor of $\sqrt{2}$. (A more recent update [BMV17] independently closes this asymptotic $\sqrt{2}$ factor gap, using a different modification of the second moment method.) Here we close the gap by improving the lower bound. Our upper bounds for discrete priors are straightforward generalizations of their upper bound, based on exhaustive search over all possible spikes. The recovery problem for the sparse Rademacher prior (for $d = 2$) has also been studied using tools from statistical physics. In particular, the weak recovery threshold is known exactly, as well as the optimal recovery quality at each value of $\lambda$ [LKZ15, KXZ16, BDM16, LM16]. (These results actually consider a variant of the sparse Rademacher prior where the entries of the spike are i.i.d., but we believe this does not change the information-theoretic limits of the problem.) See Figure 2 for a comparison of bounds for the sparse Rademacher prior.
In Appendix B, we present nonrigorous calculations through the replica method, predicting the precise location of the phase transition for each $d$, in the spherical and Rademacher cases. We have high confidence in the correctness of these predictions, since replica predictions have been rigorously shown to be correct in various related settings (e.g. [Tal06b, Tal06a, KXZ16, BDM16, LM16]). For the Rademacher prior, it can be deduced from [KM09] that the replica prediction is a rigorous upper bound on the true threshold; see Appendix B.
Our upper bounds for discrete priors (Rademacher and sparse Rademacher) are obtained by straightforward analysis of the MLE (maximum likelihood estimator), i.e. the inefficient procedure that enumerates all possible spikes and tests which one is most likely. Our upper bound for the spherical prior, showing strict separation from the injective norm, is proven by harnessing known properties of the $d = 2$ case (the eigenvalue transition in spiked matrices). Our lower bound techniques are discussed in the next section.
The rest of the paper is organized as follows. In Section 2 we present the main tools used in our lower bounds, including the statement of our main lower bound theorem (Theorem 2.3) along with a sketch of its proof using our noise conditioning method. In Section 3 we prove our lower and upper bounds for the spherical prior (assuming Theorem 2.3). In Section 4 we prove our lower and upper bounds for discrete priors (again assuming Theorem 2.3), including the Rademacher and sparse Rademacher priors. In Section 5 we prove Theorem 2.3. Some results are deferred to the appendix, including the replica calculations for the spherical and Rademacher priors (Appendix B).
2 Lower bound techniques
2.1 $\chi^2$ divergence, contiguity, and nonrecovery
$\chi^2$ divergence and TV distance
For probability distributions $P$ and $Q$, with $P$ absolutely continuous with respect to $Q$, the $\chi^2$ divergence is defined as
\[
\chi^2(P \,\|\, Q) = \mathbb{E}_{Q}\left[\left(\frac{dP}{dQ}\right)^{2}\right] - 1.
\]
When $P, Q$ are continuous distributions with densities $p, q$ respectively, note that this is simply
\[
\chi^2(P \,\|\, Q) = \int \frac{p(x)^2}{q(x)}\,dx - 1 = \int \frac{(p(x) - q(x))^2}{q(x)}\,dx.
\]
Let $P_n$ be the spiked ensemble, and let $Q_n$ be its unspiked analogue. Our lower bounds will proceed by bounding $\chi^2(P_n \,\|\, Q_n)$ (or slight variants thereof) and drawing various conclusions from the value.
For tensors of order $d \ge 3$, we will bound the TV (total variation) distance via the inequality
\[
\mathrm{TV}(P_n, Q_n) \le \frac{1}{2}\sqrt{\chi^2(P_n \,\|\, Q_n)}. \tag{1}
\]
In particular, if we can establish that $\chi^2(P_n \,\|\, Q_n) \to 0$ as $n \to \infty$, it follows that $\mathrm{TV}(P_n, Q_n) \to 0$, implying that weak detection is impossible.
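For completeness, inequality (1) is the standard Cauchy–Schwarz comparison between TV distance and $\chi^2$ divergence; a one-line derivation (standard, not specific to this paper) for densities $p, q$:

```latex
\mathrm{TV}(P, Q)
  = \frac{1}{2} \int |p - q|
  = \frac{1}{2} \int \frac{|p - q|}{\sqrt{q}} \, \sqrt{q}
  \le \frac{1}{2} \left( \int \frac{(p - q)^2}{q} \right)^{1/2}
      \left( \int q \right)^{1/2}
  = \frac{1}{2} \sqrt{\chi^2(P \,\|\, Q)}.
```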
$\chi^2$ divergence and contiguity
For matrices ($d = 2$), we cannot hope to show $\mathrm{TV}(P_n, Q_n) \to 0$, because weak detection is possible for all $\lambda > 0$ using the trace of the matrix. We instead show lower bounds against strong detection, using the notion of contiguity introduced by Le Cam [LC60].
[[LC60]] Let $P_n$, $Q_n$ be sequences of distributions defined on a common measurable space. We say that $P_n$ is contiguous to $Q_n$, and write $P_n \triangleleft Q_n$, if for any sequence of events $A_n$ with $Q_n(A_n) \to 0$ as $n \to \infty$, we also have $P_n(A_n) \to 0$ as $n \to \infty$.
Note that $P_n \triangleleft Q_n$ implies that strong detection is impossible: to see this, suppose we had a distinguisher succeeding with probability $1 - o(1)$, and let $A_n$ be the event that it outputs 'spiked'; then $Q_n(A_n) \to 0$ while $P_n(A_n) \to 1$, a contradiction. (The definition of contiguity is asymmetric, but contiguity in either direction implies that strong detection is impossible.)
The following second moment method connects the $\chi^2$ divergence with contiguity.
[see e.g. [MRZ15, BMV17]] If $\chi^2(P_n \,\|\, Q_n)$ remains bounded as $n \to \infty$, then $P_n \triangleleft Q_n$. This is referred to as a second moment method due to the second moment $\mathbb{E}_{Q_n}[(dP_n/dQ_n)^2]$ appearing in the definition of the $\chi^2$ divergence.
Proof.
Let $A_n$ be a sequence of events with $Q_n(A_n) \to 0$. Using Cauchy–Schwarz,
\[
P_n(A_n) = \mathbb{E}_{Q_n}\left[\frac{dP_n}{dQ_n}\,\mathbf{1}_{A_n}\right]
\le \left(\mathbb{E}_{Q_n}\left[\left(\frac{dP_n}{dQ_n}\right)^{2}\right]\right)^{1/2} Q_n(A_n)^{1/2}.
\]
The bound on the $\chi^2$ divergence implies that the first factor on the right-hand side is bounded. This means that if $Q_n(A_n) \to 0$ then also $P_n(A_n) \to 0$. ∎
To summarize, we now know that if the $\chi^2$ divergence is $o(1)$ then weak detection is impossible, and if the $\chi^2$ divergence is $O(1)$ then strong detection is impossible.
We remark that computing $\chi^2(Q_n \,\|\, P_n)$ seems significantly harder than computing $\chi^2(P_n \,\|\, Q_n)$ (where $P_n$ is spiked and $Q_n$ is unspiked). Establishing contiguity in the opposite direction typically requires additional work such as the small subgraph conditioning method (see e.g. [MNS15, BMNN16, Wor99]). Our methods will only require us to compute the $\chi^2$ divergence in the 'easy' direction.
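As a toy sanity check of such a second moment computation (our illustration; a one-dimensional stand-in for the tensor model), for scalar Gaussians $P = \mathcal{N}(\mu, 1)$ and $Q = \mathcal{N}(0, 1)$ one has $\chi^2(P \,\|\, Q) = e^{\mu^2} - 1$, which Monte Carlo reproduces:

```python
import numpy as np

# chi^2(P || Q) = E_Q[(dP/dQ)^2] - 1. For P = N(mu, 1) and Q = N(0, 1) the
# likelihood ratio at z is exp(mu * z - mu^2 / 2), and a direct Gaussian
# integral gives chi^2 = exp(mu^2) - 1.

def chi2_monte_carlo(mu, trials, rng):
    z = rng.normal(size=trials)              # samples from Q
    lr = np.exp(mu * z - mu ** 2 / 2.0)      # dP/dQ evaluated at z
    return float(np.mean(lr ** 2) - 1.0)
```

Note that the divergence is exponential in $\mu^2$, which is the one-dimensional shadow of why rare events can dominate the second moment in the tensor setting.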
$\chi^2$ divergence and conditioning
It turns out that in some cases, the $\chi^2$ divergence can be dominated by extremely rare 'bad' events, causing it to be unbounded even when the two distributions are in fact contiguous. Towards fixing this, a key observation is that if we replace $P_n$ by a modified distribution $\tilde{P}_n$ that only differs from $P_n$ with probability $o(1)$ (i.e. $\mathrm{TV}(P_n, \tilde{P}_n) = o(1)$), this does not affect whether or not the detection and recovery problems can be solved. Therefore, by choosing $\tilde{P}_n$ to condition on the high-probability 'good' events, we can hope to make $\chi^2(\tilde{P}_n \,\|\, Q_n)$ controlled even in some cases when $\chi^2(P_n \,\|\, Q_n)$ is not. If we can show $\chi^2(\tilde{P}_n \,\|\, Q_n) = O(1)$ it follows that $\tilde{P}_n \triangleleft Q_n$, which implies $P_n \triangleleft Q_n$ as desired. Prior work (e.g. [BMNN16, BMVX16, PWBM16]) has applied this conditioning technique to 'good' events depending only on the spike $x$; specifically, $\tilde{P}_n$ enforces that the entries of $x$ occur in close-to-typical proportions. As the main technical novelty of the current work, we introduce a technique that we call noise conditioning, where we condition on 'good' events that depend jointly on the spike $x$ and the noise. Specifically, we require the noise in the direction of the spike to take a close-to-typical value. This method will allow us to obtain asymptotically tight lower bounds in cases where prior work [MRZ15, BMVX16] has been loose by a constant factor of $\sqrt{2}$. The noise conditioning method is discussed further in Section 2.4.
$\chi^2$ divergence and nonrecovery
First consider the case where we are able to show $\mathrm{TV}(P_n, Q_n) \to 0$ as $n \to \infty$. It follows immediately that weak recovery is impossible, provided that weak recovery is impossible in the unspiked model $Q_n$. Weak recovery is of course impossible in $Q_n$ for any reasonable spike prior, and if we had an 'unreasonable' prior then we should not have had $\mathrm{TV}(P_n, Q_n) \to 0$ in the first place. Our nonrecovery proof in Section 5.5 makes this precise, showing that for the spiked tensor model, if $\mathrm{TV}(P_n, Q_n) \to 0$ then weak recovery is impossible.
In the case where we only show that the $\chi^2$ divergence is bounded, nonrecovery is less straightforward. Various works have forged a connection between bounded $\chi^2$ divergence and nonrecovery [BMNN16, BMVX16]. For a wide class of problems with additive Gaussian noise, Theorem 4 of [BMVX16] implies that if the $\chi^2$ divergence is bounded then weak recovery is impossible. Unfortunately their result cannot be immediately applied in our setting because, due to our noise conditioning technique, we do not exactly have an additive Gaussian noise model. We will leave the proof of nonrecovery in this case for future work, although we strongly believe it to be true (in the same regime where strong detection is impossible).
2.2 Large deviation rate functions
Our lower bounds on tensor PCA problems will depend on the prior through tail probabilities of the correlation $\langle x, x'\rangle$ of two independent spikes drawn from the prior; note that this quantity is typically of order $n^{-1/2}$ but may be as large as $1$. We require detailed tail information, which we summarize in two objects: a rate function, describing the asymptotic behavior of deviations of constant order, and a local subgaussian condition [CCK06], bounding the deviations of size $O(n^{-1/2})$ nonasymptotically.
First we define the rate function corresponding to a prior $\mathcal{X}$. Intuitively, the rate function $I$ is roughly defined by $\Pr[\langle x, x'\rangle \ge t] \approx e^{-n I(t)}$. More formally, one should think of $I(t)$ as being equal to $\lim_{n \to \infty} -\frac{1}{n}\log \Pr[\langle x, x'\rangle \ge t]$, but we will not technically require this in our definition since it will be ok to use a weaker (smaller) rate function than the 'true' one (although we will always use the 'true' one in our examples). For technical reasons, we formally define the rate function as follows.
Let $\mathcal{X}$ be a prior supported on unit vectors in $\mathbb{R}^n$. For $x, x'$ drawn independently from $\mathcal{X}$ and $t \in [0, 1]$, let
\[
f_n(t) = -\frac{1}{n}\log \Pr\left[\langle x, x'\rangle \ge t\right].
\]
Suppose we have $f_n \ge g_n$ for some sequence of functions $g_n$ that converges uniformly on $[0, 1]$ to $I$ as $n \to \infty$. Then we call such $I$ a rate function of the prior $\mathcal{X}$.
The condition ($g_n \to I$ uniformly) is satisfied if we have a tail bound of the form $\Pr[\langle x, x'\rangle \ge t] \le e^{o(n)}\,e^{-n I(t)}$, with the $e^{o(n)}$ prefactor uniform in $t$.
We will assume that the distribution of $\langle x, x'\rangle$ is symmetric about zero, so that only the upper tail bound is required. Note the following basic properties of the rate function (which we can assume without loss of generality): $I(0) = 0$ and $I$ is monotone increasing. We will see that for continuous priors (such as the spherical prior), the rate function blows up to infinity at $t = 1$, whereas for discrete priors (such as Rademacher) it remains finite at $t = 1$.
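As a concrete example (ours, using standard large deviations rather than anything specific to this paper): for the Rademacher prior, $\langle x, x'\rangle$ is a normalized sum of $n$ i.i.d. signs, and Cramér's theorem gives the rate function $I(t) = \log 2 - H\!\left(\frac{1+t}{2}\right)$, with $H$ the natural-log binary entropy. Note $I(0) = 0$, $I$ is increasing, and $I(1) = \log 2$ is finite, as expected for a discrete prior.

```python
import numpy as np

# Rate function of the Rademacher prior via Cramer's theorem:
# I(t) = log 2 - H((1 + t) / 2), with H the natural-log binary entropy.
# empirical_rate estimates -(1/n) log P[<x, x'> >= t] by Monte Carlo; by the
# Chernoff bound it always lies above I(t), approaching it as n grows.

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def rate(t):
    return np.log(2.0) - binary_entropy((1.0 + t) / 2.0)

def empirical_rate(t, n, trials, rng):
    overlaps = rng.choice([-1, 1], size=(trials, n)).mean(axis=1)
    frac = (overlaps >= t).mean()
    return -np.log(frac) / n if frac > 0 else np.inf
```

The gap between the empirical rate and $I(t)$ at moderate $n$ reflects the subexponential prefactor absorbed by the $e^{o(n)}$ term above.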
We now present the local subgaussianity condition which gives tighter control of the small deviations.
Let $\mathcal{X}$ be a prior supported on unit vectors in $\mathbb{R}^n$. We say that $\mathcal{X}$ is locally subgaussian with constant $\gamma$ if for all $\epsilon > 0$, there exists $\delta > 0$ with
\[
\Pr\left[\langle x, x'\rangle \ge t\right] \lesssim \exp\!\left(-(1 - \epsilon)\,\frac{n t^2}{2\gamma}\right) \quad \text{for all } 0 \le t \le \delta.
\]
Here $\lesssim$ means there exists a constant $C$ such that the left-hand side is at most $C$ times the right-hand side. (Above, $C$ is allowed to depend on $\epsilon$ but not $n$.) Note that if $\sqrt{n}\,\langle x, x'\rangle$ is subgaussian with variance proxy $\gamma$, then $\mathcal{X}$ is locally subgaussian with constant $\gamma$. However, the converse is false: for instance, the sparse Rademacher prior with sufficiently small density $\rho$ has a strictly better local subgaussian constant than its subgaussian constant.
2.3 Main lower bound theorem
The following theorem encapsulates our lower bound result:
Let $d \ge 2$, and let $\mathcal{X}$ be a prior supported on unit vectors in $\mathbb{R}^n$. Suppose the following holds for some $\lambda^* > 0$.

[label=()]

The distribution of $\langle x, x'\rangle$ (where $x, x'$ are drawn independently from $\mathcal{X}$) is symmetric about zero,

$\mathcal{X}$ has a rate function $I$,

the rate function $I(t)$ exceeds an explicit function of $\lambda^*$ and $t^d$, for all $t \in (0, 1]$ (see Section 5 for the precise inequality),

$\mathcal{X}$ is locally subgaussian with some constant $\gamma$,

if $d = 2$, a slightly strengthened form of condition (iii) holds (see Section 5 for the precise statement).
Then for all $\lambda < \lambda^*$:

the spiked distribution $P_n$ is contiguous to the unspiked distribution $Q_n$, so no hypothesis test to distinguish them achieves full power (i.e. strong detection is impossible),

if $d \ge 3$, $\mathrm{TV}(P_n, Q_n) \to 0$, so every hypothesis test has asymptotically no advantage over random guessing (i.e. weak detection is impossible),

if $d \ge 3$, no estimator $\hat{x}$, when given a sample from $P_n$, achieves expected correlation $\mathbb{E}\,\langle \hat{x}, x\rangle^2$ (where $x$ is the true spike) bounded above $0$ as $n \to \infty$ (i.e. weak recovery is impossible).
The proof is deferred to Section 5. The essential condition is (iii) and the others exist for technical reasons. We remark that condition (v) is simply a slight strengthening of condition (iii) in the following sense.
Let . Suppose the rate function admits a local Chernoff-style tail bound of the following form: there exists such that
Then condition (iii) implies conditions (iv) and (v). Note that this is stronger than Remark 2.2 because there is no factor.
Proof.
Fix . Let , to be chosen later. For all we have such that for all . For , and so there exists
Choose small enough so that this is at most , the local subgaussian condition. ∎
2.4 Proof idea: noise conditioning
The proof of our lower bounds hinges on a simple “noise conditioning” modification to the second moment method (divergence) for proving contiguity. We have found that this method yields asymptotically tight bounds in various regimes where the basic second moment method is loose by a factor of . (Below we will see how the number arises.) We expect that the same idea could be used to close the various other factor gaps that exist for contiguity arguments in prior work: the sparse stochastic block model in the regime of a constant number of equally-sized communities with low signal-to-noise [BMNN16]; submatrix localization in the limit of a large constant number of blocks [BMVX16]; Gaussian mixture clustering with a large constant number of clusters [BMVX16]; and synchronization over a finite group of large constant size (with either truth-or-Haar noise or Gaussian noise on all frequencies) [PWBM16]. However, we leave investigation of these other problems for future work.[8]

[8] The latest version of [BMV17] closes the asymptotic gaps for sparse PCA and submatrix localization using a different modification of the second moment method; the problem for Gaussian mixtures remains open, to our knowledge.
The following is a proof sketch of our main lower bound theorem (Theorem 2.3) using the noise conditioning method. As discussed in Section 2.1, if the second moment is bounded as then is contiguous to and so it is impossible to reliably distinguish the two distributions. Taking to be the spiked tensor model and the corresponding unspiked model, the second moment can be computed to be
(2) 
where are drawn independently from the prior . The standard approach used by previous work is to next apply the Gaussian moment-generating function to compute the expectation over in closed form, yielding
(3) 
where . Our approach is instead to use the fact that (as discussed in Section 2.1) we can change to that excludes low-probability bad events, without affecting contiguity. Specifically, we take to condition on (informally) , which is a high-probability event under . Now, instead of (2), the second moment can be computed to be
which can be computed to be
(4) 
where again . Note that when the contribution from dominates, this gives a factor of advantage on compared to (3). This explains the gaps of present in prior work, and allows us to close them.
The final step is to bound (4) using the rate function of . Roughly, the rate function gives us the tail bound . To use this, we write (4) as a tail bound integral and then apply a change of variables:
where  
Note that this would be bounded as if we had , which gives condition (iii) in Theorem 2.3. Note however that this cannot be satisfied at (since ), so our proof will require extra work in order to handle the interval. This is where the local subgaussianity condition will be used.
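The Laplace-method heuristic behind this condition can be checked numerically. For illustration we take an assumed exponent g(t) = λ²tᵖ/2 (with p = 4) and the spherical-type rate f(t) = −½ log(1 − t²); both explicit forms are our choices here, made to mimic the structure of the bound above. The integral ∫₀¹ exp(n(g(t) − f(t))) dt then stays bounded in n when f dominates g away from 0, and grows geometrically in n otherwise:

```python
import numpy as np

def second_moment_integral(lam, n, p=4):
    """Trapezoid estimate of int_0^1 exp(n * (g(t) - f(t))) dt, with
    g(t) = lam^2 t^p / 2 and the illustrative rate f(t) = -log(1-t^2)/2."""
    t = np.linspace(1e-6, 0.999, 200_000)
    expo = n * (lam**2 * t**p / 2 + 0.5 * np.log(1 - t**2))
    y = np.exp(expo)
    return float(np.sum((y[1:] + y[:-1]) / 2) * (t[1] - t[0]))

# Below the critical lam (g - f negative away from 0): bounded in n.
below = [second_moment_integral(1.2, n) for n in (60, 120)]
# Above the critical lam: grows like exp(n * max(g - f)).
above = [second_moment_integral(2.0, n) for n in (60, 120)]
print(below[1] / below[0], above[1] / above[0])
```

Near t = 0 both g and f vanish, but g(t) = Θ(tᵖ) is dominated by the quadratic behavior of f, which is exactly the role the local subgaussianity condition plays in the proof.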
3 Results for the spherical prior
In this section we discuss lower and upper bounds for the spherical prior , the uniform prior on the unit sphere.
3.1 Rate function
Lemma. The spherical prior has rate function and is locally subgaussian.
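This rate can be verified numerically. For x uniform on the unit sphere, the overlap with a fixed unit vector has density proportional to (1 − s²)^((n−3)/2), which suggests the rate −½ log(1 − t²); we take that explicit form as our working assumption below and check that the empirical rate −(1/n) log P(overlap ≥ t) approaches it as n grows:

```python
import numpy as np

def log_tail(n, t):
    """log P(<x, e1> >= t) for x uniform on the unit sphere in R^n,
    computed from the overlap density proportional to (1-s^2)^((n-3)/2)."""
    s = np.linspace(-1 + 1e-9, 1 - 1e-9, 400_000)
    logw = (n - 3) / 2 * np.log1p(-s**2)
    w = np.exp(logw - logw.max())          # rescale for numerical stability
    ds = s[1] - s[0]
    return float(np.log(w[s >= t].sum() * ds / (w.sum() * ds)))

t = 0.3
f = -0.5 * np.log(1 - t**2)               # assumed rate: -1/2 log(1 - t^2)
rates = [-log_tail(n, t) / n for n in (200, 800, 3200)]
print(rates, f)   # empirical rates should approach f from above
```

Note that this rate blows up as t → 1, in line with the general continuous-versus-discrete dichotomy discussed in Section 2.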
3.2 Lower bound
We can now complete the lower bound portion of Theorem 1.2. From the discussion above together with Theorem 2.3, we have the following. Theorem. Consider the spherical prior with . Let be the supremum of all values for which
Then weak detection and weak recovery are impossible for all . Note that the case is already well-understood: strong detection and weak recovery are possible when and impossible when [MRZ15, PWBM16], matching the eigenvalue transition. (Weak recovery is possible for all simply by inspecting the trace of the matrix.) In Appendix A we establish the following asymptotics as :
3.3 Upper bound
In this section we prove an upper bound showing that for sufficiently large , detection and recovery are possible in the spherically-spiked tensor model. Indeed, our upper bound will apply to any prior supported on the unit sphere. Recall that the injective norm of a tensor is defined as . As noted by [RM14], as soon as the injective norm of the spike exceeds that of the noise, detection and recovery are possible. We improve on this bound, showing that the injective norm of the spiked model exceeds that of the unspiked model at a lower threshold. This is achieved by studying how perturbations away from the spike can line up with fluctuations in the noise to achieve a large injective norm.
For each , the precise limit value (in probability, as ) of has been non-rigorously computed using the replica method [CS92]; following [RM14], we refer to this limit value as (although our normalization differs from theirs). This was later proved rigorously, first only for even values of [Tal06a, ABČ13] (see [RM14] for a summary) and later for all [Sub15].
[CS92, Tal06a, ABČ13, RM14, Sub15] Fix . Define where is the unique solution to
(5) 
With probability as we have
Now suppose is a spiked tensor. Clearly if then we will have with probability and so strong detection is possible by thresholding the injective norm. However, recall that for matrices () this is not tight: yet detection is possible for all (see Theorem 1.1). Our result will imply that for all it remains true that the detection threshold is strictly less than , but the gap between them shrinks as . The argument simply harnesses the result in a “black box” fashion (which, perhaps surprisingly, gives quite a good upper bound; see Figure 1). We will first give a lower bound on the injective norm of a spiked tensor, stronger than the trivial lower bound of . This easily implies results for strong detection and weak recovery (Corollary 3.3 below).
Fix and . Let be any prior supported on the unit sphere in . Let be a spiked tensor drawn from . With probability as we have
where
(6) 
Proof.
By Gaussian spherical symmetry we can assume without loss of generality that the spike is , the first standard basis vector; thus we write with . Writing a general unit vector as , where is a unit vector, we expand to write
where is the subtensor of for which the first indices are not and the remaining indices are .
Note that the are independent. We will choose to optimize the interactions with and , independently from all other . It follows that for all , the terms are as with high probability, and we have
(7) 
Specifically, let us take to be the top eigenvector of the auxiliary spiked matrix
for some parameter to be optimized later. Since has norm converging to 1 in probability and is a Gaussian Wigner matrix, the classical eigenvalue transition for spiked Wigner matrices (Theorem 1.1) implies that the top eigenvalue converges almost surely to (as ), and the top eigenvector is correlated with the normalized spike as almost surely [Péc06, FP07, CDMF09, BGN11]. Noting that
we solve for to obtain and plug into (7).
To obtain the strongest possible bound, we maximize this expression over all and . We can optimize in closed form for
The result as stated now follows by simple algebra. ∎
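The spiked-Wigner facts invoked in the proof — top eigenvalue converging to θ + 1/θ and squared eigenvector correlation converging to 1 − 1/θ² for a spike of strength θ > 1 — are standard and easy to sanity-check numerically. The normalization below (off-diagonal variance 1/n, so the spectral bulk fills [−2, 2]) is our assumption for this check:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 1500, 2.0

# Gaussian Wigner matrix normalized so the semicircle bulk is [-2, 2].
A = rng.normal(size=(n, n)) / np.sqrt(n)
W = (A + A.T) / np.sqrt(2)

v = rng.normal(size=n)
v /= np.linalg.norm(v)
M = W + theta * np.outer(v, v)      # rank-one spike above the transition

evals, evecs = np.linalg.eigh(M)    # eigenvalues in ascending order
top_eval = evals[-1]
corr2 = np.dot(evecs[:, -1], v) ** 2

print(top_eval, theta + 1 / theta)  # eigenvalue emerges at theta + 1/theta
print(corr2, 1 - 1 / theta**2)      # squared correlation of top eigenvector
```

For θ ≤ 1 the top eigenvalue instead sticks to the bulk edge at 2 and the correlation vanishes, which is the matrix phenomenon the tensor argument above exploits.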
Fix and . Let be any prior supported on the unit sphere in . Let be given by Theorem 3.3 and let be defined as in Theorem 3.3. If then strong detection and weak recovery are possible for .
Proof.
If we have with probability as . If instead is unspiked, we have . It follows that strong detection is possible by thresholding .
To perform weak recovery given , output the unit vector maximizing . With probability we have
and so , which is bounded above 0 as (by assumption). Therefore weak recovery is possible. ∎
To prove the upper bound portion of Theorem 1.2, we let be the point at which implies . Note that by considering in Theorem 3.3, we obtain that the injective norm of a spiked tensor is at least the size of the spike (i.e. ), as noted in [RM14]. However, the derivative of (6) at is , implying a strict separation: and so . Therefore, for any it is possible to distinguish the spiked and unspiked models for some strictly less than the injective norm of a random tensor.
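To illustrate detection by thresholding the injective norm, here is a rough experiment on a small order-3 instance. The normalization Y = λ x⊗x⊗x + G/√n (with G an i.i.d. standard Gaussian tensor) and all parameter choices are ours and may differ from the paper's by constants, and multi-restart power iteration gives only a heuristic lower bound on the injective norm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, restarts, iters = 20, 15.0, 300, 60

def inj_norm_lower_bound(T):
    """Heuristic lower bound on max over unit u of <T, u x u x u>,
    via tensor power iteration from random restarts."""
    best = -np.inf
    for _ in range(restarts):
        u = rng.normal(size=n)
        u /= np.linalg.norm(u)
        for _ in range(iters):
            g = np.einsum('ijk,j,k->i', T, u, u)
            nrm = np.linalg.norm(g)
            if nrm == 0:
                break
            u = g / nrm
        best = max(best, float(np.einsum('ijk,i,j,k->', T, u, u, u)))
    return best

x = rng.normal(size=n)
x /= np.linalg.norm(x)
noise = rng.normal(size=(n, n, n)) / np.sqrt(n)
spiked = lam * np.einsum('i,j,k->ijk', x, x, x) + noise

val_spiked = inj_norm_lower_bound(spiked)
val_null = inj_norm_lower_bound(noise)
print(val_spiked, val_null)   # spiked value near lam; null value O(1)
```

At this (deliberately large) λ the two values separate cleanly; the content of the theorems above is that separation in fact occurs strictly before λ reaches the injective norm of the pure noise tensor.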
4 Results for discrete priors
In contrast to the upper and lower bounds discussed in the previous section for the spherical prior, which diverge with , the statistical thresholds for discrete priors tend to remain bounded as . This dichotomy is somewhat surprising given that the spherical and Rademacher thresholds agree exactly () for . In this section we present some general and some specific results confirming this phenomenon. In particular, we establish upper and lower bounds under the Rademacher and sparse Rademacher priors, which match in either of the limits and (recall that is the density of nonzeros). We thus establish Theorems 1.2, 1.2, and 1.2.
4.1 Upper bounds
A natural approach to hypothesis testing is the MLE (maximum likelihood estimate), the maximum of the tensor form over the support of the prior. For discrete priors of exponential cardinality (including Rademacher and sparse Rademacher), we can control this via a naïve union bound to obtain positive results for both detection and recovery (similar to [BMVX16]). Note however that this only yields inefficient procedures for detection and recovery, since they require exhaustive search over all possible spikes. Proposition. Let be a prior supported on the unit sphere in and with exponential support:
For any , when , there exists a hypothesis test distinguishing from with probability of error as (i.e. strong detection is possible) and furthermore there exists an estimator that achieves nontrivial correlation with the spike (i.e. weak recovery is possible).
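The exhaustive-search statistic is easy to simulate at tiny sizes. Assuming the illustrative normalization Y = λ x⊗x⊗x + G/√n with a Rademacher spike (all parameters below are our choices), the maximum of the tensor form over all 2ⁿ sign vectors concentrates near √(2 log 2) ≈ 1.18 under the null, so a spike of larger strength is detected:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, lam = 12, 3.0

def mle_stat(Y):
    """max over v in {+-1/sqrt(n)}^n of <Y, v x v x v>, by brute force."""
    best = -np.inf
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        v = np.array(signs) / np.sqrt(n)
        best = max(best, float(np.einsum('ijk,i,j,k->', Y, v, v, v)))
    return best

x = rng.choice((-1.0, 1.0), size=n) / np.sqrt(n)
noise = rng.normal(size=(n, n, n)) / np.sqrt(n)
spiked = lam * np.einsum('i,j,k->ijk', x, x, x) + noise

stat_spiked = mle_stat(spiked)
stat_null = mle_stat(rng.normal(size=(n, n, n)) / np.sqrt(n))
print(stat_spiked, stat_null)   # spiked stat near lam; null stat much smaller
```

Of course this search takes 2ⁿ time, which is exactly the sense in which the proposition is non-constructive from an algorithmic standpoint.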
Proof.
Given a tensor , consider the maximum likelihood statistic
(8) 
We will show that thresholding suitably yields a hypothesis test that distinguishes the spiked and unspiked models with