metadata (dict) | paper (dict) | review (dict) | citation_count (int64: 0) | normalized_citation_count (int64: 0) | cited_papers (listlengths: 0) | citing_papers (listlengths: 0)
---|---|---|---|---|---|---
{
"id": "a_2FNuSxo7F",
"year": null,
"venue": "CoRR 2020",
"pdf_link": "http://arxiv.org/pdf/2009.13500v1",
"forum_link": "https://openreview.net/forum?id=a_2FNuSxo7F",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A priori estimates for classification problems using neural networks",
"authors": [
"Weinan E",
"Stephan Wojtowytsch"
],
"abstract": "We consider binary and multi-class classification problems using hypothesis classes of neural networks. For a given hypothesis class, we use Rademacher complexity estimates and direct approximation theorems to obtain a priori error estimates for regularized loss functionals.",
"keywords": [],
"raw_extracted_content": "arXiv:2009.13500v1 [stat.ML] 28 Sep 2020A PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS USING\nNEURAL NETWORKS\nWEINAN E AND STEPHAN WOJTOWYTSCH\nAbstract. We consider binary and multi-classclassification problems using hypothesis classes\nof neural networks. For a given hypothesis class, we use Rade macher complexity estimates\nand direct approximation theorems to obtain a priori error e stimates for regularized loss func-\ntionals.\n1.Introduction\nMany of the most prominent successes of neural networks have b een in classificationproblems,\nand many benchmark problems for architecture prototypes and o ptimization algorithms are on\ndata sets for image classification. Despite this situation, theoretic al results for classification are\nscarce compared to the more well-studied field of regression proble ms and function approxima-\ntion. In this article, we extend the a priori error estimates of [EMW1 8] for regression problems\nto classification problems.\nCompared with regression problems for which we almost always use sq uare loss, there are\nseveral different common loss functions for classification problems , which have slightly different\nmathematical and geometricproperties. For the sake of simplicity, we restrict ourselvesto binary\nclassification in the introduction.\n(1) Square loss can be used also in classification problems with a targe t function that takes\na discrete set of values on the different classes. Such a function co incides with a Barron\nfunction P-almost everywhere if the data distribution is such that the differen t classes\nhave positive distance. In this setting, regression and classificatio n problems are indis-\ntinguishable, and the estimates of [EMW18] (in the noiseless case) ap ply. We therefore\nfocus on different models in this article.\n(2) More often, only one-sided L2-approximation(square hinge loss) or one-sided L1-approx-\nimation (hinge loss) is considered, as we only need a function to be larg e positive/large\nnegative on the different classes with no specific target value. This s etting is similar\ntoL2-approximation since minimizers exist, which leads to basically the same a priori\nestimates as for L2-regression.\nOn the other hand, the setting is different in the fact that minimizers are highly non-\nunique. In particular, anyfunction which is sufficiently large and has the correct sign on\nboth classes is a minimizer of these risk functionals. No additional reg ularity is encoded\nin the risk functional.\n(3) Loss functionals of cross-entropy type also encourage func tions to be large positive or\nnegative on the different classes, but the loss functions are strict ly positive on the whole\nreal line (with exponential tails). This means that minimizers of the los s functional do\nnot exist, causing an additional logarithmic factor in the a priori est imates.\nDate: September 29, 2020.\n2020 Mathematics Subject Classification. 68T07, 41A30, 65D40, 60-08.\nKey words and phrases. Neural network, binary classification, multi-label classi fication, a priori estimate,\nBarron space.\n1\n2 WEINAN E AND STEPHAN WOJTOWYTSCH\nOn the other hand, these risk functionals regularize minimizing seque nces. In a higher\norder expansion, it can be seen that loss functions with exponentia l tails encourage\nmaximum margin behaviour, i.e. they prefer the certainty of classific ation to be as high\nas possible uniformly over the different label classes. This is made more precise below.\nThe article is organized as follows. 
In Section 2, we study binary class ification using abstract\nhypothesis classes and neural networks with a single hidden layer in t he case that the two\nlabelled classes are separated by a positive spatial distance. After a brief discussion of the links\nbetween correct classification and risk minimization (including the implic it biases of different\nrisk functionals), we obtain a priori estimates under explicit regular ization. In particular, we\nintroduce a notion of classification complexity which takes the role of the path-norm in a priori\nerror estimates compared to regression problems.\nThe chief ingredients of our analysis are estimates on the Rademach er complexity of two-\nlayer neural networks with uniformly bounded path-norm and a dire ct approximation theorem\nfor Barron functions by finite neural networks. We briefly illustrat e how these ingredients can\nbe used in other function classes, for example those associated to multi-layer networks or deep\nresidual networks in Section 2.5. The most serious omission in this art icle are convolutional\nneural networks, since we are not aware of a corresponding func tion space theory.\nThe analysis is extended to multi-label classification in Section 3, and t o the case of data sets\nin which different classes do not have a positive spatial separation in S ection 4. The setting\nwhere minimizers do not exist is also studied in the case of general L2-regression in Appendix A.\n1.1.Notation. We denote the support of a Radon measure µby sptµ. Ifφisµmeasurable\nand locally integrable, we denote by φ·µthe measure which has density φwith respect to µ. If\nΦ :X→Yis measurable and µis a measure on X, we denote by Φ ♯µthe push-forward measure\nonY. If (X,d) is a metric space, and A,B⊆X, we denote the distance between AandBby\ndist(A,B) = inf x∈A,x′∈Bd(x,x′) and dist(x,A) := dist( {x},A).\nAll measures on Rdare assumed to be defined on the Borel σ-algebra (or the larger σ-algebra\nwhich additionally contains all null sets).\n2.Complexity of binary classification\n2.1.Preliminaries. In this section, we introduce the framework in which we will consider c las-\nsification problems.\nDefinition 2.1. Abinary classification problem is a triple ( P,C+,C−) wherePis a probability\ndistribution on RdandC+,C−⊆Rdare disjoint P-measurable sets such that P(C+∪C−) = 1.\nThecategory function\ny:Rd→R, y x=\n\n1x∈C+\n−1x∈C−\n0 else\nisP-measurable.\nDefinition 2.2. We say that a binary classification problem is solvable in a hypothesis class H\nofP-measurable functions if there exists h∈ Hsuch that\n(2.1) yx·h(x)≥1P−almost everywhere .\nWe consider the closure of the classes C±, which is slightly technical since the classes are\ncurrently only defined in the P-almost everywhere sense. Let\nC+=/intersectiondisplay\nA∈C+A,C+=/braceleftbig\nA⊆Rd/vextendsingle/vextendsingleAis closed and y≤0P−a.e. outside of A/bracerightbig\n.\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 3\nThe closure of C−is defined the same way with −yin place ofy.\nLemma 2.3. Assume that every function in His Lipschitz continuous with Lipschitz constant\nat mostL>0. If(P,C+,C−)is solvable in H, then\n(2.2) dist( C+,C−)≥2\nL.\nProof.Considerh∈ Hsuch thath≥1P-almost everywhere on C+. By continuity, we expect\nh≥1 onC+. In the weak setting, we argue as follows: Since his continuous, we find that\nh−1[1,∞) is closed, and since y≤0P-almost everywhere outside of h−1[1,∞), we find that\nC+⊆h−1[1,∞). 
The same argument holds for C−andC−.\nIf (P,C+,C−) is solvable in H, then there exists a function h∈ Hsuch thatyxh(x)≥1\nP-almost everywhere, so in particular h≥1 onC+andh≤ −1 onC−. SincehisL-Lipschitz,\nwe find that for any x∈C+andx′∈C−we have\n2 =/vextendsingle/vextendsingleh(x)−h(x′)/vextendsingle/vextendsingle≤L|x−x′|.\nThe result follows by taking the infimum over x∈C+andx′∈C−. /square\nMany hypothesis classes Hare further composed of sub-classes of different complexities (e.g .\nby norm in function spaces or by number of free parameters). In t hat situation, we can quantify\nsolvability.\nDefinition 2.4. Let{HQ}Q∈(0,∞)be a family of hypothesis classes such that Q<Q′⇒ HQ⊆\nHQ′. IfH=/uniontext\nQ>0HQ, we say that a binary classification problem ( P,C1,C2) issolvable in H\nwith complexity Qif the problem is solvable in HQ+εfor everyε>0. We denote\n(2.3) Q=QH(P,C1,C2) = inf/braceleftbig\nQ′>0 : (P,C1,C2) is solvable in HQ′/bracerightbig\n.\nClearly, a problem is solvable in Hif and only if it is solvable in HQfor sufficiently large Q,\nthus if and only if QH<∞.\nExample 2.5.IfHQis the space of functions which are Lipschitz-continuous with Lipsch itz-\nconstantQ, then the lower bound (2.2) is sharp as\nh(x) =dist(x,C−)−dist(x,C+)\nδ\nis Lipschitz-continuous with Lipschitz-constant ≤2/δand satisfies h≥1 onC+,h≤ −1 onC−.\nRemark 2.6.Assume that his a hypothesis class of continuous functions. If Pn=1\nn/summationtextn\ni=1δxi\nis the empirical measure of samples drawn from P, then the complexity of ( Pn,C−,C+) is lower\nthan that of ( P,C−,C+) since a function h∈ Hwhich solves yx·h(x)≥1P-almost everywhere\nalso satisfies yx·h(x)≥1Pn-almost everywhere (with probability 1 over the sample points\nx1,...,x n).\n2.2.Two-layer neural networks. Let us consider classification using two-layer neural net-\nworks(i.e. neuralnetworkswith asingle hidden layer). We considert he ReLUactivationfunction\nσ(z) = max{z,0}.\nDefinition 2.7 (Barron space) .Letπbe a probability measure on Rd+2. We setfπ:Rd→R,\nfπ(x) =E(a,w,b)∼π/bracketleftbig\naσ(wTx+b)/bracketrightbig\n,(a,w,b)∈R×Rd×R,\nassuming the expression is well-defined. We consider the norm\n/ba∇dblf/ba∇dblB= inf/braceleftbig\nE(a,w,b)∼π/bracketleftbig\n|a|(|w|+|b|)/bracketrightbig\n:f≡fπ/bracerightbig\nand introduce Barron space\nB={f:Rd→R:/ba∇dblf/ba∇dblB<∞}.\n4 WEINAN E AND STEPHAN WOJTOWYTSCH\nRemark 2.8.For ReLU-activated networks, the infimum in the definition of the no rm is attained\n[EW20c]. To prove this, one can exploit the homogeneity of the activa tion function and use a\ncompactness theorem in the space of (signed) Radon measures on the sphere.\nRemark2.9.LikethespaceofLipschitz-continuousfunctions, Barronspaceit selfdoesnotdepend\non the norm on Rd, but when the norm on Rdis changed, also the Barron norm is replaced by\nan equivalent one. Typically, we consider Rdto be equipped with the ℓ∞-norm. In general, the\nnorm on parameter space ( w-variables) is always as dual to the one on data space ( x-variables)\nsuch that |wTx| ≤ |w||x|. Usually, this means that the ℓ1-norm is considered on parameter\nspace.\nThe only result in this article which depends on the precise choice of no rm on data space is\nthe Rademacher complexity estimate in Lemma 2.34, which can be trac ed also into the a priori\nerror estimates. Occasionally in the literature, both data and para meter space are equipped\nwith the Euclidean norm. 
In that case, the results remain valid and mild ly dimension-dependent\nconstants log(2 d+2) can be eliminated.\nRemark 2.10.Like other function spaces, Barron spaces depend on the domain u nder considera-\ntion. Enforcing the equality f=fπonRdis often too rigid and leads to meaningless assignments\na large portion of data space. More commonly, we only require that f=fπP-almost everywhere\n(wherePdescribesthedatadistribution)orequivalently f≡fπonsptP,e.g.in[EW20c,EW20a].\nThis is the smallest sensible definition of the Barron norm, since the infi mum is taken over the\nlargest possible class. If Pis a probability distribution and Kis a compact set, we may denote\n/ba∇dblf/ba∇dblB(P)= inf/braceleftbig\nE(a,w,b)∼π/bracketleftbig\n|a|(|w|+|b|)/bracketrightbig\n:f(x) =fπ(x) forP-a.e.x∈Rd/bracerightbig\n/ba∇dblf/ba∇dblB(K)= inf/braceleftbig\nE(a,w,b)∼π/bracketleftbig\n|a|(|w|+|b|)/bracketrightbig\n:f(x) =fπ(x) for allx∈K/bracerightbig\n.\nAlways considering B(P) leads to the smallest possible constants. Early works on Barron sp ace\nfocused on B([0,1]d) to estimate quantities uniformly over unknown distributions P[EMW19b,\nEMW19a]. This uniformity leads to greater convenience. For example , we do not need to\ndistinguish that f(x) = 1{x>1/2}is inB(P1) but not B(P2) whereP1=1\n2(δ1+δ0) andP2is the\nuniform distribution on (0 ,1).\nThe hypothesis class of Barron space is stratified as in Definition 2.4 b y\nB=/uniondisplay\nR>0BR\nwhereBRdenotes the closed ball of radius R>0 around the origin in Barron space. We show\nthat if the hypothesis class used for a classification problem is Barro n space, a positive distance\nbetween classes is not only necessary, but also sufficient for solvab ility. However, the necessary\ncomplexity may be polynomial in the separation and exponential in the dimension.\nTheorem 2.11. Assume that δ:= dist(C+,C−)>0and there exists R>0such that spt(P)⊆\nBR(0). Then there exists a f∈ Bsuch that\n/ba∇dblf/ba∇dblB≤cd/parenleftbiggR+δ\nδ/parenrightbiggd\n, fy ≡1P−a.e.\nwherecddepends only on the volume of the d-dimensional unit ball and the properties of a suitable\nmollifier.\nProof.We define\n˜y:Rd→R,˜y(x) =\n\n1 if dist( x,C+)<dist(x,C−) and|x|<R+δ\n−1 if dist(x,C−)<dist(x,C+) and|x|<R+δ\n0 if dist( x,C+) = dist(x,C−) or|x| ≥R+δ, f=ηδ∗˜y\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 5\nwhereηδis a standard mollifier supported on Bδ/2(0). We find that y·f≡1 onC−∪C+.\nAdditionally, the Fourier transform of fsatisfies the inequality\n/integraldisplay\nRd|ˆf||ξ|dξ=/integraldisplay\nRd|/hatwiderηδ∗˜y||ξ|dξ\n=/integraldisplay\nRd|/hatwideηδ||/hatwide˜y||ξ|dξ\n≤ /ba∇dbl/hatwide˜y/ba∇dblL∞/integraldisplay\nRd|δˆη1(δξ)| |ξ|dξ\n≤ /ba∇dbl˜y/ba∇dblL1(BR+δ)δ−d/integraldisplay\nRd|ˆη1(δξ)| |δξ|δddξ\n=cd/parenleftbiggR+δ\nδ/parenrightbiggd/integraldisplay\nRd|ˆη1||ξ|dξ,\nso by Barron’s spectral criterion [Bar93] we have\n/ba∇dblf/ba∇dblB≤/integraldisplay\nRd|ˆf||ξ|dξ≤cd/parenleftbiggR+δ\nδ/parenrightbiggd/integraldisplay\nRd|ˆη1||ξ|dξ.\n/square\nThe exponential dependence on dimension is a worst case scenario a nd not expected if we can\nidentify low-dimensional patterns in the classification problem. In pa rticularly simple problems,\neven the rate obtained in Lemma 2.3 is optimal.\nExample 2.12.IfC−⊆ {x:x1<−δ/2}andC+⊆ {x:x1>δ/2}, then\nf(x) =x\nδ=σ(eT\n1x)−σ(−eT\n1x)\nδ\nis a Barron function of norm 2 /δonRdwhich satisfies y·f≥1 onC+∪C−, soQB≤2\nδ.\nExample 2.13.Another example with a relatively simple structure are radial distribu tions. 
As-\nsume thatC+=λ·Sd−1andC−=µ·Sd−1for someλ,µ>0. The data distribution Pdoes not\nmatter, only which sets are P-null. For our convenience, we recognize the radial structure of t he\nproblem and equip Rdwith the Euclidean norm, both in the xandwvariables.\nWe assume that spt P=C+∪C−such that the inequality y·h≥1 has to be satisfied on the\nentire setC+∪C−. Consider the function\nfα,β(x) =α+β|x|=ασ(0)+cdβ/integraldisplay\nSd−1σ(wTx)π0(dw)\nwhereπ0denotes the uniform distribution on the sphere and cdis a normalizing factor. We\ncompute that\n/integraldisplay\nSd−1σ(wTx)π0(dw) =/integraltext1\n0w1(1−w2\n1)d−2dw1/integraltext1\n−1(1−w2\n1)d−2dw1|x|=1\n2d−2\n2+2\n√πΓ(d−2\n2+1)\nΓ(d−2\n2+3\n2)|x|=Γ((d+1)/2)√πdΓ(d/2)|x|\nThe normalizing factor satisfies\ncd=√πdΓ(d/2)\nΓ((d+1)/2)=√\n2πd+O/parenleftBig\nd−1/2/parenrightBig\n.\nThus for sufficiently large dwe have /ba∇dblfα,β/ba∇dblB≤ |α|+√\n2πd+1|β|and\nfα,β=α+βµonC−, f α,β≡α+λµonC+.\n6 WEINAN E AND STEPHAN WOJTOWYTSCH\nThusfα,βsatisfiesy·fα,β≥1P-almost everywhere if and only if\n/braceleftBigg\nα+βµ≤ −1\nα+βλ≥1⇒/braceleftBigg\nβ(λ−µ)≥2\nα+βλ≥1.\nIf we assume that λ>µ, we recognize that R=λandδ=λ−µ, so we can choose\nβ=2\nλ−µ=2\nδ, α= 1−2λ\nλ−µ= 1−2R\nδ\nand obtain that\nQB(P,C+,C−)≤1+2R\nδ+4√\n2πd+1\nδ.\nA mild dimension-dependence is observed, but only in the constant, n ot the dependence on δ. If\nthe data symmetry is cylindrical instead (i.e. C±=λ±·Sk−1×Rd−k), the constant√\ndcan be\nlowered to√\nkby considering a Barron function which only depends on the first kcoordinates.\nRemark 2.14.The fact that cylindrical structures can be recognized with lower n orm than radial\nstructures is an advantage of shallow neural networks over isotr opic random feature models.\nIf the data distribution is radial, but the classes are alternating (e.g .C−=Sd−1∪3·Sd−1\nandC+= 2·Sd−1), it may be useful to use a neural network with at least two hidden la yers,\nthe first of which only needs to output the radial function |x|. A related observation concerning\na regression problem can be found in [EW20c, Remark 5.9]. Classificatio n using deeper neural\nnetworks is discussed briefly in Section 2.5.\nOn the other hand, in general the complexity of a classification prob lem may be exponentially\nlarge in dimension.\nExample 2.15.Forδ=1\nN, letP=1\n(2N+1)d/summationtext\nx∈{−N,...,N}dδx/Nbe the uniform distribution\non the grid ( δZ)dinside the hypercube [ −1,1]d. In particular, Pis a probability distribution\non [−1,1]d. Since any two points in the grid on which Pis supported have distance at least\nδ= 1/N, we find that for any of the (2 N+1)dbinary partitions of spt P, the distance between\nthe classes is δ. We use Rademacher complexity to show that at least one of these p artitions has\nclassification complexity at least\n(2.4) Q≥/parenleftbigg2+δ\nδ/parenrightbiggd/2\nin Barron space. A more thorough introduction to Rademacher com plexity is given below in\nDefinition 2.33 and Lemma 2.34 for further background, but we pres ent the proof here in the\nmore natural context.\nLetQbe such that for any category function y: sptP→ {−1,1}, there exists h∗such that\nh∗(x)·yx≥1 for allx∈sptPand/ba∇dblh∗/ba∇dblB≤Q. As usual, HQdenotes the ball of radius Q>0\nin Barron space. Furthermore, we denote by ψ:R→Rthe 1-Lipschitz function which is −1 on\n(−∞,−1] and 1 on [1 ,∞). By assumption, the hypothesis class FQ={ψ◦h:h∈ HQ}coincides\nwith the class F∗of all function h: sptP→ {−1,1}. Finally, we abbreviate nd= (2N+1)dand\nsptP={x1,...,x nd}. 
Ifξare iid random variables which take values ±1 with equal probability,\nthen\n1 =Eξ/bracketleftBigg\nsup\nf∈F∗1\nndnd/summationdisplay\ni=1ξif(xi)/bracketrightBigg\n= Rad(F∗,sptP)\n= Rad(FQ,sptP)\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 7\n≤Rad(HQ,sptP)\n≤Q√nd,\nwhere we used the contraction lemma for Rademacher complexities, [SSBD14, Lemma 26.9].\nA particularly convenient property in Barron space is the fact that a finitely parametrized\nsubset has immense approximation power, as formalized below. The s calar-valued L2-version\nof the following result was proved originally in [EMW18] and a weaker ver sion in [Bar93]. The\nproof goes through also in the Hilbert space-valued case and with th e slightly smaller constant\nclaimed below.\nTheorem 2.16 (Direct Approximation Theorem, L2-version) .Letf∗∈ Bkbe a vector-valued\nBarron function, i.e.\nf∗(x) =E(a,w,b)∼π/bracketleftbig\naσ(wTx+b)/bracketrightbig\nwhereπis a parameter distribution on Rk×Rd×Rand\n/ba∇dblf∗/ba∇dblB=E(a,w,b)∼π/bracketleftbig\n|a|ℓ2/parenleftbig\n|w|+|b|/parenrightbig/bracketrightbig\n.\nLetPbe a probability distribution such that spt(P)⊆[−R,R]d. Then there exists fm(x) =\n1\nm/summationtextm\ni=1aiσ(wT\nix+bi)such that\n(2.5)1\nmm/summationdisplay\ni=1|ai|ℓ2/bracketleftbig\n|wi|+|bi|/bracketrightbig\n≤ /ba∇dblf∗/ba∇dblB,/ba∇dblfm−f∗/ba∇dblL2(P)≤/ba∇dblf∗/ba∇dblBmax{1,R}√m.\nA similar result holds in the uniform topology on any compact set. A pro of in the scalar-\nvalued case can be found in [EMWW20, Theorem 12], see also [EMWW20, R emark 13]. The\nscalar version is applied component-wise below.\nTheorem 2.17 (Direct Approximation Theorem, L∞-version) .LetK⊂[−R,R]dbe a compact\nset inRdandf∗∈ Bka vector-valued Barron function. Under the same conditions as above,\nthere exists a two-layer neural network with kmparameters ˜fm(x) =1\nkm/summationtextkm\ni=1˜aiσ(˜wT\nix+˜bi)\nsuch that\n(2.6)1\nkmkm/summationdisplay\ni=1|ai|ℓ2/bracketleftbig\n|wi|+|bi|/bracketrightbig\n≤ /ba∇dblf∗/ba∇dblB,/ba∇dblfm−f∗/ba∇dblC0(K)≤ /ba∇dblf∗/ba∇dblBmax{1,R}/radicalbigg\nk(d+1)\nm.\nRemark2.18.Theorem2.17admitsaslightdimension-dependentimprovementto /ba∇dblfm−f∗/ba∇dblL∞≤\nClog(m)\nm1/2+1/dat the expense of a less explicit constant [Mak98].\nIn the context of classification problems, this gives us the following im mediate application.\nCorollary 2.19 (Mostlycorrectclassificationandcorrectclassification) .Letm∈Nand(P,C+,C−)\na binary classification problem which can be solved in Barron space with complexity Q >0. If\nspt(P)⊆[−R,R]d, there exists a two-layer neural network hmwithmneurons such that\nP/parenleftbig/braceleftbig\nx∈Rd:yx·hm(x)<0/bracerightbig/parenrightbig\n≤Q2max{1,R}2\nm, (2.7)\nP/parenleftbig/braceleftbig\nx∈Rd:εx·hm(x)<1/2/bracerightbig/parenrightbig\n≤4Q2max{1,R}2\nm. (2.8)\nFurthermore, if m≥Q2max{1,R}2(d+ 1), there exists a two-layer neural network with m\nneurons such that P/parenleftbig/braceleftbig\nx∈Rd:yx·hm(x)<0/bracerightbig/parenrightbig\n= 0.\n8 WEINAN E AND STEPHAN WOJTOWYTSCH\nProof.Leth∈ Bsuch thath≥1 onC+andh≤ −1 onC−. Set Ω =C+∪C−and recall that\nP(Rd\\Ω) = 0. Let hmbe like in the direct approximation theorem. By Chebysheff’s inequality\nwe have\nP/parenleftbig/braceleftbig\nx∈Ω :yxhm(x)<0/bracerightbig/parenrightbig\n≤P/parenleftbig/braceleftbig\nx∈Ω :|hm(x)−h(x)|>1/parenrightbig\n≤/integraltext\nΩ|hm−h|2P(dx)\n1≤Q2max{1,R}2\nm.\nThe second inequality is proved analogolously. The correct classifica tion result is proved using\ntheL∞-version of the direct approximation theorem. 
□

Remark 2.20. The complexity Q of a binary classification problem in [−1,1]^d is a priori bounded by C_d δ^{−d}, where δ is the spatial separation between the two classes, d is the dimension of the embedding space, and C_d is a constant depending only on d. While this can be prohibitively large, all such problems are 'good' in the sense that we only need to double the number of parameters in order to cut the probability of the misclassified set in half in the a priori bound. While the constant of classification complexity is affected, the rate is not. This is different from the classical curse of dimensionality, where the relationship between the number of parameters m and the error ε would take a form like ε ∼ m^{−α/d} for α ≪ d instead of ε ∼ C m^{−1} with a potentially large constant.

2.3. Classification and risk minimization. Typically, we approach classification by minimizing a risk functional R. We focus on a specific type of loss functional
\[
\mathcal{R}(h) = \int_{\mathbb{R}^d} L\big(-y_x\, h(x)\big)\, \mathbb{P}(dx)
\]
where again P is the data distribution and y is the category function as above. The loss function L is assumed to be monotone increasing and sufficiently smooth, facilitating alignment between y and h. Furthermore, we assume that lim_{z→−∞} L(z) = 0 (which is equivalent to the assumption that L is lower-bounded, up to a meaningless translation).

Risk minimization can be seen as a proxy for classification due to the following observation:
\[
(2.9)\qquad \mathbb{P}\big(\{x : y_x h(x) < 0\}\big) \le \int_{\{x : y_x h(x) < 0\}} \frac{L(-y_x h(x))}{L(0)}\, \mathbb{P}(dx) \le \frac{1}{L(0)} \int_{\mathbb{R}^d} L\big(-y_x h(x)\big)\, \mathbb{P}(dx) \le \frac{\mathcal{R}(h)}{L(0)}.
\]
The most pressing question when minimizing a functional is whether minimizers exist.

Remark 2.21. In (2.9), we used that L ≥ 0. If L is not bounded from below, the loss functional may not encourage correct classification, since classification with high certainty on parts of the domain may compensate incorrect classification in others.

Remark 2.22. An interesting dynamic connection between risk minimization and correct classification is explored in [BJS20], where it is shown that (under conditions) for every ε > 0 there exists δ > 0 such that the following holds: if at time t_0 ≥ 0 a set of probability at least 1 − δ is classified correctly, and network weights are trained using gradient descent, then at all later times, a set of probability at least 1 − ε is classified correctly.

Example 2.23 (Hinge loss). Let L(y) = max{0, 1 + y}. Then in particular L(−y_x h(x)) = 0 if and only if y_x h(x) ≥ 1. Thus any function h which satisfies y_x h(x) ≥ 1 is a minimizer of R. Since any function which correctly classifies the data is also a risk minimizer (up to rescaling), no additional geometric regularity is imposed. In particular, a minimizer which transitions close to one class instead of well between the classes is just as competitive as a minimizer which gives both classes some margin. This may make minimizers of hinge loss more vulnerable to so-called adversarial examples, where small perturbations to the data lead to classification errors.

In this situation, we may search for a minimizer of the risk functional with minimal Barron norm to obtain stable classification. Such a minimizer may be found by including an explicit regularizing term as a penalty in the risk functional. This is the approach taken in this article, where we obtain a priori estimates for such functionals (a small numerical sketch of such a penalized functional follows below).
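To make the explicitly regularized functional concrete, here is a minimal NumPy sketch (not taken from the paper) of the empirical hinge risk of a finite two-layer ReLU network plus a path-norm penalty of the form (1/m) Σ_i |a_i|(|w_i|_1 + |b_i|), the Barron-norm surrogate that appears as a regularizer later in Section 2.4. The helper names, the toy data, and the choice λ = 1/√m are illustrative assumptions, not the authors' code.

```python
import numpy as np

def two_layer_relu(X, a, W, b):
    """f(x) = (1/m) * sum_i a_i * relu(w_i . x + b_i), evaluated for each row of X."""
    pre = X @ W.T + b                      # (n, m) pre-activations
    return np.maximum(pre, 0.0) @ a / len(a)

def path_norm(a, W, b):
    """(1/m) * sum_i |a_i| * (||w_i||_1 + |b_i|): a path-norm surrogate of the Barron norm
    (l1 norm on the weights, data measured in the l-infinity norm)."""
    return np.mean(np.abs(a) * (np.abs(W).sum(axis=1) + np.abs(b)))

def regularized_hinge_risk(X, y, a, W, b, lam):
    """Empirical hinge risk, L(z) = max(0, 1 + z) at z = -y * f(x), plus lam * path norm."""
    margins = y * two_layer_relu(X, a, W, b)
    return np.maximum(0.0, 1.0 - margins).mean() + lam * path_norm(a, W, b)

# Toy problem: two classes separated along the coordinate axes, labels +-1.
rng = np.random.default_rng(0)
n, m, d = 200, 64, 2
X = np.vstack([rng.uniform(0.5, 1.0, (n // 2, d)),
               rng.uniform(-1.0, -0.5, (n // 2, d))])
y = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])
a, W, b = rng.normal(size=m), rng.normal(size=(m, d)), rng.normal(size=m)
lam = 1.0 / np.sqrt(m)   # lambda ~ max{1, R} / sqrt(m), as in the regularized functionals below
print(regularized_hinge_risk(X, y, a, W, b, lam))
```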
Another a pproach is that of implicit\nregularization, which aims to show that certain training algorithms fin d low norm minimizers.\nThis requires a precise study of training algorithm and initialization.\nWe note that unlike loss functions with exponential tails (see below), hinge-loss does not have\nimplicit regularization properties encoded in the risk functional.\nWith different loss functions, minimizers generally do not exist. This is a mathematical\ninconvenience, but can introduce additional regularity in the proble m as we shall see.\nLemma 2.24. Assume that the following conditions are met:\n(1)L(z)>0for allz∈Randinfz∈RL(z) = lim z→−∞L(z) = 0.\n(2) The hypothesis class His a cone which contains a function hsuch thaty·h<0P-almost\neverywhere.\nThenRdoes not have any minimizers.\nProof.Note thatL(−λyxh(x)) is monotone decreasing in λ, so by the monotone convergence\ntheorem we have\nlim\nλ→∞R(λh) =/integraldisplay\nRdlim\nλ→∞L/parenleftbig\nλyxh(x)/parenrightbig\nP(dx) =L(−∞) = 0.\nOn the other hand, L(y)>0 for ally∈R, so a minimizer cannot exist. /square\nWhen considering minimizing sequences for such a risk functional, we e xpect the norm to\nblow up along the sequence. This motivates us to study the ‘profile at infinity’ of such risk func-\ntionals by separating magnitude and shape. Let Hbe the closed unit ball in a hypothesis space\nof Lipschitz-continuous function (e.g. Barron space). For simplicit y, assume that Lis strictly\nmonotone increasing and continuous, thus continuously invertible. We define the functionals\nRλ:H →(0,∞),Rλ(h) =R(λh) =/integraldisplay\nRdL/parenleftbig\n−λyx·h(x)/parenrightbig\nP(dx).\nTo understand what the limit of Rλis, it is convenient to normalize the functionals as\nL−1/parenleftbigg/integraldisplay\nRdL/parenleftbig\n−yx·h(x)/parenrightbig\nP(dx)/parenrightbigg\n.\nSinceLis strictly monotone increasing, we immediately find that\nλmin\nx∈spt(P)/parenleftbig\n−yx·h(x)/parenrightbig\n= min\nx∈spt(P)/parenleftbig\n−yx·λh(x)/parenrightbig\n=L−1/parenleftbigg/integraldisplay\nRdL/parenleftbigg\nmin\nx∈spt(P)−λyx·h(x)/parenrightbigg\nP(dx)/parenrightbigg\n≤L−1/parenleftbigg/integraldisplay\nRdL/parenleftbig\n−λyx·h(x)/parenrightbig\nP(dx)/parenrightbigg\n=L−1/parenleftbigg/integraldisplay\nRdL/parenleftbigg\nmax\nx∈spt(P)−yx·λh(x)/parenrightbigg\nP(dx)/parenrightbigg\n(2.10)\n≤max\nx∈spt(P)/parenleftbig\n−λyx·h(x)/parenrightbig\n=λmax\nx∈spt(P)/parenleftbig\n−yx·h(x)/parenrightbig\n.\nThus to consider the limit, the correct normalization is\nFλ:H →(0,∞),Fλ(h) =L−1/parenleftbig\nRλ(h)/parenrightbig\nλ=1\nλL−1/parenleftbigg/integraldisplay\nRdL/parenleftbig\n−λyx·h(x)/parenrightbig\nP(dx)/parenrightbigg\n.\n10 WEINAN E AND STEPHAN WOJTOWYTSCH\nNote the following:\n(2.11) h∈argmin\nh′∈¯B1Fλ(h′)⇔λh∈argmin\nh′∈¯BλR(h′).\nThus, if Fλconverges to a limiting functional F∞in the sense of Γ-convergence (such that\nminimizers of Fλconverge to minimizers of F∞), then studying F∞describes the behavior that\nminimizers of Rhave for very large norms. We call F∞themargin functional ofR.\nExample 2.25 (Power law loss) .Assume that yx·h(x)≥εforP-almost all xfor someε>0 and\nthatL(y) =|y|−βfor someβ >0 and ally≤ −µ<0. Then\nFλ(h) =−1\nλ/parenleftbigg/integraldisplay\nRd/vextendsingle/vextendsingle−λyx·h(x)/vextendsingle/vextendsingle−βP(dx)/parenrightbigg−1\nβ\n=−/parenleftbigg/integraldisplay\nRd/vextendsingle/vextendsingle−yx·h(x)/vextendsingle/vextendsingle−βP(dx)/parenrightbigg−1\nβ\nfor allλ>µ\nε. 
Thus ifLdecays to 0 at an algebraic rate, the margin functional\n(2.12) F∞= lim\nλ→∞Fλ=−/parenleftbigg/integraldisplay\nRd/vextendsingle/vextendsingleyx·h(x)/vextendsingle/vextendsingle−βP(dx)/parenrightbigg−1\nβ\nis anLp-norm for negative p(and in particular, an integrated margin functional). The limit\nis attained e.g. in the topology of uniform convergence (since the se quence is constant). In\nparticular, the loss functional and the margin functional coincide d ue to homogeneity.\nTakingβ→ ∞in a second step, we recover the integrated margin functionals con verge to the\nmore classical maximum margin functional . While /ba∇dblf/ba∇dblLp→ /ba∇dblf/ba∇dblL∞asp→ ∞, theLβ-“norm”\nfor large negative βapproaches the value closest to zero /ba∇dbl1/f/ba∇dblL∞¡ i.e.\nlim\nβ→∞/parenleftbigg\n−/integraldisplay\nRd/vextendsingle/vextendsingleyx·fπ(x)/vextendsingle/vextendsingle−βP(dx)/parenrightbigg−1\nβ\n=−min\nx∈sptPyx·fπ(x).\nMinimizing F∞corresponds to maximizing yx·h(x) uniformly over spt Punder a constraint on\nthe size of . Similar behaviour is attained with exponential tails.\nExample 2.26 (Exponential loss) .Assume that either\n(1)yx·h(x)≤0 forP-almost all xand thatL(y) = exp(y) for ally≤0 or\n(2)L(y) = exp(y) for ally∈R.\nThen\nlim\nλ→∞Fλ(h) = lim\nλ→∞/bracketleftbigg1\nλlog/parenleftbigg/integraldisplay\nRdexp(−λyxh(x))P(dx)/parenrightbigg/bracketrightbigg\n= max\nx∈sptP/parenleftbig\n−yx·h(x)/parenrightbig\n=−min\nx∈sptP(yxh(x)).\nThus the limiting functional here is the maximum margin functional . The pointwise limit may\nbe improved under reasonable conditions. More generally, we prove the following.\nLemma 2.27. LetLbe a function such that for every ε>0the functions\ngλ(z) =L−1(εL(λz))\nλ\nconverge to g∞(z) =zlocally uniformly on R(or on(0,∞)). ThenF∞(h) = min x∈sptPyx·h(x)\nis the maximum margin functional (at functions which classi fy all points correctly if the second\ncondition is imposed). If there exists a uniform neighbourh ood growth function\n(2.13) ρ: (0,∞)→(0,∞),P/parenleftbig\nBr(x)/parenrightbig\n≥ρ(r)∀r>0, x∈spt(P)\nthen the limit is uniform (and in particular in the sense of Γ-convergence).\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 11\nRemark 2.28.The same result can be proven in any function class which has a unifor m modulus\nof continuity (e.g. functions whose α-H¨ older constant is uniformly bounded for some α>0).\nThe uniform neighbourhood growth condition holds for example when Phas a reasonable\ndensity with respect to Lebesgue measure or the natural measur e on a smooth submanifold of\nRd, whenPis an empirical measure, and even when Pis the natural distribution on a self-similar\nfractal. A reasonable density is for example one whose decay to zer o admits some lower bound\nat the edge of its support.\nRemark 2.29.The exponential function satisfies the uniform convergence cond ition onRsince\nL−1(εL(λz))\nλ=log(εexp(λz))\nλ=log(ε)\nλ+z.\nThe function L(z) = log(1+exp( z)) satisfies the uniform convergencecondition on ( −∞,0) since\nL−1(z) = log(ez−1), L−1(εL(z)) = log/parenleftBig\neεlog(1+eλz)−1/parenrightBig\n= log/parenleftbig\n(1+eλz)ε−1/parenrightbig\nasλ→ ∞,e−λzbecomes uniformly close to 0 when zis bounded away from zero and the first\norder Taylor expansion\nL−1(εL(z))≈log(εe−λz) = log(ε)+λz\nbecomes asymptotically valid. This function has analytic advantages over exponential loss since\nthe exponential tail of the loss function is preserved, but the fun ction is globally Lipschitz-\ncontinuous.\nProof of Lemma 2.27. 
Recall that Fλ(h)≤maxx′∈sptP/bracketleftbig\n−yx′h(x′)/bracketrightbig\nindependently of L, so only\nthe inverse inequality needs to be proved.\nWe observe that max x′∈sptP[−yx′h(x′)] is bounded due to the Barron norm bound, so the\nlocally uniform convergence holds. Let r >0. Let ¯xbe a point where max x′∈sptP[−yx′h(x′)] is\nattained. Since all h∈ Hare 1-Lipschitz, we find that −yxh(x)≥maxx′∈sptP/bracketleftbig\n−yx′h(x′)/bracketrightbig\n−r\nfor allx∈Br(¯x). In particular\nFλ(h) =1\nλL−1/parenleftbigg/integraldisplay\nRdL/parenleftbig\n−λyx·h(x)/parenrightbig\nP(dx)/parenrightbigg\n≥1\nλL−1/parenleftBigg/integraldisplay\nBr(¯x)L/parenleftbig\n−λyx·h(x)/parenrightbig\nP(dx)/parenrightBigg\n≥1\nλL−1/parenleftbigg\nρ(r)L/parenleftbigg\nλmax\nx′∈sptP/bracketleftbig\n−yx′h(x′)/bracketrightbig\n−λr/parenrightbigg/parenrightbigg\n≥g∞/parenleftbig\n−yx′h(x′)−r/parenrightbig\n−eλ,r\nwhere lim λ→∞eλ,r= 0. Thus taking λto infinity first with a lowerlimit and r→0 subsequently,\nthe theorem is proved. /square\nThe connection between risk minimization and classificationhardness thus is as follows: When\nminimizing Rin a hypothesis space space, elements of a minimizing sequence will (af ter nor-\nmalization) resemble minimizers of the margin functional F∞. IfF∞is the maximum margin\nfunctional, then\nmin\nh∈HF∞(h) = min\nh∈Hmax\nx∈sptP/parenleftbig\n−yx·h(x)/parenrightbig\n=−max\nh∈Hmin\nx∈sptP/parenleftbig\nyx·h(x)/parenrightbig\ni.e. the minimizer of F∞maximizes min x∈sptP/parenleftbig\nyx·h(x)/parenrightbig\nin the unit ball of the hypothesis space.\nAssuming that\nmax\nh∈Hmin\nx∈sptP/parenleftbig\nyx·h(x)/parenrightbig\n>0\n12 WEINAN E AND STEPHAN WOJTOWYTSCH\nwe obtain that\nQB(P,C+,C−)≤Q⇔ ∃h∈BQ(0)⊆ Bs.t. min\nx∈C+∪C−yxh(x)≥1\n⇔ ∃h∈B1(0)⊆ Bs.t. min\nx∈C+∪C−yxh(x)≥1\nQ\n⇔min\nh∈HF∞(h)≤ −1\nQ\nwhere the dependence on P,C+,C−on the right hand side is implicit in the risk functional by\nthe choice of C±for the margin functional. In particular, exponential loss encodes a regularity\nmargin property which is not present when using hinge loss.\nRemark 2.30.Chizat and Bach show that properly initialized gradient descent train ing finds\nmaximum margin solutions of classification problems, if it converges to a limit at all [CB20].\nThe result is interpreted as an implicit regularization result for the tr aining mechanism. We\nmaintain that it is the margin functional instead which drives the shap es to maximum margin\nconfigurations, and that gradient flow merely follows the asymptot ic energy landscape. We\nremark that the compatibility result is highly non-trivial since the limit d escribed here is only\nofC0-type and since the map from the weight of a network to its realizatio n is non-linear.\n2.4.A priori estimates. In this section, we show that there exist finite neural networks of low\nrisk, with an explicit estimate in the number of neurons. Furthermor e, we prove a priori error\nestimates for regularized empirical loss functionals.\nLemma 2.31 (Functions of low risk: Lipschitz loss) .Assume that h∗∈ BandLhas Lipschitz\nconstant [L]. Then for every m∈Nthere exists a two-layer network hmwithmneurons such\nthat\n/ba∇dblhm/ba∇dblB≤ /ba∇dblh∗/ba∇dblBandR(hm)≤ R(h∗)+[L]·/ba∇dblh∗/ba∇dblBmax{1,R}√m.\nIn particular, if (P,C+,C−)has complexity ≤Qin Barron space, then for any λ>0there exists\na two-layer network hmwithmneurons such that\n(2.14) /ba∇dblhm/ba∇dblB≤λQ, R(hm)≤L(−λ)+[L]·λQmax{1,R}√m\nProof.Lethmsuch that /ba∇dblhm/ba∇dbl ≤ /ba∇dblh∗/ba∇dblBand/ba∇dblh∗−hm/ba∇dblL1≤/bardblh∗/bardblBmax{1,R}√m. 
Then\nR(hm) =/integraldisplay\nRdL(−yxhm(x))P(dx)\n=/integraldisplay\nRdL/parenleftbig\nyxh∗(x)+yx[h∗(x)−hm(x)]/parenrightbig\nP(dx)\n≤/integraldisplay\nRdL/parenleftbig\n−yxh∗(x)/parenrightbig\n+/vextendsingle/vextendsingleh∗(x)−hm(x)/vextendsingle/vextendsingleP(dx)\n=R(h∗)+/ba∇dblh∗−hm/ba∇dblL1(P).\nFor the second claim consider λh∗whereh∗∈ Bsuch that /ba∇dblh∗/ba∇dblB=Qandyx·h∗(x)≥1./square\nThe rate can be improved for more regular loss functions.\nLemma 2.32 (Functions of low risk: Smooth loss) .Assume that L∈W2,∞(R)and forλ>0\ndenoteδλ:= max{|L′|(ξ) :ξ <−λ}. Assume further that (P,C+,C−)has complexity ≤Qin\nBarron space. Then for any λ>0there exists a two-layer network hmwithmneurons such that\n(2.15) /ba∇dblhm/ba∇dblB≤λ/bracketleftbig\n1+δλ/bracketrightbig\nQ,R(hm)≤L(−λ)+[1+δλ]2/ba∇dblL′′/ba∇dblL∞(Q)(λQ)2max{1,R}2\n2m\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 13\nProof.Note that for z,z∗∈Rthe identity\nL(z)−L(z∗) =L′(z∗)(z−z∗)+/bracketleftbig\nL(z)−L(z∗)−L′(z∗)(z−z∗)/bracketrightbig\n=L′(z∗)(z−z∗)+/bracketleftbig\nL(z)−L(z∗)−L′(z∗)(z−z∗)/bracketrightbig\n=L′(z∗)(z−z∗)+/integraldisplayz\nz∗L′′(ξ)(z−ξ)dξ\nholds. Leth∗∈ B. For any function hm, note that\nR(hm)−R(h∗) =/integraldisplay\nRdL′(yx·h∗(x))yx(hm−h∗)(x)P(dx) +/integraldisplay\nRd/integraldisplayyx·hm(x)\nyx·h∗(x)L′′(ξ)(yxhm(x)−ξ)dξP(dx)\n≤/integraldisplay\nRdL′(yx·h∗(x))yx(hm−h∗)(x)P(dx) +/ba∇dblL′′/ba∇dblL∞(R)\n2/integraldisplay\nRd/vextendsingle/vextendsinglehm−h∗/vextendsingle/vextendsingle2(x)P(dx)\nNow let˜hmsuch that /ba∇dblhm/ba∇dbl ≤ /ba∇dblh∗/ba∇dblBand/ba∇dblh∗−hm/ba∇dblL2≤/bardblh∗/bardblBmax{1,R}√m. Then in particular\n|cm|:=/vextendsingle/vextendsingle/vextendsingle/vextendsingle/integraldisplay\nRdL′(yx·h∗(x))yx(hm−h∗)(x)P(dx)/vextendsingle/vextendsingle/vextendsingle/vextendsingle≤max\nx∈sptP/vextendsingle/vextendsingleL′(yx·h∗(x))/vextendsingle/vextendsingle/ba∇dblh∗/ba∇dblBmax{1,R}√m.\nNote thathm:=˜hm+cmis a two-layer neural network satisfying\n/ba∇dblhm/ba∇dblB≤ /ba∇dblh∗/ba∇dblB/bracketleftBigg\n1+max{1,R}maxx∈sptP/vextendsingle/vextendsingleL′(yx·h∗(x))/vextendsingle/vextendsingle\n√m/bracketrightBigg\n/ba∇dblhm−h∗/ba∇dblL2(P)≤ /ba∇dbl˜hm−h∗/ba∇dblL2(P)+|cm| ≤/bracketleftbigg\n1+ max\nx∈sptP/vextendsingle/vextendsingleL′(yx·h∗(x))/vextendsingle/vextendsingle/bracketrightbigg/ba∇dblh∗/ba∇dblBmax{1,R}√m.\nIn particular\nR(hm)≤ R(h∗)+/integraldisplay\nRdL′(yx·h∗(x))yx(hm−h∗)(x)P(dx) +/ba∇dblL′′/ba∇dblL∞(R)\n2/integraldisplay\nRd/vextendsingle/vextendsinglehm−h∗/vextendsingle/vextendsingle2(x)P(dx)\n≤ R(h∗)+0+/bracketleftbigg\n1+/ba∇dblL′′/ba∇dblL∞(R)\n2max\nx∈sptP/vextendsingle/vextendsingleL′(yx·h∗(x))/vextendsingle/vextendsingle/bracketrightbigg2/ba∇dblh∗/ba∇dbl2\nBmax{1,R}2\nm.\nThe Lemma followsby considering λh∗whereh∗∈ Bsuch that /ba∇dblh∗/ba∇dblB=Qandyx·h∗(x)≥1./square\nWe use this result to present the a priori error estimate for a regu larized model. As we want\nto derive an estimate with relevance in applications, we need to bound the difference between\nthe (population) risk functional and the empirical risk which we acce ss in practical situations\nby means of a finite data sample. A convenient tool to bound this disc repancy is Rademacher\ncomplexity.\nDefinition 2.33. LetHbe ahypothesisclassand S={x1,...,x N}adataset. 
TheRademacher\ncomplexity of Hwith respect to Sis defined as\n(2.16) Rad( H;S) =Eξ/bracketleftBigg\nsup\nh∈H1\nNN/summationdisplay\ni=1ξih(xi)/bracketrightBigg\n,\nwhereξiare iid random variables which take the values ±1 with probability 1 /2.\nThe Rademacher complexity is a convenient tool to decouple the size of oscillations from their\nsign by introducing additional randomness. It can therefore be us ed to estimate cancellations\nand is a key tool in controlling generalization errors.\n14 WEINAN E AND STEPHAN WOJTOWYTSCH\nLemma 2.34. [Bac17, EMW18] LetHbe the unit ball in Barron space. Then\nRad(H;S)≤2/radicalbigg\nlog(2d+2)\nN\nfor any sample set S⊆[−1,1]dwithNelements.\nRemark 2.35.The proof relies on the ℓ∞-norm being used for x-variables and thus the ℓ1-norm\nbeing used for w-variables. If the ℓ2-norm was used on both hypothesis classes instead, the factor\nlog(2d+2) could be replaced by a dimension-independent factor 1.\nThe proof can easily be modified to show that Rad( H;S)≤2 max{1,R}/radicalBig\nlog(2d+2)\nNif sptP⊆\n[−R,R]d. While the bound is not sharp and becomes dimension-independent as R→0, it does\nnot approach zero since the class of constant functions has non- trivial Rademacher complexity.\nTheuniform bound on the Rademacher complexity of Hwith respect to any training set in\nthe unit cube in particular implies a bound on the expectation. This est imate is particularly\nuseful for Lipschitz-continuous loss functions. Recall the followin g result.\nLemma 2.36. [SSBD14, Theorem 26.5] Assume that L(−yx·h(x))≤¯cfor allh∈ Hand\nx∈spt(P). Then with probability at least 1−δover the choice of set S∼Pnwe have\nsup\nh∈H/bracketleftBigg/integraldisplay\nRdL(−yx·h(x))P(dx)−1\nnn/summationdisplay\ni=1L(−yxi·h(xi))/bracketrightBigg\n≤2ES′∼PnRad(H;S′)+¯c/radicalbigg\n2 log(2/δ)\nn.\nWe consider the Barron norm as a regularizer for risk functionals. N ote that for a finite neural\nnetworkfm(x) =1\nm/summationtextm\ni=1aiσ(wT\nix+bi), the estimates\nm/ba∇dblfm/ba∇dblB≤m/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\n≤1\n2m/summationdisplay\ni=1/bracketleftbig\n|ai|2+|wi|2+|ai|2+|bi|2/bracketrightbig\n=m/summationdisplay\ni=1/bracketleftbigg\n|ai|2+|wi|2+|bi|2\n2/bracketrightbigg\nhold. In fact, choosing optimal parameters ( ai,wi,bi) and scaling such that |ai|=|wi|=|bi|\n(using the homogeneity of the ReLU function), the left and right ha nd side can be made equal.\nTheorem 2.37 (A priori estimate: Hinge loss) .LetL(z) = max{0,1+z}. Assume that Pis a\ndata distribution such that sptP⊆[−R,R]d. Consider the regularized (empirical) risk functional\n/hatwideRn,λ(a,w,b) =1\nnn/summationdisplay\ni=1L/parenleftbig\n−yxif(a,w,b)(xi)/parenrightbig\n+λ\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\n.\nwhereλ=max{1,R}√mandf(a,w,b)=fmwith weights a= (ai)m\ni=1,w= (wi)m\ni=1,(bi)m\ni=1. For any\nδ∈(0,1), with probability at least 1−δover the choice of iid data points xisampled from P, the\nminimizer (ˆa,ˆw,ˆb)satisfies\n(2.17)R(f(ˆa,ˆw,ˆb))≤2Qmax{1,R}√m+2 max{1,R}/radicalbigg\nlog(2d+2)\nn+2Qmax{1,R}/radicalbigg\nlog(2/δ)\nn.\nSinceL(0) = 1, the estimate (2.9)implies furthermore that\nP({x:yx·f(ˆa,ˆw,ˆb)(x)<0})≤ R(f(ˆa,ˆw,ˆb)).\nProof.The penalty term is nothing else than the Barron norm in a convenient parametrization.\nWe can therefore just treat it as /ba∇dblhm/ba∇dblBand work on the level of function spaces instead of the\nlevel of parameters.\nStep 1. Leth∗be a function such that yx·h∗(x)≥1 on spt Pand/ba∇dblh∗/ba∇dblB≤Q. 
Since the\nrandom sample points xilie in spt Pwith probability 1, we find that Rn(h∗) = 0 for any sample.\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 15\nUsing Lemma 2.31 for the empirical distribution Pn=1\nn/summationtextn\ni=1δxi, we know that there exists a\nnetworkhmwithmneurons such that\n/ba∇dblhm/ba∇dblB≤ /ba∇dblh∗/ba∇dblB,/hatwideRn(hm)≤/hatwideRn(h∗)+/ba∇dblh∗/ba∇dblBmax{1,R}√m≤Qmax{1,R}√m.\nThus\n/hatwideRn,λ(ˆa,ˆw,ˆb)≤max{1,R}Q√m+λQ= 2λQ.\nIn particular,\nm/summationdisplay\ni=1|ˆai|/bracketleftbig\n|ˆwi|+|ˆbi|/bracketrightbig\n≤2Q.\nStep 2. Note that /ba∇dblf/ba∇dblL∞((−R,R)d)≤ /ba∇dblf/ba∇dblBmax{1,R}and thatL(z)≤ |z|. Using the\nRademacher risk bound of Lemma 2.36, we find that with probability at least 1−δ, the es-\ntimate\nR(ˆa,ˆw,ˆb)≤/hatwideRn(ˆa,ˆw,ˆb)+2 max {1,R}/radicalbigg\nlog(2d+2)\nn+2Qmax{1,R}/radicalbigg\nlog(2/δ)\nn\n≤2Qmax{1,R}√m+2 max{1,R}/radicalbigg\nlog(2d+2)\nn+2Qmax{1,R}/radicalbigg\nlog(2/δ)\nn\nholds. /square\nRemark 2.38.The rate is1√m+1√n. The rate is worse in mcompared to L2-approximation,\nsinceL1-estimates were used and L1-loss behaves like a norm whereas L2-loss behaves like the\nsquare of a norm. On the other hand, the classification complexity Q(which takes the place of\nthe norm) only enters linearly rather than quadratically.\nWhile Lemma 2.32does notapply directlyto hinge-loss, abetter ratec anbe obtaineddirectly:\nChooseh∗∈ Bsuch that /ba∇dblh∗/ba∇dblB≤Qandyx·h∗(x)≥1 almost everywhere. Use the direct\napproximation theorem to approximate 2 h∗byhm. Then\n/vextendsingle/vextendsingleL(yx·h∗(x))−L(yx·hm(x))/vextendsingle/vextendsingle≤/vextendsingle/vextendsinglehm−h∗/vextendsingle/vextendsingle2(x)∀x∈sptP\nsinceL(yx·h∗(x)) =L(yx·hm(x)) if|hm−h∗|(x)≤1. Considering the differently regularized\nloss functional\n(2.18) /hatwideR∗\nn,λ(a,w,b) =1\nnn/summationdisplay\ni=1L(−yx·h(a,w,b)(x))+/parenleftBigg\nλ\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig/parenrightBigg2\nfor the same λ=max{1,R}√m, we obtain the a priori estimate\n(2.19)\nR(h(a,w,b))≤4Q2max{1,R}2\nm+2 max{1,R}/radicalbigg\nlog(2d+2)\nn+4Q2max{1,R}2/radicalbigg\nlog(2/δ)\nn\nfor the empirical risk minimizer with probability at least 1 −δover the choice of training sample\nx1,...,x n.\nSince loss functions with exponential tails have implicit regularizing pro perties, they might\nbe preferred over hinge loss. We therefore look for similar estimate s for such loss functions. The\nexponential function has a convenient tail behaviour, but inconve nient fast growth, which makes\nit locally but not globally Lipschitz. While this can be handled using the L∞-version of the\ndirect approximation theorem, the estimates remain highly unsatisf actory due to the dimension-\ndependent constant. The loss function L(y) = log(1+exp( y)) combines the best of both worlds,\nas both powerful a priori estimates and implicit regularization are av ailable.\n16 WEINAN E AND STEPHAN WOJTOWYTSCH\nThe proof is only slightly complicated by the fact that unlike for hinge lo ss, minimizers do\nnot exist. Note that the risk decay rate log m/bracketleftbig\nm−1/2+n−1/2/bracketrightbig\nis almost the same as in the case\nwhere minimizers exist, due to the fast decrease of the exponentia l function.\nTheorem 2.39 (A priori estimates: log-loss, part I) .LetL(z) = log/parenleftbig\n1+exp(z)/parenrightbig\n. 
Consider the\nregularized (empirical) risk functional\n/hatwideRn,λ(a,w,b) =1\nnn/summationdisplay\ni=1L/parenleftbig\n−yxif(a,w,b)(xi)/parenrightbig\n+λ\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\n.\nwhereλ=max{1,R}√m. With probability at least 1−δover the choice of iid data points xisampled\nfromP, the minimizer (ˆa,ˆw,ˆb)satisfies\nR(ˆhm)≤2Qmax{1,R}/parenleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg2Qmax{1,R}√m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenrightbigg\n·/parenleftBigg\n1√m+/radicalbigg\n2 log(2/δ)\nn/parenrightBigg\n(2.20)\n+2 max{1,R}/radicalbigg\n2 log(2d+2)\nn.\nBy(2.9), this implies estimates for the measure of mis-classiified o bjects.\nProof.There exists h∗∈ Bsuch that /ba∇dblh∗/ba∇dblB≤Qandyx·h∗(x)≥1 forP-almost every x.\nWith probability 1 over the choice of points x1,...,x nwe have/hatwideRn(µh∗)≤L(−µ)≤exp(−µ).\nIn particular, Using Lemma 2.32 for the empirical risk functional, the re exists a finite neural\nnetworkhmwithmneurons such that\n/ba∇dblhm/ba∇dblB≤Q,/hatwideRn(µhm)≤exp(−µ)+µQmax{1,R}√m+λµQ= exp(−µ)+µ2Qmax{1,R}√m\nThe optimal value of µ=−log/parenleftBig\n2Qmax{1,R}√m/parenrightBig\nis easily found, so if (ˆ a,ˆw,ˆb)∈Rm×Rmd×Rm\nis the minimizer of the regularized risk functional, then\n/hatwideRn,λ(ˆa,ˆw,ˆb)≤/parenleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg2Qmax{1,R}√m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenrightbigg2Qmax{1,R}√m.\nIn particular, using that λ=max{1,R}√mwe find that ˆhm=h(ˆa,ˆw,ˆb)satisfies the norm bound\n/ba∇dblˆhm/ba∇dblB≤/hatwideRn,λ(ˆhm)\nλ≤2Q/parenleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg2Qmax{1,R}√m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenrightbigg\n.\nSince/ba∇dblh/ba∇dblC0≤ /ba∇dblh/ba∇dblBmax{1,R}, we find that with probability at least 1 −δover the choice of\nsample points, we have\nR(ˆhm)≤/hatwideRn(ˆhm)+2 max {1,R}/radicalbigg\n2 log(2d+2)\nn\n+/bracketleftbigg\n2Q/parenleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg2Qmax{1,R}√m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenrightbigg/bracketrightbigg\nmax{1,R}/radicalbigg\n2 log(2/δ)\nn\n= 2Qmax{1,R}/parenleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg2Qmax{1,R}√m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenrightbigg\n·/parenleftBigg\n1√m+/radicalbigg\n2 log(2/δ)\nn/parenrightBigg\n+2 max{1,R}/radicalbigg\n2 log(2d+2)\nn.\n/square\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 17\nWe carry out a second argument for different regularization. Note that forL(z) = log/parenleftbig\n1 +\nexp(z)/parenrightbig\nwe have\nL′(z) =exp(z)\n1+exp(z)=1\n1+exp(−z), L′′(z) =exp(−z)\n(1+exp( −z))2=1\n/parenleftbig\nez/2+e−z/2/parenrightbig2\nso 0≤L′′(z)≤1\n4andδλ≤exp(−λ).\nTheorem 2.40 (A priori estimates: log-loss, part II) .LetL(z) = log/parenleftbig\n1 + exp(z)/parenrightbig\n. Consider\nthe regularized (empirical) risk functional\n/hatwideR∗\nn,λ(a,w,b) =1\nnn/summationdisplay\ni=1L/parenleftbig\n−yxif(a,w,b)(xi)/parenrightbig\n+/parenleftBigg\nλ\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig/parenrightBigg2\n.\nwhereλ=max{1,R}√m. 
With probability at least 1−δover the choice of iid data points xisampled\nfromP, the minimizer (ˆa,ˆw,ˆb)satisfies\nR(ˆhm)≤/parenleftBigg\n1+2/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQ2max{1,R}2\n4m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle2/parenrightBigg\nQ2max{1,R}2\n4m(2.21)\n+2Qmax{1,R}/parenleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg4Q2max{1,R}2\nm/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenrightbigg/radicalbigg\n2 log(2/δ)\nn\n+2 max{1,R}/radicalbigg\n2 log(2d+2)\nn.\nBy(2.9), this implies estimates for the measure of mis-classiified o bjects.\nProof.There exists h∗∈ Bsuch that /ba∇dblh∗/ba∇dblB≤Qandyx·h∗(x)≥1 forP-almost every x. With\nprobability 1 over the choice of points x1,...,x nwe have/hatwideRn(µh∗)≤L(−µ)≤exp(−µ). In\nparticular, Using Lemma 2.31, for given µ >0 there exists a finite neural network hmwithm\nneurons such that /ba∇dblhm/ba∇dblB≤µ/bracketleftbig\n1+δµ/bracketrightbig\nQ≤2µQand\n/hatwideRn(hm)≤L(−µ)+[1+δµ]2(µQ)2max{1,R}2\n8m≤exp(−µ)+(µQ)2max{1,R}2\n4m\nfor sufficiently large µ. We choose specifically µ=−log/parenleftBig\nQ2max{1,R}2\n4m/parenrightBig\nto simplify expressions.\nIf (ˆa,ˆw,ˆb)∈Rm×Rmd×Rmis the minimizer of the regularized risk functional, then\n/hatwideRn,λ(ˆa,ˆw,ˆb)≤/parenleftBigg\n1+2/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQ2max{1,R}2\n4m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle2/parenrightBigg\nQ2max{1,R}2\n4m.\nIn particular, using that λ=max{1,R}√mwe find that ˆhm=h(ˆa,ˆw,ˆb)satisfies the norm bound\n/ba∇dblˆhm/ba∇dbl2\nB≤/hatwideRn,λ(ˆhm)\nλ2≤Q2/parenleftBigg\n1+2/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg4Q2max{1,R}2\nm/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle2/parenrightBigg\n.\nSince/ba∇dblh/ba∇dblC0≤ /ba∇dblh/ba∇dblBmax{1,R}, we find that with probability at least 1 −δover the choice of\nsample points, we have\nR(ˆhm)≤/hatwideRn(ˆhm)+2 max {1,R}/radicalbigg\n2 log(2d+2)\nn\n+/parenleftBigg\n1+2/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQ2max{1,R}2\n4m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle2/parenrightBigg\nQ2max{1,R}2\n4m\n18 WEINAN E AND STEPHAN WOJTOWYTSCH\n+/parenleftBigg\n1+2/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQ2max{1,R}2\n4m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle2/parenrightBigg\nQ2max{1,R}2/radicalbigg\n2 log(2/δ)\nn\n=/parenleftBigg\n1+2/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQ2max{1,R}2\n4m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle2/parenrightBigg\nQ2max{1,R}2/bracketleftBigg\n1\n4m+/radicalbigg\n2 log(2/δ)\nn/bracketrightBigg\n+2 max{1,R}/radicalbigg\n2 log(2d+2)\nn.\n/square\n2.5.General hypothesis classes: Multi-layer neural networks a nd deep residual net-\nworks.Most parts of the discussion above are not specific to two-layer ne ural networks and\ngeneralize to other function classes. 
In the proofs, we used the following ingredients.

(1) The fact that every sufficiently smooth function is Barron was necessary to prove that a binary classification problem is solvable using two-layer neural networks if and only if the classes have positive distance.
(2) The direct approximation theorem in Barron space was used to obtain the mostly correct classification result Corollary 2.19 and in Section 2.4 to obtain a priori estimates.
(3) The Rademacher complexity of the unit ball of Barron space entered in the a priori estimates to bound the discrepancy between empirical risk and population risk uniformly on the class of functions with bounded Barron norm.

All function classes discussed below are such that Barron space embeds continuously into them, and they embed continuously into the space of Lipschitz-continuous functions. Thus the first of the three properties is trivially satisfied. In particular, the set of solvable classification problems in all function spaces is the same – the important question is how large the associated classification complexity Q is. For binary classification problems in which the classes have a positive spatial separation, the question of choosing an appropriate architecture really is a question about variance reduction.

We consider a general hypothesis class H which can be decomposed in two ways: H = ∪_{Q>0} H_Q (as a union of sets of low complexity H_Q = Q · H_1) and H = ∪_{m=1}^∞ H_m (as the union of finitely parametrized sub-classes). The closure is to be considered in L^1(P). Set H_{m,Q} = H_m ∩ H_Q. We make the following assumptions:

(1) P is a data distribution on a general measure space Ω.
(2) y_x : Ω → {−1, 1} is P-measurable.
(3) H ⊆ L^2(P) and there exists h ∈ H_Q for some Q > 0 such that y_x · h(x) ≥ 1 for P-almost all x ∈ Ω.
(4) The hypothesis class H has the following direct approximation property: if f ∈ H_Q, then for every m ∈ N there exists f_m ∈ H_{m,Q} such that ∥f_m − f∥_{L^2(P)} ≤ c_1 Q m^{−1/2}.
(5) The hypothesis class H satisfies the Rademacher bound E_{S∼P^n} Rad(H_Q; S) ≤ c_2 Q n^{−1/2}.
(6) The hypothesis class H satisfies ∥h∥_{L^∞(P)} ≤ c_3 Q for all h ∈ H_Q.

Then Corollary 2.19 and Theorems 2.37, 2.39 and 2.40 are valid also for H with slightly modified constants depending on c_1, c_2, c_3. Estimates for classes of multi-layer networks can be presented in this form.

From the perspective of approximation theory, multi-layer neural networks are elements of tree-like function spaces, whereas deep (scaled) residual networks are elements of a flow-induced function space. Estimates for the Rademacher complexity and the direct approximation theorem in the flow-induced function and tree-like function spaces can be found in [EMW19a] and [EW20b] respectively.

Remark 2.41. All currently available function spaces describe fully connected neural networks (FNNs) or residual networks with fully connected residual blocks (ResNets). Usually for classification of images, convolutional neural networks (CNNs) are used. The class of convolutional neural networks is a subset of the class of fully connected networks, so all estimates on the function space level equally apply to convolutional networks. We conjecture that sharper estimates should be available for CNNs.

3. Multi-label classification

We now consider the more relevant case of multi-label classification. Many arguments from binary classification carry over directly, and we follow the outline of the previous sections.

3.1. Preliminaries.
Binary classification problems are particularly convenient mathemat ically\nsince two identical classes can be easily included on the real line by the labels±1. Three or more\nclasses will have different properties with respect to one another s ince the adjacency relations.\nDefinition 3.1. Amulti-label classification problem is a collection ( P,C1,...,C k,y1,...,y k)\nwherePis a probability distribution on Rd,C1,...,C k⊆Rdare disjoint P-measurable sets such\nthatP(/uniontextk\nj=1Cj) = 1 and the vectors yi∈Rkare the labels of the classes.\nThecategory function\n(3.1) y:Rd→R, y x=/braceleftBigg\nyjx∈Cj\n0 else\nisP-measurable. Furthermore, we consider the complementary set- valuedexcluded category se-\nlection\nY◦:Rd→(Rd)k−1,Y◦(x) =/braceleftBigg\n{y1,...,y j−1,yj+1,...,y k}x∈Cjfor somej\n{0,...,0} else.\nDefinition 3.2. We say that a multi-label classification problem is solvable in a hypothesis class\nHofP-measurable functions if there exists h∈ Hsuch that\n(3.2) /a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht ≥max\ny∈Y◦(x)/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht+1P−almost everywhere .\nIfH=/uniontext\nQ>0HQ, we again set\n(3.3) QH(P,C1,...,C k) = inf{Q′: (P,C1,...,C k) is solvable in HQ′}.\nRemark 3.3.If all classes are assumed to be equally similar (or dissimilar), then yi≡ei. If\nsome classes are more similar than others (and certain misclassificat ions are worse than others),\nit is possible to encode these similarities in the choice of the category f unctionyby making the\nvectors not orthonormal.\nWe defineCjas before.\nLemma 3.4. LetHbe a hypothesis class of L-Lipschitz functions from RdtoRk. If a multi-label\nclassification problem is solvable in H, then\ndist(Ci,Cj)≥2\nL|yi−yj|\nfor alli/\\e}atio\\slash=j.\n20 WEINAN E AND STEPHAN WOJTOWYTSCH\nProof.Letxi∈Ciandxj∈Cj. Then\n/a\\}b∇acketle{th(xi),yi/a\\}b∇acket∇i}ht ≥ /a\\}b∇acketle{th(xj),yi/a\\}b∇acket∇i}ht+1,/a\\}b∇acketle{th(xj),yj/a\\}b∇acket∇i}ht ≥ /a\\}b∇acketle{th(xi),yj/a\\}b∇acket∇i}ht+1.\nso\n2≤ /a\\}b∇acketle{th(xi)−h(xj),yi/a\\}b∇acket∇i}ht−/a\\}b∇acketle{th(xi)−h(xj),yj/a\\}b∇acket∇i}ht\n=/a\\}b∇acketle{th(xi)−h(xj),yi−yj/a\\}b∇acket∇i}ht\n≤ |h(xi)−h(xj)||yi−yj|\n≤L|xi−xj||yi−yj|.\nTaking the infimum over xi,xjestablishes the result. /square\nTo be able to solve classification problems, we need to assume that fo r everyithere exists\nzi∈Rdsuch that /a\\}b∇acketle{tzi,yi/a\\}b∇acket∇i}ht>maxj/ne}ationslash=i/a\\}b∇acketle{tzi,yj/a\\}b∇acket∇i}ht. Then up to normalization, there exists a set of\nvectors{z1,...,z k} ⊆Rksuch that\n(3.4) /a\\}b∇acketle{tzi,yi/a\\}b∇acket∇i}ht ≥max\nj/ne}ationslash=i/a\\}b∇acketle{tzi,yj/a\\}b∇acket∇i}ht+1.\nLemma 3.5. We consider the hypothesis class given by Barron space. Then the classification\nproblem (P,C1,...,C k)is solvable if and only if δ:= inf i/ne}ationslash=jdist(Ci,Cj)>0and ifspt(P)⊆\nBR(0), then\nQB(P,C1,...,C k,y1,...,y k)≤cd√\nk/parenleftbiggR+δ\nδ/parenrightbiggd\nmax\n1≤i≤k|zi|.\nProof.The finite separation condition is necessary, since every Barron fu nction is Lipschitz. To\nprove that it is also sufficient, we proceed like for binary classification .\nSet¯h(x) =ziif dist(x,Ci)<δ/2 and 0 if there exists no isuch that dist( x,Ci)<δ/2. By the\ndefinition of δ, this uniquely defines the function. 
Again, we take the mollification h:=ηδ/2∗¯h\nand see that\n(1)h(x) =¯h(x) =zion¯Cifor alliand\n(2)/ba∇dblh/ba∇dblB≤cd√\nk/parenleftbigR+δ\nδ/parenrightbigdmax1≤i≤k|zi|.\n/square\nLemma 3.6 (Mostly correct classification) .Assume that the multi-label classification problem\n(P,C1,...,C k,y1,...,y k)has complexity at most Qin Barron space. Then there exists a neural\nnetworkhm(x) =1\nm/summationtextm\ni=1aiσ(wT\nix+bi)such that\n/ba∇dblfm/ba∇dblB≤Q,P/parenleftbigg\n{x:/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht ≤max\ny∈Y◦(x)/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht}/parenrightbigg\n≤Q2\nm\n3.2.Risk minimization. While different loss functions are popular in binary classification,\nalmost all works on multi-label classification use cross-entropy loss . In this setting, the output\nof a classifier function h(x) is converted into a probability distribution on the set of classes,\nconditional on x. The loss function is the cross-entropy/Kullback-Leibler (KL) dive rgence of\nthis distribution with the distribution which gives probability 1 to the tr ue label. The most\ncommon way of normalizing the output to a probability distribution lead s to a loss function with\nexponential tails and comparable behavior as in Theorems 2.39 and 2.4 0.\nDue to its practical importance, we focus on this case. Other sche mes like vector-valued\nL2-approximation, or approximation of hinge-loss type with loss funct ion\nL(x) = min/braceleftbig\n0,1−min\ny∈Y◦(x)/a\\}b∇acketle{th(x),yx−y/a\\}b∇acket∇i}ht/bracerightbig\nare also possible and lead to behavior resembling results of Theorem 2 .37 and Remark 2.38.\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 21\nConsider the cross-entropy risk functional\nR(h) =−/integraldisplay\nRdlog/parenleftBigg\nexp(/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht)\n/summationtextk\ni=1exp(/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht)/parenrightBigg\nP(dx) =/integraldisplay\nRdL(h(x),yx)P(dx)\nwhereL(z,y) =−log/parenleftBig\nexp(/an}bracketle{tz,y/an}bracketri}ht)/summationtextk\ni=1exp(/an}bracketle{tz,yi/an}bracketri}ht)/parenrightBig\n. This is the most commonly used risk functional in\nmulti-class classification. The quantities\npj(z,t) =exp(/a\\}b∇acketle{tz,y/a\\}b∇acket∇i}ht)\n/summationtextk\ni=1exp(/a\\}b∇acketle{tz,yi/a\\}b∇acket∇i}ht)\nare interpreted as the probability of the event y=yipredicted by the model, conditional on x.\nIf/a\\}b∇acketle{tz,yj/a\\}b∇acket∇i}ht>maxi/ne}ationslash=j/a\\}b∇acketle{tz,yi/a\\}b∇acket∇i}htfor somej, then lim λ→∞pi(λz,yj) =δijotherwise the probability is\nshared equally between all categories which have the same (maximal) inner product as λ→ ∞.\nLemma 3.7 (Risk minimization and correct classification) .Assume that R(h)<ε. 
Then\n(3.5) P/parenleftBiggk/uniondisplay\ni=1{x∈Ci:/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht ≤max\nj/ne}ationslash=i/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht}/parenrightBigg\n≤ε\nlog2.\nProof.Ifx∈Ciand/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht ≤maxj/ne}ationslash=i/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht, then\nexp(/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht)\n/summationtextk\ni=1exp(/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht)≤exp(/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht)\nexp(/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht)+exp(max 1≤j≤k/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht)≤1\n2\nso\nL(h(x),x)≥ −log(1/2) = log(2).\nThus\nε≥/integraldisplay\nRdL(h(x),yx)P(dx)\n≥/integraldisplay\n/uniontextk\ni=1{x∈Ci:/an}bracketle{th(x),yi/an}bracketri}ht≤maxj/negationslash=i/an}bracketle{th(x),yj/an}bracketri}ht}L(h(x),yx)P(dx)\n≥/integraldisplay\n/uniontextk\ni=1{x∈Ci:/an}bracketle{th(x),yi/an}bracketri}ht≤maxj/negationslash=i/an}bracketle{th(x),yj/an}bracketri}ht}log(2)P(dx)\n= log(2) P/parenleftBiggk/uniondisplay\ni=1{x∈Ci:/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht ≤max\nj/ne}ationslash=i/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht}/parenrightBigg\n.\n/square\nBy a similar argument as before, the cross-entropy functional do es not have minimizers. We\ncompute the margin functionals on general functions and function s which are classified correctly.\nLemma 3.8 (Margin functional) .LetHbe the unit ball in Barron space. Then\n(1)\nlim\nλ→∞R(λh)\nλ=/integraldisplay\nRdmax\n1≤i≤k/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht−/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}htP(dx).\nThe convergence is uniform over the hypothesis class H.\n(2) If/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht ≥maxy∈Y◦(x)/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht+εP-almost everywhere for some ε>0, then\n(3.6) lim\nλ→∞log/parenleftbig\nR(λh)/parenrightbig\nλ=−min\nx∈sptP/bracketleftbigg\n/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht−max\ny∈Y◦(x)/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht/bracketrightbigg\n.\n22 WEINAN E AND STEPHAN WOJTOWYTSCH\nIfPsatisfies the uniform neighbourhood growth condition (2.13), then for any ε>0the\nconvergence is uniform over the class Hc\nε={h∈ H:/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht ≥maxy∈Y◦(x)/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht+\nε}.\nProof.First claim. Note that\nexp(λ/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht)\nkmax1≤j≤kexp(λ/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht)≤exp(λ/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht)\n/summationtextk\nj=1exp(λ/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht)≤exp(λ/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht)\nmax1≤j≤kexp(λ/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht)\nso\nλ/bracketleftbigg\n/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht−max\n1≤j≤k/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht/bracketrightbigg\n−log(k)≤log/parenleftBigg\nexp(λ/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht)\n/summationtextk\nj=1exp(λ/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht)/parenrightBigg\n≤λ/bracketleftbigg\n/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht−max\n1≤j≤k/a\\}b∇acketle{th(x),yj/a\\}b∇acket∇i}ht/bracketrightbigg\n.\nWe compute\nlim\nλ→∞R(λh)\nλ=−lim\nλ→∞1\nλ/integraldisplay\nRdlog/parenleftBigg\nexp(λ/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht)\n/summationtextk\ni=1exp(λ/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht)/parenrightBigg\nP(dx)\n=/integraldisplay\nRdmax\n1≤i≤k/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht−/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}htP(dx)\nThe integrand is non-negative, so the functional is minimized ifand on ly if everythingis classified\ncorrectly.\nSecond claim. 
Assume that /a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht ≥maxy∈Y◦(x)/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht+εon sptPfor someε >0.\nThen\n−log/parenleftBigg\nexp(λ/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht)\n/summationtextk\ni=1exp(λ/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht)/parenrightBigg\n=−log/parenleftBigg\n1\n1+/summationtext\ny∈Y◦(x)exp/parenleftbig\nλ/bracketleftbig\n/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht−/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht/bracketrightbig/parenrightbig/parenrightBigg\n=/summationdisplay\ny∈Y◦(x)exp/parenleftbig\nλ/bracketleftbig\n/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht−/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht/bracketrightbig/parenrightbig\n+O(exp(−2λε))\nby Taylor expansion. The proof now follows that of Lemma 2.27. /square\nIf/ba∇dblh/ba∇dblB≤1 andλ≫1, we find that\nR(λh)≈/braceleftBigg\nλ/integraltext\nRdmax1≤i≤k/a\\}b∇acketle{th(x),yi/a\\}b∇acket∇i}ht−/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}htP(dx) in general\ne−λexp/parenleftbig\n−minx∈sptP/bracketleftbig\n/a\\}b∇acketle{th(x),yx/a\\}b∇acket∇i}ht−maxy∈Y◦(x)/a\\}b∇acketle{th(x),y/a\\}b∇acket∇i}ht/bracketrightbig/parenrightbig\nif everything is classified correctly.\nPrimarily, the functional strives for correct classification with a (v ery weak) drive towards max-\nimum margin within correct classification. We briefly note that the fun ction\n(3.7)\nL:Rk×Rd→(0,∞), L(z,x) =−log/parenleftBigg\nexp(/a\\}b∇acketle{tyx,z/a\\}b∇acket∇i}ht)\n/summationtextk\ni=1exp(/a\\}b∇acketle{tyi,z/a\\}b∇acket∇i}ht)/parenrightBigg\n=/a\\}b∇acketle{tyx,z/a\\}b∇acket∇i}ht−log/parenleftBiggk/summationdisplay\ni=1exp(/a\\}b∇acketle{tyi,z/a\\}b∇acket∇i}ht)/parenrightBigg\nis Lipschitz-continuous in zsince\n∇zL(z,x) =yx−k/summationdisplay\ni=1exp(/a\\}b∇acketle{tyi,z/a\\}b∇acket∇i}ht)/summationtextk\nj=1exp(/a\\}b∇acketle{tyj,z/a\\}b∇acket∇i}ht)yi (3.8)\nis uniformly bounded in the ℓ2-norm by max 1≤i≤k|yi|. To use the direct approximation theorem,\nonly the continuity in the z-variables is needed. The proofs for the following results follow as in\nLemma 2.31 and Theorem 2.39.\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 23\nLemma 3.9 (Functions of low risk) .•Assume that h∗∈ B. Then for every m∈Nthere\nexists a two-layer network hmwithmneurons such that\n/ba∇dblhm/ba∇dblB≤ /ba∇dblh∗/ba∇dblBandR(hm)≤ R(h∗)+/ba∇dblh∗/ba∇dblBmax{1,R}max1≤i≤k|yi|√m.\n•In particular, if (P,C1,...,C k)has complexity ≤Qin Barron space, then for any λ>0\nthere exists a two-layer network hmwithmneurons such that\n/ba∇dblhm/ba∇dblB≤λQ, R(hm)≤exp(−λ)+λQmax{1,R}max1≤i≤k|yi|√m\n•Specifyingλ=−log/parenleftBig\nQmax{1,R}max1≤i≤k|yi|√m/parenrightBig\n, we find that there exists hmsuch that\n/ba∇dblhm/ba∇dblB≤/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQmax{1,R}max1≤i≤k|yi|√m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingleQ,\nR(hm)≤/bracketleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQmax{1,R}max1≤i≤k|yi|√m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbiggQmax{1,R}max1≤i≤k|yi|√m. (3.9)\nBeforeestablishingaprioriestimatesformulti-labelclassificationw ithcross-entropyloss,letus\nrecall the following vector-valued version of the ‘contraction lemma ’ for Rademacher complexity.\nLemma 3.10. 
[Mau16, Corollary 1] LetS={x1,...,x n} ⊆Rd,Hbe a class of functions\nh:Rd→Rkand letG:Rk→Rhave Lipschitz constant [G]with respect to the Euclidean norm.\nThen\nE/bracketleftBigg\nsup\nh∈H1\nnn/summationdisplay\ni=1ξiG(f(xi))/bracketrightBigg\n≤√\n2[G]E\nsup\nh∈H1\nnn/summationdisplay\ni=1k/summationdisplay\nj=1ξijfj(xi)\n\nwhereξi,ξijare iid Rademacher variables and fj(xi)is thej-th component of f(xi).\nIn particular\nE/bracketleftBigg\nsup\nh∈H1\nnn/summationdisplay\ni=1ξiG(f(xi))/bracketrightBigg\n≤√\n2[G]E\nsup\nh∈H1\nnn/summationdisplay\ni=1k/summationdisplay\nj=1ξijfj(xi)\n\n≤√\n2[G]E\nk/summationdisplay\nj=1sup\nh∈H1\nnn/summationdisplay\ni=1ξijfk(xi)\n\n=√\n2[G]k/summationdisplay\nj=1E/bracketleftBigg\nsup\nh∈H1\nnn/summationdisplay\ni=1ξijfk(xi)/bracketrightBigg\n=√\n2[G]kRad(H;S).\nTheorem 3.11 (A priori estimates) .Consider the regularized empirical risk functional\n/hatwideRn,λ(a,w,b) =1\nnn/summationdisplay\ni=1L/parenleftbig\n−yxif(a,w,b)(xi)/parenrightbig\n+λ\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\n.\nwhereλ=max{1,R}√m. For anyδ∈(0,1), with probability at least 1−δover the choice of iid data\npointsxisampled from P, the minimizer (ˆa,ˆw,ˆb)satisfies\nR(ˆhm)≤2Qmax{1,R}max\n1≤i≤k|yi|/parenleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg2Qmax{1,R}max1≤i≤k|yi|√m/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenrightbigg\n·/parenleftBigg\n1√m+/radicalbigg\n2 log(2/δ)\nn/parenrightBigg\n24 WEINAN E AND STEPHAN WOJTOWYTSCH\n+4kmax\n1≤i≤k|yk|max{1,R}/radicalbigg\n2 log(2d+2)\nn.(3.10)\nDue to Lemma 3.7, this implies a prioriestimates alsofor the measure o fmis-classifiedobjects.\nAs for binary classification, the rate can be improved at the expens e of larger constants. The\nproofs for the following results follow as in Lemma 2.32 and Theorem 2.4 0. We further observe\nthatLin (3.7) is smooth with Hessian\nD2\nzL(z,x) =−k/summationdisplay\ni=1exp(/a\\}b∇acketle{tyi,z/a\\}b∇acket∇i}ht)\n/summationtextk\nj=1exp(/a\\}b∇acketle{tyj,z/a\\}b∇acket∇i}ht)yi⊗yi+k/summationdisplay\ni,j=1exp(/a\\}b∇acketle{tyi,z/a\\}b∇acket∇i}ht) exp(/a\\}b∇acketle{tyj,z/a\\}b∇acket∇i}ht)\n/parenleftBig/summationtextk\nl=1exp(/a\\}b∇acketle{tyl,z/a\\}b∇acket∇i}ht)/parenrightBig2yi⊗yj.\nIn particular, the largest eigenvalue of D2\nzis bounded above by 2 max 1≤i≤k|yi|2. Thus the\nfollowing hold.\nLemma 3.12 (Functions of low risk: Rate improvement) .•If(P,C1,...,C k)has complex-\nity≤Qin Barron space, then for any λ>0there exists a two-layer network hmwithm\nneurons such that\n/ba∇dblhm/ba∇dblB≤λQ, R(hm)≤exp(−λ)+2 max\n1≤i≤k|yi|2/parenleftbiggλQmax{1,R}√m/parenrightbigg2\n•Specifyingλ=−log/parenleftBig\nQ2max{1,R}2max1≤i≤k|yi|\nm/parenrightBig\n, we find that there exists hmsuch that\n/ba∇dblhm/ba∇dblB≤/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQ2max{1,R}2max1≤i≤k|yi|2\nm/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingleQ,\nR(hm)≤/bracketleftbigg\n2+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQ2max{1,R}2max1≤i≤k|yi|2\nm/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbiggQ2max{1,R}2max1≤i≤k|yi|2\nm. 
(3.11)\nTheorem 3.13 (A priori estimates: Rate improvement) .Consider the regularized empirical risk\nfunctional\n/hatwideRn,λ(a,w,b) =1\nnn/summationdisplay\ni=1L/parenleftbig\n−yxif(a,w,b)(xi)/parenrightbig\n+/parenleftBigg\nλ\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig/parenrightBigg2\n.\nwhereλ=max{1,R}√m. For anyδ∈(0,1), with probability at least 1−δover the choice of iid data\npointsxisampled from P, the minimizer (ˆa,ˆw,ˆb)satisfies\nR(ˆhm)≤2/parenleftBigg\n1+2/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbiggQ2max{1,R}2max1≤i≤k|yi|2\nm/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle2/parenrightBigg\nQ2max{1,R}2max1≤i≤k|yi|2\nm\n+2Qmax{1,R}max\n1≤i≤k|yi|/parenleftbigg\n1+/vextendsingle/vextendsingle/vextendsingle/vextendsinglelog/parenleftbigg4Q2max{1,R}2max1≤i≤k|yi|2\nm/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenrightbigg/radicalbigg\n2 log(2/δ)\nn\n+4kmax{1,R}max\n1≤i≤k|yi|/radicalbigg\n2 log(2d+2)\nn.(3.12)\nUsing 3.7, we can also show that the measure of mis-classified object s obeys the same a priori\nbound (up to a factor log(2)−1).\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 25\n4.Problems with infinite complexity\nWe have shown above that classification problems have finite complex ity if and only if the\nclasses have positive distance. This includes many, but not all classifi cation problems of practical\nimportance. To keep things simple, we only discuss binary classificatio n using two-layer neural\nnetworks. When classes are allowed to meet, two things determine h ow difficult mostly correct\nclassification is:\n(1) The geometry of the boundary between classes. If, for exam ple,C+∩C−cannot be\nexpressedlocallyasthelevelsetofaBarronfunction, then everytwo-layerneuralnetwork\nclassifier must necessarily misclassify some data points, even in the in finite width limit.\n(2) The geometry of the data distribution. If most data points are well away from the class\nboundary, it is easier to classify a lot of data samples correctly than if a lot of data is\nconcentrated by the class boundary.\nTo distinguish the two situations, we introduce the following concept .\nDefinition 4.1. We say that a binary classification problem is weakly solvable in a hypoth esis\nclassHif there exists h∈ Hsuch thatyx·h(x)>0 everywhere.\nNote that some sample calculations for margin functionals in Section 2 .3 are only valid for\nstrongly solvable classification problems and cannot be salvaged at a ny expense if a problem\nfails to be weakly solvable. Weakly solvable classification problems and c oncentration at the\nboundary can be studied easily even in one dimension.\nExample 4.2.LetPα=α+1\n2·|x|α·L1\n(−1,1)be the data distribution with density |x|αon (−1,1)\nforα>−1, normalized to a probability measure. Assume the two classes are C−= (−1,0) and\nC+= (0,1). For any of the risk functionals discussed above, the best class ifier in the ball of\nradiusQBarron space is fQ(x) =Qx\n2=Qσ(x)−σ(−x)\n2. We compute\nR(fQ) =α+1\n2/integraldisplay1\n−1L/parenleftbigg\n−Q|x|\n2/parenrightbigg\n|x|αdx\n= (α+1)/parenleftbiggQ\n2/parenrightbigg−(1+α)/integraldisplay1\n0L/parenleftbigg\n−Qx\n2/parenrightbigg/parenleftbiggQx\n2/parenrightbiggαQ\n2dx\n= (α+1)/parenleftbiggQ\n2/parenrightbigg−(1+α)/integraldisplayQ/2\n0L(−z)|z|αdz\n∼/bracketleftbigg\n(α+1)/integraldisplay∞\n0L(−z)|z|αdz/bracketrightbigg/parenleftbigg2\nQ/parenrightbiggα+1\nfor any loss function of the form discussed above. 
Thus all data po ints are classified correctly\nand the risk of any standard functional decays as Q−(α+1)as the norm of the classifier increases\n– more slowly the more the closer αis to−1, i.e. the more the data distributions concentrates\nat the decision boundary.\nSince the risk of classification problems scales the same way in the nor m of the classifier\nindependently of which loss function is used, we may consider the mat hematically most conve-\nnient setting of one-sided L2-approximation since we more easily obtain a 1 /merror rate in the\nestimates.\nDefinition 4.3. We define the risk decay function\nρ: (0,∞)→[0,∞), ρ(Q) = inf\n/bardblh/bardblB≤Q/integraldisplay\nRdmax{0,1−yxh(x)}2P(dx).\n26 WEINAN E AND STEPHAN WOJTOWYTSCH\nBy the universal approximation theorem, any continuous function on a compact set can be\napproximated arbitrarily well by Barron functions [Cyb89]. For any probability measure Pon\nthe Borelσ-algebra on Rd, continuous functions lie dense in L2(P) [FL07, Theorem 2.11], so\nthe function 1 C+−1C−can be approximated arbitrarily well in L2(P) by Barron functions. In\nparticular,\nlim\nQ→∞ρ(Q) = inf\nQ>0ρ(Q) = inf\nh∈B/integraldisplay\nRdmax{0,1−yxh(x)}2P(dx) = 0.\nThe important quantity is the rate of decay of ρ. Note that ρis monotone decreasing and\nρ(Q) = 0 for some Q>0 if and only if the classification problem is (strongly) solvable, i.e. if and\nonly if the classes are well separated. Using this formalism, a more ge neral version of Corollary\n2.19 can be proved. Similar results are obtained for L2-regression in Appendix A.\nLemma 4.4 (Mostly correct classification) .Letm∈Nand(P,C+,C−)a binary classification\nproblem with risk decay function ρ. Then there exists a two-layer neural network hmwithm\nneurons such that\nP/parenleftbig/braceleftbig\nx∈Rd:yx·hm(x)<0/bracerightbig/parenrightbig\n≤2 inf\nQ>0/bracketleftbigg\nρ(Q)+Q2max{1,R}2\nm/bracketrightbigg\n. (4.1)\nProof.LethQ∈ Bsuch that /ba∇dblhQ/ba∇dblB≤Qand\nρ(Q) =/integraldisplay\nRdmax{0,1−yxhQ(x)}2P(dx).\nChoosehmlike in the direct approximation theorem. Then\nP/parenleftbig/braceleftbig\nx∈Rd:yx·hm(x)<0/bracerightbig/parenrightbig\n≤/integraldisplay\nRdmax{0,1−yxhm(x)}2P(dx)\n≤/integraldisplay\nRdmax/braceleftbig\n0,1−yxhQ(x)+yx(hQ−hm)(x)/bracerightbig2P(dx)\n≤2/integraldisplay\nRdmax{0,1−yxhQ(x)}2+|hQ−hm|2(x)P(dx).\n/square\nExample 4.5.Assume that ρ(Q) =cQ−γfor someγ >0. Then\n¯Q∈argmin\nQ>0/bracketleftbigg\nρ(Q)+Q2max{1,R}2\nm/bracketrightbigg\n⇔ρ′(Q)+2Qmax{1,R}2\nm= 0\n⇔ −cγQ−γ−2+2 max{1,R}2\nm= 0\n⇔Q=/parenleftbigg2 max{1,R}2\ncγm/parenrightbigg1\nγ+2\nand thus\ninf\nQ>0/bracketleftbigg\nρ(Q)+Q2\nm/bracketrightbigg\n=c/parenleftbigg2 max{1,R}2\ncγm/parenrightbiggγ\nγ+2\n+/parenleftbigg2 max{1,R}2\ncγm/parenrightbigg−2\nγ+21\nm\n=/bracketleftBigg/parenleftbigg2 max{1,R}2\nγ/parenrightbiggγ\nγ+2\n+/parenleftbiggγ\n2 max{1,R}2/parenrightbigg2\nγ+2/bracketrightBigg\nc2\nγ+2m−γ\nγ+2.\nThe correct classification bound deteriorates as γ→0 and asymptotically recovers the bound\nm−1for strongly solvable problems in the limit γ→ ∞.\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 27\nAs seen in Example 4.2, the constant γcan be arbitrarily close to zero even in one dimension\nif the data distribution has large mass close to the decision boundary . Thus we do not expect\nto be able to prove specific lack of ‘curse of dimensionality’ results sin ce even weakly solvable\none-dimensional problems can be very hard to solve. 
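To make the tradeoff of Example 4.5 concrete, the following short sketch (added for illustration, not part of the original argument) minimizes the bound of Lemma 4.4 over a grid in Q for the hypothetical decay rho(Q) = c*Q^(-gamma), with max{1,R} = 1 and arbitrarily chosen constants c = 1, gamma = 1/2; the fitted exponent of m matches the predicted rate m^(-gamma/(gamma+2)).

import numpy as np

c, gamma = 1.0, 0.5                       # assumed constants, chosen for illustration
Q = np.logspace(-2, 6, 4001)              # grid over which the bound is minimized

def best_bound(m):
    # inf_Q [ rho(Q) + Q^2 / m ] with rho(Q) = c * Q^(-gamma) and max{1, R} = 1
    return np.min(c * Q**(-gamma) + Q**2 / m)

ms = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
vals = np.array([best_bound(m) for m in ms])
slope = np.polyfit(np.log(ms), np.log(vals), 1)[0]
print(round(slope, 3))                    # close to -gamma/(gamma+2) = -0.2

Repeating the experiment with larger gamma moves the fitted exponent toward -1, consistent with the remark that the bound asymptotically recovers the m^(-1) rate of strongly solvable problems.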
Stricter geom etric assumptions need to be\nimposed to obtain more precise results, like the boundary behaviour of a probability density with\nrespect to Lebesgue measure or the Hausdorff measure on a manif old, orL∞bounds.\nWe present a priori estimates for regularized loss functionals in the setting of Theorem 2.39.\nIfρdecays quickly, one can consider a regularized loss functional with q uadratic penalty /hatwideR∗\nn,λ\nto obtain a faster rate in mat the expense of larger constants.\nIfρdecays slowly, the linear penalty in /hatwideRn,λseems preferable since the rate in mcannot\nbe improved beyond a certain threshold, and the constant in front of the probabilistic term/radicalbig\nlog(2/δ)n−1is smaller.\nLemma 4.6 (Functions of low loss) .Assume that h∗∈ BandLhas Lipschitz constant [L]. For\nanyQ>0there exists a two-layer network hmwithmneurons such that\n/ba∇dblhm/ba∇dblB≤Q,R(hm)≤inf\n/bardblh/bardblB≤QR(h)+[L]·Qmax{1,R}√m\nThe Lemma is proved exactly like Lemma 2.31. The quantity inf /bardblh/bardblB≤λQR(h) is related to\nρ(λQ) and the two agree if L(z) = max{0,1+z}2(which is not a Lipschitz function).\nTheorem 4.7 (Aprioriestimates) .LetLbe a loss function with Lipschitz-constant [L]. Consider\nthe regularized (empirical) risk functional\n/hatwideRn,λ(a,w,b) =1\nnn/summationdisplay\ni=1L/parenleftbig\n−yxif(a,w,b)(xi)/parenrightbig\n+λ\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\nwhereλ=max{1,R}√m. For anyδ∈(0,1), with probability at least 1−δover the choice of iid data\npointsxisampled from P, the minimizer (ˆa,ˆw,ˆb)satisfies\nR(ˆa,ˆw,ˆb)≤inf\nQ>0/bracketleftbigg\nmin\n/bardblh/bardblB≤QR(h)+2Qmax{1,R}√m/bracketrightbigg\n+2 max{1,R}/radicalbigg\nlog(2d+2)\nn\n+/bracketleftbigg\nL(0)+[L] inf\nQ>0/bracketleftbigg√mmin/bardblh/bardblB≤QR(h)\nmax{1,R}+2Q/bracketrightbigg/bracketrightbigg/radicalbigg\nlog(2/δ)\nn\nA more concrete bound under assumptions on ρin the case of L2-regression can be found in\nAppendix A.\nProof.LetQ>0 and choose h∗∈argmin/bardblh/bardblB≤QR(h). Lethmbe like in the direct approxima-\ntion theorem. Then\n/hatwideRn,λ(ˆa,ˆw,ˆb)≤ Rn,λ(a,w,b)≤min\n/bardblh/bardblB≤QR(h)+2λQ.\nWe find that\n/ba∇dblh(ˆa,ˆw,ˆb)/ba∇dblB≤inf\nQ>0/bracketleftbigg√mmin/bardblh/bardblB≤QR(h)\nmax{1,R}+2Q/bracketrightbigg\n,\n/hatwideRn(h(ˆa,ˆw,ˆb))≤inf\nQ>0/bracketleftbigg\nmin\n/bardblh/bardblB≤QR(h)+2Qmax{1,R}√m/bracketrightbigg\n.\nUsing the Rademacher generalization bound, the result is proved. /square\nIn particular, the Lemma gives insight into the fact that the penalty parameterλ=max{1,R}√m\ncan successfully be chosen independently of unknown quantities like P,Qandρ.\n28 WEINAN E AND STEPHAN WOJTOWYTSCH\nRemark 4.8.The same strategies as above can of course be used to prove most ly correct classi-\nfication results or a priori bounds in other function classes or for m ulti-label classification and\ndifferent loss functions.\n5.Concluding Remarks\nWe have presented a simple framework for classification problems an d given a priori error\nbounds for regularized risk functionals in the context of two-layer neural networks. The main\nbounds are given in\n(1) Theorem 2.37 for hinge loss,\n(2) Theorems 2.39 and 2.40 for a regularized hinge loss with exponent ial tail,\n(3) Theorems 3.11 and 3.13 for multi-label classification, and\n(4) Theorem 4.7 for general Lipschitz-continuous loss functions in binary classification prob-\nlems of infinite complexity.\nThe results cover the most relevant cases, but are not exhaustiv e. 
The same techniques can\neasily be extended to other cases. The extension to neural netwo rks with multiple hidden layers\nis discussed in Section 2.5. The very pessimistic complexity bound discu ssed in Theorem 2.11\nsuggests that there remain hard-to-solve classification problems . A major task is therefore to\ncategorizeclassificationproblemswhichcanbesolvedefficientlyusing neuralnetworksandobtain\ncomplexity estimates for real data sets.\nAcknowledgments\nThe authors would like to thank Chao Ma and Lei Wu for helpful discus sions. This work was\nin part supported by a gift to Princeton University from iFlytek.\nAppendix A.Regression problems with non-Barron target functions\nAs suggested in the introduction, also classification problems can be approached by L2-\nregression where the target function takes discrete values on th e support of the data distribution\nP. If the classes C+,C−have a positive spatial distance, the target function coincides with\na Barron function on spt P, while the target function fails to be continuous and in particular\nBarron if the classes touch.\nWe extend the analysis of infinite complexity classification problems to the general case of\nL2-regression. For a target function f∗∈L2(P) consider the approximation error decay function\nρ(Q) = inf\n/bardblf/bardblB≤Q/ba∇dblf−f∗/ba∇dbl2\nL2(P).\nLemma A.1 (Approximation error decay estimates) .Assume that f∗is a Lipschitz-continuous\nfunction on [0,1]dwith Lipschitz constant at most 1. Then\n(1) IfPis the uniform distribution on the unit cube, there exists f∗such that for all γ >4\nd−2\nthere is a sequence of scales Qn→ ∞such thatρ(Qn)≥CQ−γ\nnfor alln∈N.\n(2) For any distribution P, the estimate ρ(Q)≤CdQ−2/dholds.\nProof.Without loss of generality, we may assume that f∗is 1-Lipschitz continuous on Rd. We\nnote that the mollified function fε=ηε∗f∗satisfies\n/ba∇dblfε−f∗/ba∇dblL2(P)≤ /ba∇dblfε−f∗/ba∇dblL∞([0,1]d)\n≤sup\nx/vextendsingle/vextendsingle/vextendsingle/vextendsinglef∗(x)−/integraldisplay\nRdf∗(y)η/parenleftbiggx−y\nε/parenrightbigg\nε−ddy/vextendsingle/vextendsingle/vextendsingle/vextendsingle\n≤sup\nx/integraldisplay\nRd/vextendsingle/vextendsinglef∗(x)−f∗(y)/vextendsingle/vextendsingleη/parenleftbiggx−y\nε/parenrightbigg\nε−ddy\nA PRIORI ESTIMATES FOR CLASSIFICATION PROBLEMS 29\n≤ε/integraldisplay\nRd|z|η(z) dz.\nwherecd,ηis verysmallfor large d. On the otherhand /ba∇dblfε/ba∇dblB≤Cε−dlikein the proofofTheorem\n2.11. /square\nThus (if the target function is at least Lipschitz continuous), we he uristically expect behaviour\nlikeρ(Q)∼Q−αfor someα>2\nd.\nLemma A.2 (Approximation by small networks) .Assume that sptP⊆[−R,R]d. For every\nm∈N, there exists a neural network fmwithmneurons such that\n(A.1) /ba∇dblf∗−fm/ba∇dbl2\nL2(P)≤inf\nQ>02/bracketleftbigg\nρ(Q)+Q2max{1,R}2\nm/bracketrightbigg\n.\nThe proof follows from the direct approximation theorem and the ine quality/ba∇dblf∗−fm/ba∇dbl2\nL2≤\n2/bracketleftbig\n/ba∇dblf∗−f/ba∇dbl2\nL2+/ba∇dblf−fm/ba∇dbl2\nL2/bracketrightbig\n. Inparticular,if ρ(Q)≤¯cQ−α, wecanoptimize Q=/parenleftBig\nmα¯c\n2 max{1,R}2/parenrightBig1\n2+α\nto see that there exists fmsuch that\n1\n2/ba∇dblf∗−fm/ba∇dbl2\nL2(P)≤¯c/parenleftbiggmα¯c\n2 max{1,R}2/parenrightbigg−α\n2+α\n+max{1,R}2\nm/parenleftbiggmα¯c\n2 max{1,R}2/parenrightbigg2\n2+α\n≤/bracketleftBigg/parenleftbigg2\nα/parenrightbiggα\n2+α\n+/parenleftBigα\n2/parenrightBig2\n2+α/bracketrightBigg\n¯c2\n2+α/parenleftbiggmax{1,R}2\nm/parenrightbiggα\n2+α\n. 
(A.2)\nLet\n/hatwideRn(a,w,b) =1\nnn/summationdisplay\ni=1/vextendsingle/vextendsinglef∗−f(a,w,b)/vextendsingle/vextendsingle2(xi),/hatwideRn,λ(a,w)+/bracketleftBigg\nλ\nmm/summationdisplay\ni=1|ai|/ba∇dblwi/ba∇dbl/bracketrightBigg2\nforλ=max{1,R}√m. For convenience, we abbreviate\nCα= 2/bracketleftBigg/parenleftbigg2\nα/parenrightbiggα\n2+α\n+/parenleftBigα\n2/parenrightBig2\n2+α/bracketrightBigg\n¯c2\n2+α.\nLemma A.3 (A priori estimate) .Assume that ρ(Q)≤¯cQ−α. Then with probability at least\n1−δover the choice of an iid training sample, the minimizer (ˆa,ˆw,ˆb)of the regularized risk\nfunctional/hatwideRn,λsatisfies the a priori error estimate\n/ba∇dblf∗−f(ˆa,ˆw,ˆb)/ba∇dbl2\nL2(P)≤2Cα/parenleftbiggmax{1,R}2\nm/parenrightbiggα\n2+α\n+4Cα/parenleftbiggmα¯c\n2 max{1,R}2/parenrightbigg1\n2+α/radicalbigg\nlog(2d+2)\nn\n+4C2\nα/parenleftbiggmα¯c\n2 max{1,R}2/parenrightbigg2\n2+α\nmax{1,R}2/radicalbigg\nlog(2/δ)\nn\nThe proof closely follows that of Theorem 2.40: We use a candidate fu nctionfmsuch that\n/ba∇dblfm/ba∇dblB≤/parenleftbiggmα¯c\n2 max{1,R}2/parenrightbigg1\n2+α\n,/ba∇dblf∗−fm/ba∇dbl2\nL2(Pn)≤Cα/parenleftbiggmax{1,R}2\nm/parenrightbiggα\n2+α\nas an energy competitor, obtain norm bounds for the true minimizer and use Rademacher gener-\nalization bounds. Note that the estimate loses meaning as α→0 and that the term m2\n2+αn−1/2\nmay not go to zero in the overparametrized regime if αis too small.\n30 WEINAN E AND STEPHAN WOJTOWYTSCH\nReferences\n[Bac17] F. Bach. Breaking the curse of dimensionality with c onvex neural networks. The Journal of Machine\nLearning Research , 18(1):629–681, 2017.\n[Bar93] A. R. Barron. Universal approximation bounds for su perpositions of a sigmoidal function. IEEE\nTransactions on Information theory , 39(3):930–945, 1993.\n[BJS20] L. Berlyand, P.-E. Jabin, and C. A. Safsten. Stabili ty for the training of deep neural networks and\nother classifiers. arXiv:2002.04122 [math.AP] , 02 2020.\n[CB20] L. Chizat and F. Bach. Implicit bias of gradient desce nt for wide two-layer neural networks trained\nwith the logistic loss. arxiv:2002.04486 [math.OC] , 2020.\n[Cyb89] G. Cybenko. Approximation by superpositions of a si gmoidal function. Mathematics of control,\nsignals and systems , 2(4):303–314, 1989.\n[EMW18] W. E, C. Ma, and L. Wu. A priori estimates of the popula tion risk for two-layer neural networks.\nComm. Math. Sci. , 17(5):1407 – 1425 (2019), arxiv:1810.06397 [cs.LG] (2018 ).\n[EMW19a] W. E, C. Ma, and L. Wu. Barron spaces and the composit ional function spaces for neural network\nmodels. arXiv:1906.08039 [cs.LG] , 2019.\n[EMW19b] W. E, C. Ma, and L. Wu. A priori estimates of the popul ation risk for two-layer neural networks.\nComm. Math. Sci. , 17(5):1407 – 1425, 2019.\n[EMWW20] W. E, C. Ma, S. Wojtowytsch, and L. Wu. Towards a math ematical understanding of neural network-\nbased machine learning: what we know and what we don’t. In preparation , 2020.\n[EW20a] W. E and S. Wojtowytsch. Kolmogorov width decay and p oor approximators in machine learn-\ning: Shallow neural networks, random feature models and neu ral tangent kernels. arXiv:2005.10807\n[math.FA] , 2020.\n[EW20b] W. E and S. Wojtowytsch. On the Banach spaces associa ted with multi-layer ReLU networks of\ninfinite width. arXiv:2007.15623 [stat.ML] , 2020.\n[EW20c] W. E and S. Wojtowytsch. Representation formulas an d pointwise properties for barron functions.\narXiv:2006.05982 [stat.ML] , 2020.\n[FL07] I. Fonseca and G. Leoni. Modern Methods in the Calculus of Variations: LpSpaces. 
Springer Science & Business Media, 2007.
[Mak98] Y. Makovoz. Uniform approximation by neural networks. Journal of Approximation Theory, 95(2):215–228, 1998.
[Mau16] A. Maurer. A vector-contraction inequality for Rademacher complexities. In International Conference on Algorithmic Learning Theory, pages 3–17. Springer, 2016.
[SSBD14] S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
Weinan E, Program for Applied and Computational Mathematics and Department of Mathematics, Princeton University, Princeton, NJ 08544
E-mail address: [email protected]
Stephan Wojtowytsch, Program for Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544
E-mail address: [email protected]",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Hqw5VfaD5Pq",
"year": null,
"venue": "MSML 2021",
"pdf_link": "https://proceedings.mlr.press/v145/e22a/e22a.pdf",
"forum_link": "https://openreview.net/forum?id=Hqw5VfaD5Pq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Some observations on high-dimensional partial differential equations with Barron data",
"authors": [
"Weinan E",
"Stephan Wojtowytsch"
],
"abstract": "We use explicit representation formulas to show that solutions to certain partial differential equa- tions lie in Barron spaces or multilayer spaces if the PDE data lie in such function spaces. Conse- quently, these solutions can be represented efficiently using artificial neural networks, even in high dimension. Conversely, we present examples in which the solution fails to lie in the function space associated to a neural network under consideration.",
"keywords": [],
"raw_extracted_content": "Proceedings of Machine Learning Research vol 107:253–269, 2021 2nd Annual Conference on Mathematical and Scientific Machine Learning\nSome observations on high-dimensional partial differential equations\nwith Barron data\nWeinan E WEINAN @MATH .PRINCETON .EDU\nProgram for Applied and Computational Mathematics\nPrinceton University\nPrinceton, NJ 08544\nStephan Wojtowytsch STEPHANW @PRINCETON .EDU\nProgram for Applied and Computational Mathematics\nPrinceton University\nPrinceton, NJ 08544\nEditors: Joan Bruna, Jan S Hesthaven, Lenka Zdeborova\nAbstract\nWe use explicit representation formulas to show that solutions to certain partial differential equa-\ntions lie in Barron spaces or multilayer spaces if the PDE data lie in such function spaces. Conse-\nquently, these solutions can be represented efficiently using artificial neural networks, even in high\ndimension. Conversely, we present examples in which the solution fails to lie in the function space\nassociated to a neural network under consideration.\n1. Introduction\nNeural network-based machine learning has had impressive success in solving very high dimen-\nsional PDEs, see e.g. E et al. (2017); E and Yu (2018); Han et al. (2018); Sirignano and Spiliopoulos\n(2018); Chaudhari et al. (2018); Wang et al. (2018); Raissi et al. (2019); Bhattacharya et al. (2020);\nMao et al. (2020); Kissas et al. (2020); Sun et al. (2020); Chen et al. (2020) and many others. The\nfact that such high dimensional PDEs can be solved with relative ease suggests that the solutions\nof these PDEs must have relatively low complexity in some sense. The important question that we\nneed to address is: How can we quantify such low complexity?\nIn low dimension, we are accustomed to using smoothness to quantify complexity. Roughly\nspeaking, a function has low complexity if it can be easily approximated using polynomials or\npiecewise polynomials, and the convergence rate of the polynomial or piecewise polynomial ap-\nproximation is characterized by the smoothness of the target function. A host of function spaces\nhave been proposed, such as the Sobolev spaces, Besov spaces, H ¨older- andCkspaces, etc, to\nquantify the smoothness of functions. Based on these, the analysis of numerical approximations to\nsolutions of PDEs can be carried out in three steps:\n1. Approximation theory: relating the convergence rate to the function class that the target func-\ntion belongs to. This stage is purely about function approximation. No PDEs are involved.\n2. Regularity theory: characterizing the function class that solutions of a given PDE belong to.\nThis is pure PDE theory.\n3. Numerical analysis: proving that a numerical algorithm is convergent, stable, and characterize\nits order of accuracy.\n© 2021 W. E & S. Wojtowytsch.\nE W OJTOWYTSCH\nWe aim to carry out the same program in high dimension, with polynomials and piecewise poly-\nnomials replaced by neural networks. In high dimension, the most important question is whether\nthere is a “curse of dimensionality”, i.e. whether the convergence rate deteriorates as the dimen-\nsionality is increased.\nIn this regard, our gold standard is the Monte Carlo algorithm for computing expectations. 
The\ntask of approximation theory is to establish similar results for function approximation for machine\nlearning models, namely, given a particular machine learning model, one would like to identify the\nclass of functions for which the given machine learning model can approximate with dimension-\nindependent convergence rate. It is well known that the number of parameters of a neural network\nto approximate a general Ck-function in dimension din theL1-topology to accuracy \"generally\nscales like\"\u0000d\nkYarotsky (2017) (up to a logarithmic factor). Classical (H ¨older, Sobolev, Besov, . . . )\nfunction spaces thus face the “curse of dimensionality” and therefore are not suited for our purpose.\nHowever, Monte Carlo-like convergence rates can indeed be established for the most interest-\ning machine learning models by identifying the appropriate function spaces. For the random fea-\nture model, the appropriate function spaces are the extensions of the classical reproducing kernel\nHilbert space E et al. (2020). For two-layer neural networks, those function spaces are the relatively\nwell-studied Barron spaces E et al. (2019); E and Wojtowytsch (2020a), while for deeper neural\nnetworks, tree-like function spaces E and Wojtowytsch (2020b) seem to provide good candidates (at\nleast for ReLU-activated networks). These spaces combine two convenient properties: Functions\ncan be approximated well using a small set of parameters, and few data samples suffice to assess\nwhether a function is a suitable candidate as the solution to a minimization problem.\nThis paper is concerned in the second question listed above in high dimension. For the reasons\nstated above, we must consider the question whether the solution of a PDE lie in a (Barron or\ntree-like) function space if the data lie in such a space. In other words, if the problem data are\nrepresented by a neural network, can the same be said for the solution, and is the solution network\nof the same depth as the data network or deeper? We believe that these should be the among the\nprincipal considerations of a modern regularity theory for high-dimensional PDEs. We note that a\nsomewhat different approach to a similar question is taken in Grohs et al. (2018); Hutzenthaler et al.\n(2020).\nIn this paper, we show that solutions to three prototypical PDEs (screened Poisson equation,\nheat equation, viscous Hamilton-Jacobi equation) lie in appropriate function spaces associated with\nneural network models. These equations are considered on the whole space and the argument is\nbased on explicit representation formulas. In bounded domains on the other hand, we show that\neven harmonic functions with Barron function boundary data may fail to be Barron functions, and\nwe discuss obstacles in trying to replicate classical regularity theory in spaces of neural networks.\nWhile we do not claim great generality in the problems we treat, we cover some important\nspecial cases and counterexamples. As a corollary, equations whose solutions lie in a (Barron\nor tree-like) function space associated to neural networks cannot be considered as fair benchmark\nproblems for computational solvers for general elliptic or parabolic PDEs: If the data are represented\nby a neural network, then so is the exact solution, and approximate solutions converge to the analytic\nsolution at a dimension-independent rate as the number of parameters approaches infinity. 
The\nperformance is therefore likely to be much better in this setting than in potential applications without\nthis high degree of compatibility.\nThe article is structured as follows. In the remainder of the introduction, we give a brief sum-\nmary of some results concerning Barron space and tree-like function spaces. In Section 2, we\n254\nHIGH-DIMENSIONAL PDE S WITH BARRON DATA\nconsider the Poisson equation, screened Poisson equation, heat equation, and viscous Hamilton-\nJacobi equation on the whole space. In two cases, we show that solutions lie in Barron space and in\none case, solutions lie in a tree-like function space of depth four. In Section 3, we consider equa-\ntions on bounded domains and demonstrate that boundary values can make a big difference from the\nperspective of function representation. We also discuss the main philosophical differences between\nclassical function spaces and spaces of neural network functions, which a novel regularity theory\nneeds to account for.\n1.1. Previous work\nThe class of partial differential equations is diverse enough to ensure that no single theory captures\nthe properties of all equations, and the existing literature is too large to be reviewed here. Even the\nnotion of what a solution is depends on the equation under consideration, giving rise to concepts\nsuch as weak solutions, viscosity solutions, entropy solutions etc. The numerical schemes to find\nthese solutions are as diverse as the equations themselves. As a rule, many methods perform well\nin low spatial dimension d2f1;2;3g, which captures many problems of applied science. Other\nproblems, such as the Boltzmann equation, Schr ¨odinger equations for many particle systems, or\nBlack-Scholes equations in mathematical science, are posed in much higher dimension, and require\ndifferent methods.\nIn low dimension, elliptic and parabolic equations are often considered in Sobolev spaces. Finite\nelement methods, which find approximate solutions to a PDE in finite-dimensional ansatz spaces,\nare empirically successful and allow rigorous a priori and a posteriori error analysis. The ansatz\nspaces are often chosen as spaces of piecewise linear functions on the triangulation of a domain. As\na grid based method, the curse of dimensionality renders this approach unusable when the dimension\nbecomes moderately high.\nEvading the curse of dimensionality when solving high-dimensional PDEs requires strong as-\nsumptions. If the right hand side even of the Poisson equation is merely in C0;\u000borL2, the solution\ncannot generally be more regular than C2;\u000borH2. These spaces are so large (e.g. in terms of\nKolmogorov width) that the curse of dimensionality cannot be evaded.\nBased on a hierarchical decomposition of approximating spaces, sparse grid methods can be\nused to solve equations in higher dimension. In their theoretical analysis, the role of classical\nSobolev spaces is taken by Sobolev spaces with dominating mixed derivatives (imagine spaces\nwhere@1@2uis controlled, but @1@1uis not). These methods have been used successfully in medium\nhigh dimension, although the a priori regularity estimates which underly classical finite element\nanalysis are less developed here to the best of our knowledge. An introduction can be found e.g. in\nGarcke (2012).\nFunction classes of neural networks have a non-linear structure, which allows them to avoid the\ncurse of dimensionality in approximating functions in large classes where anylinear model suffers\nfrom the CoD, see e.g. Barron (1993). 
Following their success in data science, it is a natural question\nwhether they have the same potential in high-dimensional PDEs.\nIt has been observed by Grohs et al. (2018); Jentzen et al. (2018); Hutzenthaler et al. (2020);\nHornung et al. (2020); Darbon et al. (2020) that deep neural networks can approximate (viscosity)\nsolutions to different types of partial differential equations without the curse of dimensionality,\nassuming that the problem data are given by neural networks as well. Generally, proofs of these\nresults are based on links between the PDE and stochastic analysis, and showing that solutions to\n255\nE W OJTOWYTSCH\nSDEs can be approximated sufficiently well by neural networks with the Monte-Carlo rate, even in\nhigh dimension. A more extensive list of empirical works can be found in these references.\nIn this article, we follow a similar philosophy of explicit representation, but we consider a more\nrestricted and technically much simpler setting. In this setting, we prove stronger results:\n1. The neural networks we consider have one hidden layer (linear PDEs) or three hidden layers\n(viscous Hamilton-Jacobi equation), and are therefore much shallower than the deep networks\nconsidered elsewhere.\n2. In certain cases, solutions to the PDEs are in Barron space. This not oly implies that they can\nbe approximated without the CoD, but also that integral quantities can be estimated well using\nfew samples, even if the solution of the PDE depends on these data samples. This follows\nfrom the fact that the unit ball in Barron space has low Rademacher complexity.\n3. On the other hand, we show by way of counterexample that the solution of the Laplace equa-\ntion in the unit ball with Barron boundary data is generally notgiven by a Barron function,\npossibly shedding light on the requirement of depth.\nOur results can be viewed more in the context of function-space based regularity theory, whereas\nexisting results directly address the approximation of solutions without an intermediate step.\n1.2. A brief review of Barron and tree-like function spaces\nTwo-layer neural networks can be represented as\nfm(x) =1\nmmX\ni=1ai\u001b(wT\nix+bi): (1)\nThe parameters (ai;wi;bi)2R\u0002Rd\u0002Rare referred to as the weights of the network and \u001b:\nR!Ras the activation function. In some parts of this article, we will focus on the popular ReLU-\nactivation function \u001b(z) = maxfz;0g, but many arguments go through for more general activation\n\u001b.\nIn the infinite width limit for the hidden layer, the normalized sum is replaced by a probability\nintegral\nf\u0019(x) =Z\nRd+2a\u001b(wTx+b)\u0019(da\ndw\ndb)=E(a;w;b )\u0018\u0019\u0002\na\u001b(wTx+b)\u0003\n(2)\nOn this space, a norm is defined by\nkfkB= inf\u001aZ\nRd+2jaj\u0002\njwj+jbj\u0003\n\u0019(da\ndw\ndb) :\u0019s.t.f=f\u0019\u001b\n:\nThe formula has to be modified slightly for non-ReLU activation functions Li et al. (2020) and (E\nand Wojtowytsch, 2021, Appendix A). If for example limz!\u00061\u001b(z) =\u00061, we may choose\nkfkB= inf\u001aZ\nRd+2jaj\u0002\njwj+ 1\u0003\n\u0019(da\ndw\ndb) :\u0019s.t.f=f\u0019\u001b\n256\nHIGH-DIMENSIONAL PDE S WITH BARRON DATA\nwithout strong dependence on the bias variable b. We give some examples of functions in Barron\nspace or not in Barron space below.\nThe function space theory for multi-layer networks is more complicated, and results are cur-\nrently only available for ReLU activation. 
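Before moving on to multi-layer networks, a minimal sketch (added here for illustration, with randomly chosen placeholder weights) of how a finite network of the form (1) realizes the representation (2): the parameter measure pi is the empirical measure of the weights, and the integral defining the norm becomes a finite sum, which upper-bounds the Barron norm of f_m.

import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 64                                   # input dimension and width
a = rng.standard_normal(m)                     # outer weights a_i
W = rng.standard_normal((m, d))                # inner weights w_i
b = rng.standard_normal(m)                     # biases b_i

def f_m(x):
    # two-layer ReLU network (1): f_m(x) = (1/m) sum_i a_i * relu(w_i . x + b_i)
    return (a[:, None] * np.maximum(W @ x.T + b[:, None], 0.0)).mean(axis=0)

# path norm (1/m) sum_i |a_i| (|w_i| + |b_i|): an upper bound for the Barron norm of f_m
path_norm = np.mean(np.abs(a) * (np.linalg.norm(W, axis=1) + np.abs(b)))
print(f_m(rng.standard_normal((3, d))), round(float(path_norm), 3))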
We do not go into detail concerning the scale of tree-like\nfunction spacesWLassociated to multi-layer neural networks here, but note the following key\nresults.\n1.W1=B, i.e. the tree-like function space of depth 1coincides with Barron space.\n2. Iff:Rk!Randg:Rd!Rkare functions in the tree-like function spaces W`and\nWLrespectively (componentwise), then the composition f\u000eg:Rd!Ris in the tree-like\nfunction spaceWL+`and the estimate\nkf\u000egkWL+`\u0014max\n1\u0014i\u0014kkgikWLkfkW`\nholds.\n3. In particular, the product of two functions f;g2WLis generally not in WL, but inWL+1\nsince the map (x;y)7!xyis in Barron space (on bounded sets in R2).\nAll results can be found in E and Wojtowytsch (2020b). We recall the following properties of\nBarron space:\n1. Iff2Hs(Rd)fors>d\n2+ 2, thenf2B, i.e. sufficiently smooth functions admit a Barron\nrepresentation – see (E and Wojtowytsch, 2020a, Theorem 3.1), based on an argument in\n(Barron, 1993, Section IX).\n2. Barron space embeds into the space of Lipschitz-continuous functions.\n3. Iff2B, thenf=P1\ni=1fiwherefi(x) =gi(Pix+bi)\n(a)giisC1-smooth except at the origin,\n(b)Piis an orthogonal projection on a ki-dimensional subspace for some 0\u0014ki\u0014d\u00001,\n(c)biis a shift vector.\nThe proof can be found in (E and Wojtowytsch, 2020a, Lemma 5.2).\n4. For a Barron function fand a probability measure Pon[\u0000R;R]d, there exist mparameters\n(ai;wi;bi)2Rd+2and a two-layer neural network fmas in (1) such that\nkf\u0000fmkL2(P)\u0014maxf1;RgkfkBpm(3)\nor\nkf\u0000fmkL1[\u0000R;R]\u0014dmaxf1;RgkfkBpm(4)\nor\nkf\u0000fmkH1((0;1)d)\u0014kfkBr\nd+ 1\nm: (5)\nThe proof in the Hilbert space cases is based on the Maurey-Jones-Barron Lemma (e.g. (Bar-\nron, 1993, Lemma 1)), whereas the proof for L1-approximation uses Rademacher complex-\nities (see e.g. E et al. (2020)).\n257\nE W OJTOWYTSCH\nExample 1 Examples of Barron functions are\n1. the single neuron activation function f(x) =a\u001b(wTx+b),\n2. the`2-norm function\nf(x) =cdZ\nSd\u00001\u001b(wTx)\u00190(dw)\nwhich is represented by the uniform distribution on the unit sphere (for ReLU activation), and\n3. any sufficiently smooth function on Rd.\nExamples of functions which are not Barron are all functions which fail to be Lipschitz continu-\nous and functions which are non-differentiable on a set which is not compatible with the structure\ntheorem, e.g. f(x) = maxfx1;:::;xdgandf(x) = dist`2(x;Sd\u00001).\nRemark 1 Define the auxiliary norm\nkfkaux:=Z\nRdj^fj(\u0018)\u0001j\u0018jd\u0018\non functions f:Rd!R. According to Barron (1993), the estimate kfkB\u0014kfkauxholds, and\nmany early works on neural networks used k\u0001kauxin place of the Barron norm. Unlike the auxiliary\nnorm, the modern Barron norm is automatically adaptive to the activation function \u001band Barron\nspace is much larger than the set of functions on which the auxiliary norm is finite (which is a\nseparable subset of the non-separable Barron space).\nMost importantly, the auxiliary norm is implicitly dimension-dependent. If g:Rk!Ris a\nBarron function and f:Rd!Ris defined as f(x) =g(x1;:::;xk)fork <d , then the auxiliary\nnorm offis automatically infinite (since fdoes not decay at infinity). Even when considering\nbounded sets and extension theorems, the auxiliary norm is much larger than the Barron norm,\nwhich allows us to capture the dependence on low-dimensional structures efficiently.\n2. Prototypical equations\nIn this article, we study four PDEs for which explicit representation formulas are available. 
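Before turning to these equations, a brief numerical illustration (an added sketch, not part of the original text) of the Monte Carlo-type approximation property (3) for the norm function of Example 1: sampling m directions from the uniform distribution pi_0 on the sphere yields a two-layer ReLU network whose L^2-error decays roughly like m^(-1/2); the normalizing constant is estimated by sampling rather than computed in closed form.

import numpy as np

rng = np.random.default_rng(0)
d = 20
x = rng.standard_normal((200, d))                 # test points
fx = np.linalg.norm(x, axis=1)                    # target f(x) = |x|

def sphere(n):
    w = rng.standard_normal((n, d))
    return w / np.linalg.norm(w, axis=1, keepdims=True)

# by rotational symmetry E_w[relu(w . x)] = kappa_d |x| for w uniform on the sphere
kappa = float(np.maximum(sphere(200_000)[:, 0], 0.0).mean())

for m in (10, 100, 1000, 10_000):
    W = sphere(m)
    fm = np.maximum(W @ x.T, 0.0).mean(axis=0) / kappa
    print(m, round(float(np.sqrt(np.mean((fm - fx) ** 2))), 3))
# the root-mean-square error shrinks roughly like m^(-1/2)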
The pro-\ntotypical examples include a linear elliptic, linear parabolic, and viscous Hamilton-Jacobi equation.\nThe key-ingredient in all considerations is a Green’s function representation with a translation-\ninvariant and rapidly decaying Green’s function.\n2.1. The screened Poisson equation\nThe fundamental solution G(x) =cdjxj2\u0000dof\u0000\u0001G=\u000e0onRd(ford >2) decays so slowly at\ninfinity that we can only use it to solve the Poisson equation\n\u0000\u0001u=f (6)\niffis compactly supported (or at least decays rapidly at infinity), since otherwise the convolution\nintegral fails to exist. Neither condition is particularly compatible with the superposition represen-\ntation of one-dimensional profiles which is characteristic of neural networks. As a model problem\nfor second order linear elliptic equations, we therefore study the screened Poisson equation\n(\u0000\u0001 +\u00152)u=f (7)\n258\nHIGH-DIMENSIONAL PDE S WITH BARRON DATA\non the whole space Rdfor some\u0015 > 0. Solutions are given explicitly as convolutions with the\nfundamental solution of (7) and – for large dimension – as superpositions of one-dimensional solu-\ntions..\nLemma 2 Assume that f2B in dimension dandusolves (7). Then\nkukB\u0014\u0015\u00002kfkB (8)\nif\u001bhas finite limits at\u00061 and\nkukB\u0014\u0002\n\u0015\u00002+ 2\u0015\u00003\u0003\nkfkB: (9)\nif\u001b= ReLU .\nProof [Proof of Lemma 2] For d= 3, the fundamental solution G(x) =e\u0000\u0015jxj\n4\u0019jxjof the screened\nPoisson equation (7) is well-known. Observe that\n1\n4\u0019Z\nR3e\u0000\u0015jyjjyjdy=Z1\n0e\u0000\u0015rr\u00001r3\u00001dr\n=\u0015\u00002Z1\n0r\u0015re\u0000\u0015r\u0015dr\n=\u0015\u00002\n1\n4\u0019Z\nR3e\u0000\u0015jyjdy= 2\u0015\u00003:\nThus, ifd= 3andf(x) =a\u001b(w1x1+b)we have\nu(x) =a\n4\u0019Z\nR3\u001b(w1(x1\u0000y1) +b)e\u0000\u0015jyj\njyjdy\nkukB\u0014jajZ\nR3\b\njw1+jbj\t\nje\u0000\u0015jyj\njyj+jw1jjyje\u0000\u0015jyj\njyjdy\n\u0014jaj\b\njw1+jbj\t\n\u00152+2jajjw1j\n\u00153:\nIf\u001bis bounded, the bias term is replaced by 1.\nIn arbitrary dimension, the unit ball in Barron space is the closed convex hull of functions of\nthe formf(x) =a\u001b(wTx+b)such thatjaj\u0002\njwj+jbj\u0003\n\u00141(ReLU case). When considering\nthe Euclidean norm on Rd, after a rotation, we note that the solution uof (7) with right hand side\na\u001b(wT\u0001+b)satisfies\nkukB\u0014jajfjwj+jbjg\u0002\n\u0015\u00002+ 2\u0015\u00003\u0003\nsinceuonly depends on wTxand is constant in all directions orthogonal to w. The estimate follows\nin the general case by linearity.\n259\nE W OJTOWYTSCH\n2.2. The Poisson equation\nWhile the importance of the forcing term \u0015uin the argument above is clear, it is important to ask\nwhether the term is in fact necessary or just convenient. We show that in many cases, it is the latter,\nat least in naive models. Consider the equation\n\u0000\u0001u= ReLU(x1) (10)\nwhich is solved by\nu(x) =\u0000maxf0;x1g3\n6+h(x); \u0001h(x) = 0: (11)\nSince any function uin ReLU-Barron space grows at most linearly at infinity,maxf0;x1g3\n6is not\na ReLU-Barron function. This fast growth cannot be compensated by the free harmonic function\nh. Note that hcould grow at most cubically at infinity and would therefore have to be a cubic\npolynomial by Liouville’s theorem Boas and Boas (1988). Then\nlim\nt!1h(te1)\nh(\u0000te1)=\u00061\ndepending on the degree of h, soucould not be bounded by a subcubic function in both half spaces\nfx1>0gandfx1<0g. 
A very similar argument goes through for activation functions \u001bfor\nwhich the limits limz!\u00061\u001b(z)exist and are finite (with quadratic polynomials instead of cubics).\nNotably, activation functions \u001b2fsin;cos;expgdo not suffer the same obstacle since there second\nanti-derivative grows at the same rate as the function itself. Considering these activation functions\nwould likely prove fruitful, but take us back into the classical field of Fourier analysis. We pursue\na different route and allow the right hand side and solution of a PDE to be represented by neural\nnetworks with different activation functions.\nConsider aC2-smooth activation function \u001b. Then\n\u0001[a\u001b(wTx+b)] =ajwj2\u001b00(wTx+b):\nIn particular, for 0<\u000b< 1we have\n\r\r\u0001[a\u001b(wTx+b)]\r\r\nL1(Rd)=jajjwj2k\u001b00kL1;\u0002\n\u0001[a\u001b(wTx+b)]\u0003\nC0;\u000b(Rd)=jajjwj2+\u000b[\u001b00]C0;\u000b:\nFor ReLU activation and functions in Barron space, the Laplacian of a\u001b(wTx+b)is merely a\n(signed) Radon measure. On the other hand, if \u001b2C2;\u000b(R)is bounded and has bounded first and\nsecond derivatives, then we can make sense of the Laplacian in the classical sense. The same is true\nfor the superposition\nf\u0019(x) =Z\nR\u0002Rd\u0002Ra\u001b(wTx+b)\u0019(da\ndw\ndb)\nif the modified Barron norm\nZ\njaj[jwj2+\u000b+ 1]\u0019(da\ndw\ndb) (12)\nis finite. Unlike applications in data science, most settings in scientific computing require the ability\nto take derivatives, at least in a generalized sense. For elliptic problems, it seems natural to consider\n260\nHIGH-DIMENSIONAL PDE S WITH BARRON DATA\nneural networks with different activation functions to represent target function and solution to the\nPDE:\nf(x) =Z\na\u001b00(wTx+b) d\u0019\nu(x) =Z\n~a\u001b( ~w+~b) d~\u0019\nwhere Z\nj~aj[j~wj2+\u000b+ 1]d~\u0019<1;Zjaj\njwj2[jwj2+\u000b+ 1]d\u0019<1:\nAll considerations were in the simplest setting of the Laplace operator on the whole space. We will\ndiscuss the influence of boundary values below.\n2.3. The heat equation\nParabolic equations more closely resemble the screened Poisson equation than the Poisson equation:\nThe fundamental solution decays rapidly at infinity, and no fast growth at infinity is observed. The\n(physical) solution udoes not grow orders of magnitude faster than the initial condition u0at any\npositive time. Moreover, the heat kernel is a probability density for any time and in any dimension,\nso no dimension-dependent factors are expected.\nAs a prototypical example of a parabolic equation, we consider the heat equation\n\u001a(@t\u0000\u0001)u=f t> 0\nu=u0t= 0: (13)\nThe solution is given as the superposition of the solution of the homogeneous equation with non-\nzero initial values and the solution of the inhomogeneous solution with zero initial value:\nu(t;x) =uhom(t;x) +uinhom (t;x)\n=1\n(4\u0019t)d=2Z\nRdu0(y) exp\u0012\n\u0000jx\u0000yj2\n4t\u0013\ndy\n+Zt\n01\n(4\u0019(t\u0000s))d=2Z\nRdf(s;y) exp\u0012\n\u0000jx\u0000yj2\n4(t\u0000s)\u0013\ndy:\nLemma 3 Leffbe a ReLU-Barron function of tandx. Thenuhomis a Barron function of xandp\ntwith Barron norm\nkuhomkB\u00142kfkB\nFor fixed time t>0,uhomanduinhom are Barron functions of xwith\nkuhom(t;\u0001)kB\u0014\u0002\n1 +p\nt\u0003\n;kuhom(t;\u0001)kB\u0014\u0014\nt+2\n3t3=2+t2\n2+2\n5t5=2\u0015\nkfkB:\nThe proof is a direct calculation using the Green’s function. It is presented in the appendix.\n261\nE W OJTOWYTSCH\nRemark 4 The dependence onp\ntis a consequence of the parabolic scaling. 
The exponent t5=2\noccurs because the Barron norm of f(t;\u0001)as a function of xscales like maxf1;tgin the worst case\n(the time variable, now a constant, is absorbed into the bias). A further factor of tstems from the\nincreasing length of the interval over which the time-integral in the definition of uinhom is given.\nThe same argument applies under the weaker assumption that the source term fis in the\nBochner space L1\u0000\n(0;1);B(Rd)\u0001\nwith the stronger bound kuinhom (t;\u0001)kB\u0014C\u0002\nt+t3=2\u0003\n. Also in\nBarron-spaces for bounded and Lipschitz-continuous activation, the exponent 5=2can be lowered\nto3=2since the size of the bias term does not enter the Barron norm.\nWe therefore can efficiently solve the homogeneous heat equation for a Barron initial condition\nin both space and time using two-layer neural networks. For the heat equation with a source term,\nwe can solve efficiently for the heat distribution at a fixed finite time t>0.\n2.4. A viscous Hamilton-Jacobi equation\nAll PDEs we previously considered are linear, and we showed that if the right hand side is in a\nsuitable Barron space, then so is the solution. This is non-surprising as the solution of one of these\nequations with a ridge function right hand side has a ridge function solution. By linearity, the same is\ntrue for superpositions of ridge functions. This structure is highly compatible with two-layer neural\nnetworks, which are superpositions of a specific type of ridge function. The linear equations are\ninvariant under rescalings (and rotations), so all problems are reduced to ODEs or 1+1-dimensional\nPDEs.\nThe same argument cannot be applied to non-linear equations. For our final example, we study\nthe viscous Hamilton-Jacobi equation\nut\u0000\u0001u+jruj2= 0; (14)\n(@t\u0000\u0001)u=vt\n\u0000v\u0000div\u0012rv\n\u0000v\u0013\n=vt\n\u0000v\u0000\u0001v\n\u0000v\nfor which an explicit formula is available. If vsolvesvt\u0000\u0001v= 0, thenu= log(\u0000v)solves\n(@t\u0000\u0001)u=\u0000vt\nv\u0000div\u0012rv\n\u0000v\u0013\n=\u0000(@t\u0000\u0001)u\nv\u0000jrvj2\nv2= 0\u0000\f\frlog(\u0000v)\f\f2=\u0000jruj2:\nThus the solution uof (14) with initial condition u(0;\u0001) =u0is given by\nu(t;x) =\u0000log\u00121\n(4\u0019t)d=2Z\nRexp\u0012\n\u0000jx\u0000yj2\n4t\u0013\nexp (\u0000u0(y)) dy\u0013\n:\nLemma 5 Assume that u0:Rd![\f\u0000;\f+]is a Barron function. Then the solution uof(14) as a\nfunction ofp\ntandxis in the tree-like function space with 3hidden layers and\nkukW3\u0014exp(\f+\u0000\f\u0000)ku0kB:\nProof Recall that due to (E and Wojtowytsch, 2020a, Section 4.1) we have\n1.exp : [\f\u0000;\f+]!Ris a Barron function with Barron norm \u0014exp(\f+)\u0000exp(\f\u0000)\u0014\nexp(\f+), and\n262\nHIGH-DIMENSIONAL PDE S WITH BARRON DATA\n2.log : [\r\u0000;\r+]!Ris a Barron function with Barron norm1\n\r\u0000\u00001\n\r+\u00141\n\r\u0000for any 0<\r\u0000<\n\r+<1.\nThe estimates in this precise form hold for ReLU activation, but similar estimates can easily be\nobtained for more general \u001b. However, a function space theory is not yet available for more general\n\u001b.\nIf\u0000u0:Rd![\f\u0000;\f+]has Barron normku0kB(Rd)\u0014C0, then exp(\u0000u0)is a function\nwith tree-like three-layer norm \u0014exp(\f+)C0. Using the same change of variables as for the heat\nequation, we find that\nF(t;x) =1\n(4\u0019t)d=2Z\nRexp\u0012\n\u0000jx\u0000yj2\n4t\u0013\nexp (\u0000u0(y)) dy\nis a three-layer tree-like function ofp\ntandx. 
We conclude that uis a tree-like four-layer function\nofp\ntandxwith norm\nkukW3(Rd)\u00141\nexp(\f\u0000)exp(\f+)C0= exp(\f+\u0000\f\u0000)C0:\nSo a viscous Hamilton-Jacobi equation whose initial condition is a bounded Barron function can\nbe solved using a four-layer ReLU-network (but the parameters may be very large if the oscillation\nofu0is not small).\n3. On the influence of boundary values\n3.1. A counterexample on the unit ball\nIn all examples above, a critical ingredient was the translation invariance of the PDE, which breaks\ndown if we consider bounded domains or non-constant coefficients. When solving \u0000\u0001u= 0with\nboundary values f(x1)on a bounded domain, the solution never depends only on the variable x1\nunlessfis an affine function. The Barron space theory of PDEs on bounded domains is therefore\nmarkedly different from the theory in the whole space.\nLemma 6 Let\u001b(z) = maxfz;0gbe ReLU-activation and g(x) =\u001b(x1)a Barron function on Rd\nford\u00152. Denote by Bdthe unit ball in Rdand denote by uthe solution to the PDE\n\u001a\u0000\u0001u= 0 inBd\nu=gon@Bd: (15)\nIfd\u00153,uis not a Barron function on Bd.\nProof Assume for the sake of contradiction that uis a Barron function and d\u00153. Ifuis a Barron\nfunction, then uis defined (non-uniquely) on the whole space Rdby the explicit representation\nformula. We observe that u2C1\nloc(Bd)and that@1u=@1gis discontinuous on the equatorial\nsphere@Bd\\fx1= 0gsincee1is tangent to the unit sphere at this point. Thus \u0006 :=@Bd\\fx1= 0g\nis contained in the countable union of affine spaces on which uis not differentiable. If d >2,\u0006\n263\nE W OJTOWYTSCH\nis ad\u00002-dimensional curved submanifold of Rdand anyd\u00001-dimensional subspace which does\nnot coincide with the hyperplane fx1= 0gintersects \u0006in a set of measure zero (with respect to the\nd\u00002-dimensional Hausdorff measure). Thus umust be non-differentiable on the entire hyper-plane\nfx1= 0g, leading to a contradiction.\nNote that this argument is entirely specific to ReLU-activation and to two-layer neural networks.\nOn the other hand, if \u001b2C1, then alsou2C1(Bd), and thus in particular u2B (without sharp\nnorm estimates). More generally, assume we wish to solve the Laplace equation on the unit ball in\nddimensions with boundary values which are a one-dimensional profile\n\u001a\u0000\u0001u(x) = 0jxj<1\nu(x) =g(x1)jxj= 1:(16)\nWe may decompose u=g+vwhere\n\u001a\u0000\u0001v=g00(x1)jxj<1\nv= 0jxj= 1: (17)\nIf we abbreviate (x2;:::;xd) = ^x, it is clear by symmetry that vonly depends onj^xj, i.e.v=\n (x1;j^xj)where\n\u001a\u0001 +d\u00002\ny2@2 =\u0000g00(y1)y2\n1+y2\n2<1; y2>0\n = 0fy2\n1+y2\n2= 1;y2>0g: (18)\nSince solutions of the original equation are smooth, we conclude that @2 = 0 onfy2= 0g,\nmeaning that also = 0on the remaining portion fy2= 0g\\fy2\n1+y2\n2\u00141gof the boundary.\nThus the solution of Laplace’s equation with ridge function boundary values is not a ridge func-\ntion, but enjoys a high degree of symmetry nonetheless. Instead of using neural networks as one-\ndimensional functions of wTx, it may be a more successful endeavour to consider superpositions of\none-dimensional functions of the two-dimensional data\n0\n@wTx;s\njxj2\u0000\f\f\f\fwT\njwjx\f\f\f\f21\nA=\u0012\nwTx;\r\r\r\r\u0012\nI\u0000w\njwj\nw\njwj\u0013\nx\r\r\r\r\u0013\n:\nThe second component in the vector is a (ReLU-)Barron function. 
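As a small concrete check on this modified two-dimensional feature: given w and x, both components are elementary to compute, and together with w they determine ‖x‖ by Pythagoras. A short sketch (the vectors below are hypothetical):

import numpy as np

rng = np.random.default_rng(2)
d = 7
x = rng.normal(size=d)
w = rng.normal(size=d)

w_hat = w / np.linalg.norm(w)
feat1 = w @ x                                    # ridge coordinate w^T x
feat2 = np.linalg.norm(x - (w_hat @ x) * w_hat)  # norm of the projection of x onto the complement of w

# Pythagoras: |x|^2 = (w_hat . x)^2 + feat2^2
print(feat1, feat2)
print(np.isclose(np.linalg.norm(x) ** 2, (w_hat @ x) ** 2 + feat2 ** 2))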
Thus, if :R2!Ris a Barron\nfunction, then\nu(x) = \u0012\nwTx;\r\r\r\r\u0012\nI\u0000w\njwj\nw\njwj\u0013\nx\r\r\r\r\u0013\nis a tree-like function of depth three. In the unit ball, this modified data can be generated explicitly,\nwhile in more complex domains, it must be learned by a deeper neural network. Whether is in\nfact a Barron function currently remains open.\n3.2. A counterexample on a “Barron”-domain\nIn the previous example, we showed that the harmonic function on a domain with Barron-function\nboundary values may not be a Barron function, but might be a tree-like function of greater depth.\nWe may be tempted to conjecture the following: If the boundary of a domain Ucan locally be\n264\nHIGH-DIMENSIONAL PDE S WITH BARRON DATA\nwritten as the graph of a Barron function over a suitable hyperplane and g:@U!Ris a Barron\nfunction, then the harmonic function uonUwith boundary values gis a tree-like function. Such\ndomains coincide with domains with ‘Barron boundary’ considered in Caragea et al. (2020). This is\ngenerally false, as a classical counterexample to regularity theory on nonconvex Lipschitz domains\nshows.\nConsider the planar domain\nD\u0012=fx2R2: 0<\u001e<\u0012g\nwhere\u001e2(0;2\u0019)is angular polar coordinate and 0<\u0012< 2\u0019. Fork2N, consider\nuk;\u0012:D\u0012!R; uk;\u0012(r;\u001e) =rk\u0019\n\u0012sin\u0012k\u0019\n\u0012\u001e\u0013\n:\nObserve that\n•\u0000\u0001uk;\u0012= 0for allk2N,0<\u0012< 2\u0019.\n• Fork= 1and\u0019<\u0012< 2\u0019, we note that uk;\u0012is not Lipschitz continuous at the origin.\nWe can consider bounded domains eD\u0012such thateD\u0012\\B1(0) =D\u0012\\B1(0)and either\n1.@eD\u0012is polygonal or\n2.@eD\u0012isC1-smooth away from the origin.\nIn either case, @eD\u0012can be locally represented as the graph of a ReLU-Barron function. Since\nuk;\u0012\u00110on@D\u0012anduk;\u0012is smooth away from the origin, we find that uk;\u0012isC1-smooth on@eD\u0012.\nIn particular there exists a Barron function gk;\u0012such thatgk;\u0012\u0011uk;\u0012on@eD\u0012. The unique solution\nto the boundary value problem\n(\n\u0000\u0001u= 0 ineD\u0012\nu=gk;\u0012 on@eD\u0012\nisuk;\u0012itself. Again, for k= 1and\u0019<\u0012< 2\u0019, this harmonic function is not Lipschitz-continuous\non the closure of eD\u001e, and thus in particular not in any tree-like function space.\nAs we observed in Section 2.2, classical Barron spaces as used in data-scientific applications\nbehave similar to C0;\u000b-spaces in PDE theory and another scale of C2;\u000b-type may be useful. Such\nspaces may also describe more meaningful boundary regularity.\n3.3. Neural network spaces and classical function spaces\nIn this section, we briefly sketch some differences between specifically Barron spaces and classical\nfunction spaces, which we expect to pose challenges in the development of a regularity theory in\nspaces of neural network functions.\nClassical regularity theory even in the linear case uses nonlinear coordinate transformations like\nstraightening the boundary and relies on the ability to piece together local solutions using a partition\nof unity. There are major differences between classical function spaces and Barron spaces. Recall\nthat\n265\nE W OJTOWYTSCH\n1. ifu;v2C2;\u000b, then also and u\u000ev2C2;\u000b, and\n2. ifu2C2;\u000banda2C0;\u000b, then alsoa@i@ju2C0;\u000b.\nNeither property holds in Barron space in dimension d\u00152. 
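Returning briefly to the reentrant-corner example above: the failure of Lipschitz continuity of u_{1,θ} at the origin for θ ∈ (π, 2π) is easy to observe numerically, since along the ray φ = θ/2 the difference quotient u_{1,θ}(r, θ/2)/r = r^{π/θ − 1} diverges as r → 0. A minimal check (the choice θ = 3π/2 is purely illustrative):

import numpy as np

theta = 1.5 * np.pi      # opening angle of the reentrant corner, pi < theta < 2*pi

def u(r, phi, k=1):
    """Harmonic function u_{k,theta}(r, phi) = r^{k pi / theta} * sin(k pi phi / theta)."""
    return r ** (k * np.pi / theta) * np.sin(k * np.pi * phi / theta)

# Difference quotient along the ray phi = theta/2; it diverges as r -> 0,
# so u_{1,theta} is not Lipschitz at the origin.
for r in (1e-1, 1e-3, 1e-6, 1e-9):
    print(r, u(r, theta / 2) / r)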
The first property is important for coor-\ndinate transformations and localizing arguments, the second when considering differential operators\nwith variable coefficients. The properties also fail in Sobolev spaces, which is why the boundary and\ncoefficients of a problem are typically assumed to have greater regularity than is expected of the so-\nlution or its second derivatives respectively (e.g. bounded measurable coefficients, C2-boundaries,\n. . . ).\nWhile spaces of smooth functions are invariant under diffeomorphisms, Barron-space is only\ninvariant under linear transformations: If \u001e:Rd!Rdis a non-linear diffeomorphism, there exists\na hyperplane H=fwT\u0001+b= 0ginRdsuch that\u001e\u00001(H)is not a linear hyperplane. Thus the\nfunctionu(x) =\u001b\u0000\nwT\u001e(x) +b)is not a Barron-function since its singular set is not straight.\nCompositions of tree-like functions are tree-like functions (for deeper trees) and compositions\nof flow-induced functions for deep ResNets E et al. (2019) are flow-induced functions (but flow-\ninduced function classes are non-linear). This is the first major difference between spaces of neural\nnetworks and spaces of classically smooth functions which we want to point out. While in the\nSobolev setting, greater regularity is assumed, we believe that in the neural network setting, deeper\nnetworks should be considered.\nThe second difference is about the ‘locality’ of the function space criterion. Note that a function\nuisCk;\u000b- orHk-smooth on a domain Uif and only if there exists a finite covering fU1;:::;UNg\nofUby open domains such that ujUkis in the appropriate function space. The smoothness crite-\nrion therefore can be localized. Even in ostensibly non-local fractional order Sobolev spaces, the\nsmoothness criterion can be localized under a mild growth or decay condition.\nIn general, we cannot localize the Barron property in the same way. We describe a counter-\nexample for Barron spaces with ReLU-activation, but we expect a similar result to hold in more\ngeneral function classes. Consider a U-shaped domain in R2, e.g.\nU=\u0000\n(0;1)\u0002(0;3)\u0001\n|{z}\nleft column[\u0000\n(2;3)\u0002(0;3)\u0001\n|{z}\nright column[\u0000\n(0;3)\u0002(0;1)\u0001\n|{z}\nhorizontal bar:\nThe function\nf:U!R; f (x) =(\n\u001b(x2\u00001)x1<1:5\n0 x1\u00151:5\nis in Barron space on each of the domains U1=fx2U:x1<2gandU2=fx2U:\nx1>1g, but overall fis not in Barron space since the set of points on which a Barron function\nis non-differentiable is a countable union of points and lines in R2. In particular, fcould not be\ndifferentiable on (2;3)\u0002f2g, but clearly is.\nReferences\nAndrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function.\nIEEE Transactions on Information theory , 39(3):930–945, 1993.\nKaushik Bhattacharya, Bamdad Hosseini, Nikola B Kovachki, and Andrew M Stuart. Model reduc-\ntion and neural networks for parametric pdes. arXiv preprint arXiv:2005.03180 , 2020.\n266\nHIGH-DIMENSIONAL PDE S WITH BARRON DATA\nHP Boas and RP Boas. Short proofs of three theorems on harmonic functions. Proceedings of the\nAmerican Mathematical Society , pages 906–908, 1988.\nAndrei Caragea, Philipp Petersen, and Felix V oigtlaender. Neural network approximation and esti-\nmation of classifiers with classification boundary in a barron class. arXiv:2011.09363 [math.FA] ,\n2020.\nPratik Chaudhari, Adam Oberman, Stanley Osher, Stefano Soatto, and Guillaume Carlier. 
Deep\nrelaxation: partial differential equations for optimizing deep neural networks. Research in the\nMathematical Sciences , 5(3):30, 2018.\nJingrun Chen, Rui Du, and Keke Wu. A comparison study of deep galerkin method and deep ritz\nmethod for elliptic problems with different boundary conditions. arXiv:2005.04554 [math.NA] ,\n2020.\nJerome Darbon, Gabriel P Langlois, and Tingwei Meng. Overcoming the curse of dimensionality\nfor some hamilton–jacobi partial differential equations via neural network architectures. Research\nin the Mathematical Sciences , 7(3):1–50, 2020.\nWeinan E and Stephan Wojtowytsch. Representation formulas and pointwise properties for Barron\nfunctions. arXiv:2006.05982 [stat.ML] , 2020a.\nWeinan E and Stephan Wojtowytsch. On the Banach spaces associated with multi-layer ReLU\nnetworks of infinite width. CSIAM Trans. Appl. Math. , 1(3):387–440, 2020b.\nWeinan E and Stephan Wojtowytsch. Kolmogorov width decay and poor approximators in machine\nlearning: Shallow neural networks, random feature models and neural tangent kernels. Res Math\nSci, 8(5), 2021.\nWeinan E and Bing Yu. The deep ritz method: a deep learning-based numerical algorithm for\nsolving variational problems. Communications in Mathematics and Statistics , 6(1):1–12, 2018.\nWeinan E, Jiequn Han, and Arnulf Jentzen. Deep learning-based numerical methods for high-\ndimensional parabolic partial differential equations and backward stochastic differential equa-\ntions. Communications in Mathematics and Statistics , 5(4):349–380, 2017.\nWeinan E, Chao Ma, and Lei Wu. Barron spaces and the compositional function spaces for neural\nnetwork models. arXiv:1906.08039 [cs.LG] , 2019.\nWeinan E, Chao Ma, Stephan Wojtowytsch, and Lei Wu. Towards a mathematical understanding of\nneural network-based machine learning: what we know and what we don’t. CSIAM Trans. Appl.\nMath. , 1(4):561–615, 2020.\nJochen Garcke. Sparse grids in a nutshell. In Sparse grids and applications , pages 57–80. Springer,\n2012.\nPhilipp Grohs, Fabian Hornung, Arnulf Jentzen, and Philippe V on Wurstemberger. A proof that\nartificial neural networks overcome the curse of dimensionality in the numerical approximation\nof black-scholes partial differential equations. arXiv preprint arXiv:1809.02362 , 2018.\n267\nE W OJTOWYTSCH\nJiequn Han, Arnulf Jentzen, and Weinan E. Solving high-dimensional partial differential equations\nusing deep learning. Proceedings of the National Academy of Sciences , 115(34):8505–8510,\n2018.\nFabian Hornung, Arnulf Jentzen, and Diyora Salimova. Space-time deep neural network approx-\nimations for high-dimensional partial differential equations. arXiv preprint arXiv:2006.02199 ,\n2020.\nMartin Hutzenthaler, Arnulf Jentzen, Thomas Kruse, and Tuan Anh Nguyen. A proof that rectified\ndeep neural networks overcome the curse of dimensionality in the numerical approximation of\nsemilinear heat equations. SN Partial Differential Equations and Applications , 1:1–34, 2020.\nArnulf Jentzen, Diyora Salimova, and Timo Welti. A proof that deep artificial neural networks\novercome the curse of dimensionality in the numerical approximation of kolmogorov partial\ndifferential equations with constant diffusion and nonlinear drift coefficients. arXiv preprint\narXiv:1809.07321 , 2018.\nGeorgios Kissas, Yibo Yang, Eileen Hwuang, Walter R Witschey, John A Detre, and Paris\nPerdikaris. 
Machine learning in cardiovascular flows modeling: Predicting arterial blood pres-\nsure from non-invasive 4d flow MRI data using physics-informed neural networks. Computer\nMethods in Applied Mechanics and Engineering , 358:112623, 2020.\nZhong Li, Chao Ma, and Lei Wu. Complexity measures for neural networks with general activation\nfunctions using path-based norms. arXiv preprint arXiv:2009.06132 , 2020.\nZhiping Mao, Ameya D Jagtap, and George Em Karniadakis. Physics-informed neural networks\nfor high-speed flows. Computer Methods in Applied Mechanics and Engineering , 360:112789,\n2020.\nMaziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A\ndeep learning framework for solving forward and inverse problems involving nonlinear partial\ndifferential equations. Journal of Computational Physics , 378:686–707, 2019.\nJustin Sirignano and Konstantinos Spiliopoulos. Dgm: A deep learning algorithm for solving partial\ndifferential equations. Journal of computational physics , 375:1339–1364, 2018.\nYifan Sun, Linan Zhang, and Hayden Schaeffer. Neupde: Neural network based ordinary and partial\ndifferential equations for modeling time-dependent data. In Mathematical and Scientific Machine\nLearning , pages 352–372. PMLR, 2020.\nHan Wang, Linfeng Zhang, Jiequn Han, and Weinan E. DeePMD-kit: A deep learning package for\nmany-body potential energy representation and molecular dynamics. Computer Physics Commu-\nnications , 228:178–184, 2018.\nDmitry Yarotsky. Error bounds for approximations with deep ReLU networks. Neural Networks ,\n94:103–114, 2017.\n268\nHIGH-DIMENSIONAL PDE S WITH BARRON DATA\nAppendix A. Proofs\nProof [Proof of Lemma 3] Assume specifically that u02B(Rd)andf2B(Rd+1). With the\nsubstitution z=y\u0000xp\ntwe obtain\nuhom(t;x) =1\n(4\u0019t)d=2Z\nRdu0(y) exp\u0012\n\u0000jx\u0000yj2\n4t\u0013\ndy\n=1\n(4\u0019)d=2Z\nRdu0(x+p\ntz) exp(\u0000jzj2=4) dz\n=1\n(4\u0019)d=2Z\nRdZ\nRd+2a\u001b\u0010\nwT\u0000\nx+p\ntz\u0001\n+b\u0011\n\u00190(da\ndw\ndb) exp(\u0000jzj2=4) dz\n=Z\nRd+2aZ\nRd\u001b\u0010\nwT\u0000\nx+p\ntz\u0001\n+b\u0011exp(\u0000jzj2=4)\n(4\u0019)d=2dz\u00190(da\ndw\ndb):\nIn particular, uis a Barron function of (p\nt;x)on(0;1)\u0002Rdwith Barron norm\nkuhomkB\u0000\n(0;1)\u0002Rd\u0001\u0014Z\nRd+2jajZ\nRd\u0002\njwj+jwTzj+jbj\u0003exp(\u0000jzj2=4)\n(4\u0019)d=2dz\u00190(da\ndw\ndb)\n\u00142ku0kB\nin both the ReLU and general case. There is no dimension dependence in the integral since up to\nrotationjwTzj=jwjjz1j, so the integral reduces to the one-dimensional case. This agrees with the\nintuition that uis the superposition of solutions of (@t\u0000\u0001)u= 0 with initial condition given by\nthe one-dimensional profile \u001b(wTx+b). Instead of considering high-dimensional heat kernels, we\ncould have reduced the analysis to 1 + 1 dimensions.\nThe fact that uis a Barron function ofp\ntrather thantis due to the parabolic scaling of the\nequation. For fixed time t>0, the same argument shows that in the x-variable we have\nkuhomkB\u0000\nftg\u0002Rd\u0001\u0014Z\nRd+2\u0002Rdjaj\u0002\njwj+p\ntjwTzj+jbj\u0003\nd\u0019dz\u0014\u0002\n1 +p\nt\u0003\nku0kB:\nThe inhomogeneous part of uis a superposition of Barron functions in xandpt\u0000sfor0<\ns < t , which does not allow us to express uas a Barron function of both space and time in an\nobvious way. 
However, since f(t;\u0001)is a Barron function in space with norm kf(t;\u0001)kB(Rd)\u0014\nkfkB((0;1)\u0002Rd)maxf1;tg, we use the analysis of the homogeneous problem to obtain that\n\r\ruinhom (t;\u0001)\r\r\nB(Rd)\u0014Zt\n0kf(s;\u0001)kB(Rd)\u0002\n1 +p\nt\u0000s\u0003\nds\n\u0014kfkB((0;1)\u0002Rd)Zt\n0maxf1;sg\u0002\n1 +p\nt\u0000s\u0003\nds\n=\u0014\nt+2\n3t3=2+t2\n2+2\n5t5=2\u0015\nkfkB((0;1)\u0002Rd):\nThe Barron norm of the combined solution u(t;\u0001) =uhom(t;\u0001) +uinhom (t;\u0001)therefore grows at\nmost approximately like t5=2in time, independently of dimension d.\n269",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CCHdomRmaQ",
"year": null,
"venue": "CoRR 2020",
"pdf_link": "http://arxiv.org/pdf/2009.10713v3",
"forum_link": "https://openreview.net/forum?id=CCHdomRmaQ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't",
"authors": [
"Weinan E",
"Chao Ma",
"Stephan Wojtowytsch",
"Lei Wu"
],
"abstract": "The purpose of this article is to review the achievements made in the last few years towards the understanding of the reasons behind the success and subtleties of neural network-based machine learning. In the tradition of good old applied mathematics, we will not only give attention to rigorous mathematical results, but also the insight we have gained from careful numerical experiments as well as the analysis of simplified models. Along the way, we also list the open problems which we believe to be the most important topics for further study. This is not a complete overview over this quickly moving field, but we hope to provide a perspective which may be helpful especially to new researchers in the area.",
"keywords": [],
"raw_extracted_content": "Towards a Mathematical Understanding of\nNeural Network-Based Machine Learning:\nwhat we know and what we don't\nWeinan E∗ †1,2, Chao Ma‡3, Stephan Wojtowytsch§2, and Lei Wu¶2\n1Department of Mathematics, Princeton University\n2Program in Applied and Computational Mathematics, Princeton University\n3Department of Mathematics, Stanford University\nDecember 9, 2020\nContents\n1 Introduction 2\n1.1 The setup of supervised learning . . . . . . . . . . . . . . . . . . . . . . . . . 3\n1.2 The main issues of interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4\n1.3 Approximation and estimation errors . . . . . . . . . . . . . . . . . . . . . . 5\n2 Preliminary remarks 8\n2.1 Universal approximation theorem and the CoD . . . . . . . . . . . . . . . . . 8\n2.2 The loss landscape of large neural network models . . . . . . . . . . . . . . . 9\n2.3 Over-parametrization, interpolation and implicit regularization . . . . . . . . 9\n2.4 The selection of topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10\n3 The approximation property and the Rademacher complexity of the hy-\npothesis space 11\n3.1 Random feature model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13\n3.2 Two-layer neural network model . . . . . . . . . . . . . . . . . . . . . . . . . 14\n∗Also at Beijing Institute of Big Data Research.\n†[email protected]\n‡[email protected]\n§[email protected]\n¶[email protected]\n1arXiv:2009.10713v3 [cs.LG] 7 Dec 2020\n3.3 Residual networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17\n3.4 Multi-layer networks: Tree-like function spaces . . . . . . . . . . . . . . . . . 20\n3.5 Indexed representation and multi-layer spaces . . . . . . . . . . . . . . . . . 22\n3.6 Depth separation in multi-layer networks . . . . . . . . . . . . . . . . . . . . 24\n3.7 Tradeo\u000bs between learnability and approximation . . . . . . . . . . . . . . . 25\n3.8 A priori vs. a posteriori estimates . . . . . . . . . . . . . . . . . . . . . . . . 27\n3.9 What's not known? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28\n4 The loss function and the loss landscape 28\n4.1 What's not known? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30\n5 The training process: convergence and implicit regularization 30\n5.1 Two-layer neural networks with mean-\feld scaling . . . . . . . . . . . . . . . 31\n5.2 Two-layer neural networks with conventional scaling . . . . . . . . . . . . . . 33\n5.3 Other convergence results for the training of neural network models . . . . . 36\n5.4 Double descent and slow deterioration for the random feature model . . . . . 37\n5.5 Global minima selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39\n5.6 Qualitative properties of adaptive gradient algorithms . . . . . . . . . . . . . 40\n5.7 Exploding and vanishing gradients for multi-layer neural networks . . . . . . 44\n5.8 What's not known? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45\n6 Concluding remarks 46\nA Proofs for Section 3.1 54\nB Proofs for Section 3.2 55\n1 Introduction\nNeural network-based machine learning is both very powerful and very fragile. On the one\nhand, it can be used to approximate functions in very high dimensions with the e\u000eciency\nand accuracy never possible before. This has opened up brand new possibilities in a wide\nspectrum of di\u000berent disciplines. 
On the other hand, it has got the reputation of being\nsomewhat of a \\black magic\": Its success depends on lots of tricks, and parameter tuning\ncan be quite an art. The main objective for a mathematical study of machine learning is to\n1. explain the reasons behind the success and the subtleties, and\n2. propose new models that are equally successful but much less fragile.\nWe are still quite far from completely achieving these goals but it is fair to say that a\nreasonable big picture is emerging.\nThe purpose of this article is to review the main achievements towards the \frst goal and\ndiscuss the main remaining puzzles. In the tradition of good old applied mathematics, we\n2\nwill not only give attention to rigorous mathematical results, but also discuss the insight we\nhave gained from careful numerical experiments as well as the analysis of simpli\fed models.\nAt the moment much less attention has been given to the second goal. One proposal that\nwe should mention is the continuous formulation advocated in [33]. The idea there is to \frst\nformulate \\well-posed\" continuous models of machine learning problems and then discretize\nto get concrete algorithms. What makes this proposal attractive is the following:\n\u000fmany existing machine learning models and algorithms can be recovered in this way\nin a scaled form;\n\u000fthere is evidence suggesting that indeed machine learning models obtained this way\nis more robust with respect to the choice of hyper-parameters than conventional ones\n(see for example Figure 5 below);\n\u000fnew models and algorithms are borne out naturally in this way. One particularly\ninteresting example is the maximum principle-based training algorithm for ResNet-like\nmodels [58].\nHowever, at this stage one cannot yet make the claim that the continuous formulation is the\nway to go. For this reason we will postpone a full discussion of this issue to future work.\n1.1 The setup of supervised learning\nThe model problem of supervised learning which we focus on in this article can be formulated\nas follows: given a dataset S=f(xi;yi=f\u0003(xi));i2[n]g, approximate f\u0003as accurately as\nwe can. Iff\u0003takes continuous values, this is called a regression problem. If f\u0003takes discrete\nvalues, this is called a classi\fcation problem.\nWe will focus on the regression problem. For simplicity, we will neglect the so-called\n\\measurement noise\" since it does not change much the big picture that we will describe,\neven though it does matter for a number of important speci\fc issues. We will assume\nxi2X:= [0;1]d, and we denote by Pthe distribution of fxig. We also assume for simplicity\nthat supx2Xjf\u0003(x)j\u00141.\nObviously this is a problem of function approximation. As such, it can either be regarded\nas a problem in numerical analysis or a problem in statistics. We will take the former\nviewpoint since it is more in line with the algorithmic and analysis issues that we will study.\nThe standard procedure for supervised learning is as follows:\n1. Choose a hypothesis space, the set of trial functions, which will be denoted by Hm. In\nclassical numerical analysis, one often uses polynomials or piecewise polynomials. In\nmodern machine learning, it is much more popular to use neural network models. The\nsubscriptmcharacterizes the size of the model. It can be the number of parameters\nor neurons, and will be speci\fed later for each model.\n3\n2. Choose a loss function. Our primary goal is to \ft the data. 
Therefore the most popular\nchoice is the \\empirical risk\":\n^Rn(f) =1\nnX\ni(f(xi)\u0000yi)2=1\nnX\ni(f(xi)\u0000f\u0003(xi))2\nSometimes one adds some regularization terms.\n3. Choose an optimization algorithm and the hyper-parameters. The most popular choices\nare gradient descent (GD), stochastic gradient descent (SGD) and advanced optimizers\nsuch as Adam [52], RMSprop [85].\nThe overall objective is to minimize the \\population risk\", also known as the \\general-\nization error\":\nR(f) =Ex\u0018P(f(x)\u0000f\u0003(x))2\nIn practice, this is estimated on a \fnite data set (which is unrelated to any data used to\ntrain the model) and called test error, whereas the empirical risk (which is used for training\npurposes) is called the training error.\n1.2 The main issues of interest\nFrom a mathematical perspective, there are three important issues that we need to study:\n1. Properties of the hypothesis space. In particular, what kind of functions can be ap-\nproximated e\u000eciently by a particular machine learning model? What can we say about\nthe generalization gap, i.e. the di\u000berence between training and test errors.\n2. Properties of the loss function. The loss function de\fnes the variational problem used\nto \fnd the solution to the machine learning problem. Questions such as the landscape\nof the variational problem are obviously important. The landscape of neural network\nmodels is typically non-convex, and there may exist many saddle points and bad local\nminima.\n3. Properties of the training algorithm. Two obvious questions are: Can we optimize the\nloss function using the selected training algorithm? Does the solutions obtained from\ntraining generalize well to test data?\nThe second and third issues are closely related. In the under-parametrized regime (when\nthe size of the training dataset is larger than the number of free parameters in the hypothesis\nspace), this loss function largely determines the solution of the machine learning model. In\nthe opposite situation, the over-parametrized regime, this is no longer true. Indeed it is often\nthe case that there are in\fnite number of global minimizers of the loss function. Which one\nis picked depends on the details of the training algorithm.\nThe most important parameters that we should keep in mind are:\n4\n\u000fm: number of free parameters\n\u000fn: size of the training dataset\n\u000ft: number of training steps\n\u000fd: the input dimension.\nTypically we are interested in the situation when m;n;t!1 andd\u001d1.\n1.3 Approximation and estimation errors\nDenote by ^fthe output of the machine learning (abbreviated ML) model. Let\nfm= argminf2HmR(f)\nWe can decompose the error f\u0003\u0000^finto:\nf\u0003\u0000^f=f\u0003\u0000fm+fm\u0000^f\nf\u0003\u0000fmis the approximation error , due entirely to the choice of the hypothesis space. fm\u0000^f\nis the estimation error , the additional error due to the fact that we only have a \fnite dataset.\nTo get some basic idea about the approximation error, note that classically when approx-\nimating functions using polynomials, piecewise polynomials, or truncated Fourier series, the\nerror typically satis\fes\nkf\u0000fmkL2(X)\u0014C0m\u0000\u000b=dkfkH\u000b(X)\nwhereH\u000bdenotes the Sobolev space of order \u000b. The appearance of 1 =din the exponent of\nmis a signature of an important phenomenon, the curse of dimensionality (CoD) : The\nnumber of parameters required to achieve certain accuracy depends exponentially on the\ndimension. 
For example, if we want m\u0000\u000b=d= 0:1, then we need m= 10d=\u000b= 10d, if\u000b= 1.\nAt this point, it is useful to recall the one problem that has been extensively studied\nin high dimension: the problem of evaluating an integral, or more precisely, computing an\nexpectation. Let gbe a function de\fned on X. We are interested in computing approximately\nI(g) =Z\nXg(x)dx=Eg\nwhere the expectation is taken with respect to the uniform distribution. Typical grid-based\nquadrature rules, such as the Trapezoidal rule and the Simpson's rule, all su\u000ber from CoD.\nThe one algorithm that does not su\u000ber from CoD is the Monte Carlo algorithm which works\nas follows. Letfxign\ni=1be a set of independent, uniformly distributed random variables on\nX. Let\nIn(g) =1\nnnX\ni=1g(xi)\n5\nThen a simple calculation gives us\nE(I(g)\u0000In(g))2=Var(g)\nn;Var(g) =Z\nXg2(x)dx\u0000\u0012Z\nXg(x)dx\u00132\n(1)\nTheO(1=pn) rate is independent of d. It turns out that this rate is almost the best one can\nhope for.\nIn practice, Var( g) can be very large in high dimension. Therefore, variance reduction\ntechniques are crucial in order to make Monte Carlo methods truly practical.\nTurning now to the estimation error. Our concern is how the approximation produced\nby the machine learning algorithm behaves away from the training dataset, or in practical\nterms, whether the training and test errors are close. Shown in Figure 1 is the classical Runge\nphenomenon for interpolating functions on uniform grids using high order polynomials. One\ncan see that while on the training set, here the grid points, the error of the interpolant is 0,\naway from the training set, the error can be very large.\nFigure 1: The Runge phenomenon with target function f\u0003(x) = (1 + 25 x2)\u00001.\nIt is often easier to study a related quantity, the generalization gap. Consider the solution\nthat minimizes the empirical risk, ^f= argminf2Hm^Rn(f). The \\generalization gap\" of ^fis\nthe quantityjR(^f)\u0000^Rn(^f)j. Since it is equal to jI(g)\u0000In(g)jwithg(x) = ( ^f(x)\u0000f\u0003(x))2,\none might be tempted to conclude that\ngeneralization gap = O(1=pn)\nbased on (1). This is NOT necessarily true since ^fis highly correlated with fxig. In fact,\ncontrolling this gap is among the most di\u000ecult problems in ML.\nStudying the correlations of ^fis a rather impossible problem. Therefore to estimate the\ngeneralization gap, we resort to the uniform bound:\njR(^f)\u0000^Rn(^f)j\u0014sup\nf2HmjR(f)\u0000^Rn(f)j= sup\nf2HmjI(g)\u0000In(g)j (2)\n6\nThe RHS of this equation depends heavily on the nature of Hm. If we takeHmto be the\nunit ball in the Lipschitz space, we have [39]\nsup\nkhkLip\u00141jI(g)\u0000In(g)j\u00181\nn1=d\nwithg= (h\u0000f\u0003)2. This gives rise to CoD for the size of the dataset (commonly referred\nto as \\sample complexity\"). However if we take Hmto be the unit ball in the Barron space,\nto be de\fned later, we have\nsup\nkhkB\u00141jI(h)\u0000In(h)j\u00181pn\nThis is the kind of estimates that we should look for.\nAssuming that all the functions under consideration are bounded, the problem of esti-\nmating the RHS of (2) reduces to the estimation of suph2HmjI(h)\u0000In(h)j. One way to do\nthis is to use the notion of Rademacher complexity [15].\nDe\fnition 1. LetFbe a set of functions, and S= (x1;x2;:::;xn) be a set of data points.\nThen, the Rademacher complexity of Fwith respect to Sis de\fned as\nRadS(F) =1\nnE\u0018\"\nsup\nh2FnX\ni=1\u0018ih(xi)#\n;\nwheref\u0018ign\ni=1are i.i.d. 
random variables taking values ±1 with equal probability.

Roughly speaking, Rademacher complexity quantifies the degree to which functions in the hypothesis space can fit random noise on the given dataset. It bounds the quantity of interest, sup_{h∈H} |I(h) − I_n(h)|, from above and below.

Theorem 2. For any δ ∈ (0, 1), with probability at least 1 − δ over the random samples S = (x₁, ..., x_n), we have

    sup_{h∈F} | E_x[h(x)] − (1/n) Σ_{i=1}^n h(x_i) | ≤ 2 Rad_S(F) + sup_{h∈F} ‖h‖_∞ √(log(2/δ) / (2n)),

    sup_{h∈F} | E_x[h(x)] − (1/n) Σ_{i=1}^n h(x_i) | ≥ (1/2) Rad_S(F) − sup_{h∈F} ‖h‖_∞ √(log(2/δ) / (2n)).

For a proof, see for example [76, Theorem 26.5]. For this reason, a very important part of theoretical machine learning is to study the Rademacher complexity of a given hypothesis space.

It should be noted that there are other ways of analyzing the generalization gap, such as the stability method [17].

Notations. For any function f: R^m → R^n, let ∇f = (∂f_i/∂x_j)_{i,j} ∈ R^{n×m} and ∇ᵀf = (∇f)ᵀ. We use X ≲ Y to mean that X ≤ C Y for some absolute constant C. For any x ∈ R^d, let x̃ = (xᵀ, 1)ᵀ ∈ R^{d+1}. Let Ω be a subset of R^d, and denote by P(Ω) the space of probability measures. Define P₂(Ω) = {μ ∈ P(Ω) : ∫ ‖x‖₂² dμ(x) < ∞}. We will also follow the convention in probability theory to use ρ_t to denote the value of ρ at time t.

2 Preliminary remarks

In this section we set the stage for our discussion by going over some of the classical results as well as recent qualitative studies that are of general interest.

2.1 Universal approximation theorem and the CoD

Let us consider the two-layer neural network hypothesis space:

    H_m = { f_m(x) = Σ_{j=1}^m a_j σ(w_jᵀ x) },

where σ is a nonlinear function, the activation function. The Universal Approximation Theorem (UAT) states that under very mild conditions, any continuous function can be uniformly approximated on compact domains by neural network functions.

Theorem 3. [24, Theorem 5] If σ is sigmoidal, in the sense that lim_{z→−∞} σ(z) = 0 and lim_{z→∞} σ(z) = 1, then any function in C([0,1]^d) can be approximated uniformly by two-layer neural network functions.

This result can be extended to any activation function that is not exactly a polynomial [56].

The UAT plays the role of the Weierstrass theorem on the approximation of continuous functions by polynomials. It is obviously an important result, but it is of limited use due to the lack of quantitative information about the error in the approximation. For one thing, the same conclusion can be drawn for polynomial approximation, which we know is of limited use in high dimension.

Many quantitative error estimates have been established since then. But most of these estimates suffer from CoD. The one result that stands out is the estimate proved by Barron [11]:

    inf_{f_m ∈ H_m} ‖f* − f_m‖²_{L²(P)} ≲ Δ(f*)² / m        (3)

where Δ(f) is a norm defined by

    Δ(f) := inf_{f̂} ∫_{R^d} ‖ω‖₁ |f̂(ω)| dω < ∞,

where f̂ is the Fourier transform of an extension of f to L²(R^d). The convergence rate in (3) is independent of dimension. However, the constant Δ(f*) could be dimension-dependent since it makes use of the Fourier transform. As the dimensionality goes up, the number of derivatives required to make Δ finite also goes up.
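Definition 1 can also be made concrete numerically: for a finite hypothesis class, the empirical Rademacher complexity is estimated by drawing sign vectors ξ and averaging the resulting suprema. A minimal sketch, assuming a hypothetical class of m fixed random ReLU features (data, sizes, and the number of sign draws are all illustrative):

import numpy as np

rng = np.random.default_rng(3)
n, d, m = 200, 10, 50

X = rng.uniform(size=(n, d))                      # sample S = (x_1, ..., x_n)
W = rng.normal(size=(m, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
H = np.maximum(X @ W.T, 0.0)                      # H[i, j] = h_j(x_i) for m fixed ReLU features

def empirical_rademacher(H, n_draws=2000):
    """Estimate Rad_S(F) = (1/n) E_xi [ sup_h sum_i xi_i h(x_i) ] for the finite class F = {h_1, ..., h_m}."""
    n = H.shape[0]
    xi = rng.choice([-1.0, 1.0], size=(n_draws, n))   # i.i.d. Rademacher signs
    sups = (xi @ H).max(axis=1)                       # sup over the finite class, one value per sign draw
    return sups.mean() / n

print(empirical_rademacher(H))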
This is distinct from the CoD, which is\nthe dimension-dependence of the rate.\n2.2 The loss landscape of large neural network models\nThe landscape of the loss function for large neural networks was studied in [22] using an\nanalogy with high dimensional spherical spin glasses and numerical experiments. The land-\nscape of high dimensional spherical spin glass models has been analyzed in [6]. It was shown\nthat the lowest critical values of the Hamiltonians of these models form a layered structure,\nordered by the indices of the critical points. They are located in a well-de\fned band lower-\nbounded by the global minimum. The probability of \fnding critical points outside the band\ndiminishes exponentially with the dimension of the spin-glass model. Choromanska et al [22]\nused the observation that neural networks with independent weights can be mapped onto\nspherical spin glass models, and suggested that the same picture should hold qualitatively for\nlarge neural network models. They provided numerical evidence to support this suggestion.\nThis work is among the earliest that suggests even though it is highly non-convex, the\nlandscape of larger neural networks might be simpler than the ones for smaller networks.\n2.3 Over-parametrization, interpolation and implicit regulariza-\ntion\nModern deep learning often works in the over-parametrized regime where the number of\nfree parameters is larger than the size of the training dataset. This is a new territory as\nfar as machine learning theory is concerned. Conventional wisdom would suggest that one\nshould expect over\ftting in this regime, i.e. an increase in the generalization gap. Whether\nover\ftting really happens is a problem of great interest in theoretical machine learning. As\nwe will see later, one can show that, over\ftting does happen in the highly over-parametrized\nregime.\nAn enlightening numerical study of the optimization and generalization properties in this\nregime was carried out in [91]. Among other things, it was discovered that in this regime,\nthe neural networks are so expressive that they can \ft any data, no matter how much noise\none adds to the data. Later it was shown by Cooper that the set of global minima with no\ntraining error forms a manifold of dimension m\u0000n[23].\nSome of these global minima generalize very poorly. Therefore an important question\nis how to select the ones that do generalize well. It was suggested in [91] that by tuning\nthe hyper-parameters of the optimization algorithm, one can obtain models with good gen-\neralization properties without adding explicit regularization. This means that the training\ndynamics itself has some implicit regularization mechanism which ensures that bad global\nminima are not selected during training. Understanding the possible mechanism for this\n9\nimplicit regularization is one of the key issues in understanding modern machine learning.\n2.4 The selection of topics\nThere are several rather distinct ways for a mathematical analysis of machine learning and\nparticularly neural network models:\n1. The numerical analysis perspective. Basically machine learning problems are viewed\nas (continuous) function approximation and optimization problems, typically in high\ndimension.\n2. The harmonic analysis perspective. Deep learning is studied from the viewpoint of\nbuilding hierarchical wavelets-like transforms. Examples of such an approach can be\nfound in [18, 83].\n3. The statistical physics perspective. 
A particularly important tool is the replica trick.\nFor special kinds of problems, this approach allows us to perform asymptotically exact\n(hence sharp) calculations. It is also useful to study the performance of models and\nalgorithms from the viewpoint of phase transitions. See for example [90, 9, 40].\n4. The information theory perspective. See [1] for an example of the kinds of results\nobtained along this line.\n5. The PAC learning theory perspective. This is closely related to the information theory\nperspective. It studies machine learning from the viewpoint of complexity theory. See\nfor example [61].\nIt is not yet clear how the di\u000berent approaches are connected, even though many results may\nfall in several categories. In this review, we will only cover the results obtained along the\nlines of numerical analysis. We encourage the reader to consult the papers referenced above\nto get a taste of these alternative perspectives.\nWithin the numerical analysis perspective, supervised machine learning and neural net-\nwork models are still vast topics. By necessity, this article focusses on a few aspects which\nwe believe to be key problems in machine learning. As a model problem, we focus on L2-\nregression here, where the data is assumed to be of the form ( xi;f\u0003(xi)) without uncertainty\nin they-direction.\nIn Section 3, we focus on the function spaces developed for neural network models.\nSection 4 discusses the (very short) list of results available for the energy landscape of loss\nfunctionals in machine learning. Training dynamics for network weights are discussed in\nSection 5 with a focus on gradient descent. The speci\fc topics are selected for two criteria:\n\u000fWe believe that they have substantial importance for the mathematical understanding\nof machine learning.\n10\n\u000fWe are reasonably con\fdent that the mathematical models developed so far will over\ntime \fnd their way into the standard language of the theoretical machine learning\ncommunity.\nThere are many glaring omissions in this article, among them:\n1. Classi\fcation problems. While most benchmark problems for neural networks fall into\nthe framework of classi\fcation, we focus on the more well-studied area of regression.\n2. Some other common neural network architectures, such as convolutional neural net-\nworks, long short-term memory (LSTM) networks, encoder-decoder networks.\n3. The impact of stochasticity. Many training algorithms and initialization schemes for\nneural networks use random variables. While toy models with standard Gaussian noise\nare well understood, the realistic case remains out of reach [45].\n4. Simpli\fed neural networks such as linear and quadratic networks. While these models\nare not relevant for applications, the simpler model allows for simpler analysis. Some\ninsight is available for these models which has not been achieved in the non-linear case\n[75, 49, 4].\n5. Important \\tricks\", such as dropout, batch normalization, layer normalization, gradient\nclipping, etc. To our knowledge, these remain empirically useful, but mysterious from\na mathematical perspective.\n6. Highly data-dependent results. The geometry of data is a large \feld which we avoid in\nthis context. 
While many data-distributions seem to be concentrated close to relatively\nlow-dimensional manifolds in a much higher-dimensional ambient space, these `data-\nmanifolds' are in many cases high-dimensional enough that CoD, a central theme in\nthis review, is still an important issue.\n3 The approximation property and the Rademacher\ncomplexity of the hypothesis space\nThe most important issue in classical approximation theory is to identify the function space\nnaturally associated with a particular approximation scheme, e.g. approximation by piece-\nwise polynomials on regular grids. These spaces are typically some Sobolev or Besov spaces,\nor their variants. They are the natural spaces for the particular approximation scheme, since\none can prove matching direct and inverse approximation theorems, namely any function in\nthe space can be approximated using the given approximation scheme with the speci\fed rate\nof convergence, and conversely any function that can be approximated to the speci\fed order\nof accuracy belongs to that function space.\nMachine learning is just another way to approximate functions, therefore we can ask\nsimilar questions, except that our main interest in this case is in high dimension. Any\n11\nmachine learning model hits the `curse of dimensionality' when approximating the class\nof Lipschitz functions. Nevertheless, many important problems seem to admit accurate\napproximations by neural networks. Therefore it is important to understand the class of\nfunctions that can be well approximated by a particular machine learning model.\nThere is one important di\u000berence from the classical setting. In high dimension, the rate\nof convergence is limited to the Monte Carlo rate and its variants. There is limited room\nregarding order of convergence and consequently there is no such thing as the order of the\nspace as is the case for Sobolev spaces.\nIdeally, we would like to accomplish the following:\n1. Given a type of hypothesis space Hm, say two-layer neural networks, identify the\nnatural function space associated with them (in particular, identify a norm kf\u0003k\u0003)\nthat satis\fes:\n\u000fDirect approximation theorem:\ninf\nf2HmR(f) = inf\nf2Hmkf\u0000f\u0003k2\nL2(P).kf\u0003k2\n\u0003\nm\n\u000fInverse approximation theorem: If a function f\u0003can be approximated e\u000eciently\nby the functions in Hm, asm!1 with some uniform bounds, then kf\u0003k\u0003is\n\fnite.\n2. Study the generalization gap for this function space. One way of doing this is to study\nthe Rademacher complexity of the set FQ=ff:kfk\u0003\u0014Qg. Ideally, we would like\nto have:\nRadS(FQ).Qpn\nIf both holds, then a combination gives us, up to logarithmic terms:\nR(^f).kf\u0003k2\n\u0003\nm+kf\u0003k\u0003pn\nRemark 4.It should be noted that what we are really interested in is the quantitative\nmeasures of the target function that control the approximation and estimation errors. We\ncall these quantities \\norms\" but we are not going to insist that they are really norms. In\naddition, we would like to use one norm to control both the approximation and estimation\nerrors. This way we have one function space that meets both requirements. However, it\ncould very well be the case that we need di\u000berent quantities to control di\u000berent errors. See\nthe discussion about residual networks below. 
This means that we will be content with a\ngeneralized version of (3):\nR(^f).\u0000(f\u0003)\nm+\r(f\u0003)pn\nWe will see that this can indeed be done for the most popular neural network models.\n12\n3.1 Random feature model\nLet\u001e(\u0001;w) be the feature function parametrized by w, e.g.\u001e(x;w) =\u001b(wTx). A random\nfeature model is given by\nfm(x;a) =1\nmmX\nj=1aj\u001e(x;w0\nj): (4)\nwherefw0\njgm\nj=1are i.i.d random variables drawn from a pre\fxed distribution \u00190. The collec-\ntionf\u001e(\u0001;w0\nj)gare the random features, a= (a1;:::;am)T2Rmare the coe\u000ecients. For\nthis model, the natural function space is the reproducing kernel Hilbert space (RKHS) [3]\ninduced by the kernel\nk(x;x0) =Ew\u0018\u00190[\u001e(x;w)\u001e(x0;w)] (5)\nDenote byHkthis RKHS. Then for any f2Hk, there exists a(\u0001)2L2(\u00190) such that\nf(x) =Z\na(w)\u001e(x;w)d\u00190(w); (6)\nand\nkfk2\nHk= inf\na2SfZ\na2(w)d\u00190(w); (7)\nwhereSf:=fa(\u0001) :f(x) =R\na(w)\u001e(x;w)d\u00190(w)g. We also de\fnekfk1= infa2Sfka(\u0001)kL1(\u00190).\nFor simplicity, we assume that \n := supp( \u00190) is compact. Denote W0= (w0\n1;:::;w0\nm)T2\nRm\u0002danda(W0) = (a(w0\n1);:::;a (w0\nm))T2Rm.\nTheorem 5 (Direct Approximation Theorem) .Assumef\u00032Hk, then there exists a(\u0001), such\nthat\nEW0[kfm(\u0001;a(W0))\u0000f\u0003)k2\nL2]\u0014kf\u0003k2\nHk\nm:\nTheorem 6 (Inverse Approximation Theorem) .Let(w0\nj)1\nj=0be a sequence of i.i.d. random\nvariables drawn from \u00190. Letf\u0003be a continuous function on X. Assume that there exist\nconstantsCand a sequence (aj)1\nj=0satisfying supjjajj\u0014C, such that\nlim\nm!11\nmmX\nj=1aj\u001e(x;w0\nj) =f\u0003(x); (8)\nfor allx2X. Then with probability 1, there exists a function a\u0003(\u0001) : \n7!Rsuch that\nf\u0003(x) =Z\n\na\u0003(w)\u001e(x;w)d\u00190(w);\nMoreover,kfk1\u0014C.\n13\nTo see how these approximation theory results can play out in a realistic ML setting,\nconsider the regularized model:\nLn;\u0015(a) =^Rn(a) +1pnkakpm;\nand de\fne the regularized estimator:\n^an;\u0015= argminLn;\u0015(a):\nTheorem 7. For any\u000e2(0;1), with probability 1\u0000\u000e, the population risk of the regularized\nestimator satis\fes\nR(^an)\u00141\nm\u0012\nlog(n=\u000e)kf\u0003k2\nHk+log2(n=\u000e)\nmkf\u0003k2\n1\u0013\n(9)\n+1pn \nkf\u0003kHk+\u0012log(1=\u000e)\nm\u00131=4\nkf\u0003k1+p\nlog(2=\u000e)!\n: (10)\nThese results should be standard. However, they do not seem to be available in the\nliterature. In the appendix, we provide a proof for these results.\nIt is worth noting that the dependence on kfk1and log(n=\u000e) can be removed by a more\nsophisticated analysis [8]. However, to achieve the rate of O(1=m+ 1=pn), one must make\nan explicit assumption on the decay rate of eigenvalues of the corresponding kernel operator\n[8].\n3.2 Two-layer neural network model\nThe hypothesis space for two-layer neural networks is de\fned by:\nFm=ffm(x) =1\nmmX\nj=1aj\u001b(wT\njx)g\nWe will focus on the case when the activation function \u001bis ReLU:\u001b(z) = max(z;0). Many\nof the results discussed below can be extended to more general activation functions [60].\nThe function space for this model is called the Barron space ([34, 31], see also [11, 53, 37]\nand particularly [7]). Consider the function f:X= [0;1]d7!Rof the following form\nf(x) =Z\n\na\u001b(wTx)\u001a(da;dw) =E\u001a[a\u001b(wTx)];x2X\nwhere \n = R1\u0002Rd+1,\u001ais a probability distribution on \n. 
The Barron norm is defined by

    ‖f‖_{B_p} = inf_{ρ ∈ P_f} ( E_ρ[ |a|^p ‖w‖₁^p ] )^{1/p}

where P_f := { ρ : f(x) = E_ρ[a σ(wᵀx)] }. Define

    B_p = { f ∈ C⁰ : ‖f‖_{B_p} < ∞ }.

Functions in B_p are called Barron functions. As shown in [31], for the ReLU activation function we actually have ‖·‖_{B_p} = ‖·‖_{B_q} for any 1 ≤ p ≤ q ≤ ∞. Hence, we will use ‖·‖_B and B to denote the Barron norm and Barron space, respectively.

Remark 8. Barron space and Barron functions are named in reference to the article [11], which was the first to recognize and rigorously establish the advantages of non-linear approximation over linear approximation by considering neural networks with a single hidden layer.

It should be stressed that the Barron norm introduced above is not the same as the one used in [11], which was based on the Fourier transform (see (11)). To highlight this distinction, we will call the kind of norm in (11) the spectral norm.

An important property of the ReLU activation function is the homogeneity property σ(λz) = λσ(z) for all λ > 0. A discussion of the representation of Barron functions with partial attention to homogeneity can be found in [37]. Barron spaces for other activation functions are discussed in [35, 60].

One natural question is what kind of functions are Barron functions. The following result gives a partial answer.

Theorem 9 (Barron and Klusowski (2016)). If

    Δ(f) := ∫_{R^d} ‖ω‖₁² |f̂(ω)| dω < ∞,        (11)

where f̂ is the Fourier transform of f, then f can be represented as

    f(x) = ∫_Ω a σ(bᵀx + c) ρ(da, db, dc),    ∀ x ∈ X,

where σ(x) = max(0, x). Moreover, ‖f‖_B ≤ 2Δ(f) + 2‖∇f(0)‖₁ + 2f(0).

A consequence of this is that every function in H^s(R^d) is Barron for s > d/2 + 1. In addition, it is obvious that every finite sum of neuron activations is also a Barron function. Another interesting example of a Barron function is f(x) = ‖x‖_{ℓ²} = √(2π) E_{b∼N(0,I_d)}[σ(bᵀx)].

On the other hand, every Barron function is Lipschitz-continuous. An important criterion to establish that certain functions are not in Barron space is the following structure theorem.

Theorem 10. [37, Theorem 5.4] Let f be a Barron function. Then f = Σ_{i=1}^∞ f_i where f_i ∈ C¹(R^d \ V_i) and V_i is a k-dimensional affine subspace of R^d for some k with 0 ≤ k ≤ d − 1.

As a consequence, distance functions to curved surfaces are not Barron functions.

The claim that the Barron space is the natural space associated with two-layer networks is justified by the following series of results.

Theorem 11 (Direct Approximation Theorem, L²-version). For any f ∈ B and m ∈ N, there exists a two-layer neural network f_m with m neurons {(a_j, w_j)}_{j=1}^m such that

    ‖f − f_m‖_{L²(P)} ≲ ‖f‖_B / √m.

Theorem 12 (Direct Approximation Theorem, L∞-version). For any f ∈ B and m ∈ N, there exists a two-layer neural network f_m with m neurons {(a_j, w_j)}_{j=1}^m such that

    ‖f − f_m‖_{L∞([0,1]^d)} ≤ 4 ‖f‖_B √((d+1)/m).

We present a brief self-contained proof of the L∞ direct approximation theorem in the appendix. We believe the idea to be standard, but have been unable to locate a good reference for it.

Remark 13. In fact, there exists a constant C > 0 such that

    ‖f − f_m‖_{L∞([0,1]^d)} ≤ C ‖f‖_B √(log m) / m^{1/2 + 1/(2d)},

and for every ε > 0 there exists f ∈ B such that

    ‖f − f_m‖_{L∞([0,1]^d)} ≥ c m^{−1/2 − 1/d − ε};

see [10, 67].
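The Monte Carlo character of the direct approximation theorem is easy to see on the example f(x) = ‖x‖_{ℓ²} mentioned above: averaging σ(b_jᵀx) over m i.i.d. Gaussian weights b_j gives an m-neuron network whose L² error decays roughly like m^{−1/2}. A minimal sketch (test points and sizes are hypothetical; √(2π) = 1/E[max(g, 0)] for g ∼ N(0, 1) is the normalizing constant):

import numpy as np

rng = np.random.default_rng(4)
d = 20
x_test = rng.normal(size=(500, d))
target = np.linalg.norm(x_test, axis=1)

c = np.sqrt(2.0 * np.pi)   # 1 / E[max(g, 0)] for g ~ N(0, 1)

for m in (10, 100, 1000, 10000):
    B = rng.normal(size=(m, d))                       # b_j ~ N(0, I_d)
    f_m = c * np.maximum(x_test @ B.T, 0.0).mean(axis=1)
    err = np.sqrt(np.mean((f_m - target) ** 2))
    print(m, err)                                     # error decays roughly like m^{-1/2}

No optimization of the weights is attempted here; the observed decay only illustrates the m^{−1/2} scaling of Theorem 11 for this particular representation.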
Further approximation results, including in classical functions spaces, can be\nfound in [72].\nTheorem 14 (Inverse Approximation Theorem) .Let\nNCdef=f1\nmmX\nj=1aj\u001b(wT\njx) :1\nmmX\nj=1jajjkwjk1\u0014C;m2N+g:\nLetf\u0003be a continuous function. Assume there exists a constant Cand a sequence of functions\nfm2NCsuch that\nfm(x)!f\u0003(x)\nfor allx2X, then there exists a probability distribution \u001a\u0003on\n, such that\nf\u0003(x) =Z\na\u001b(wTx)\u001a\u0003(da;dw);\nfor allx2Xandkf\u0003kB\u0014C.\nBoth theorems are proved in [34]. In addition, it just so happens that the Rademacher\ncomplexity is also controlled by a Monte Carlo like rate:\n16\nTheorem 15 ([7]).LetFQ=ff2B;kfkB\u0014Qg. Then we have\nRadS(FQ)\u00142Qr\n2 ln(2d)\nn\nIn the same way as before, one can now consider the regularized model:\nLn(\u0012) =^Rn(\u0012) +\u0015r\nlog(2d)\nnk\u0012kP; ^\u0012n= argminLn(\u0012)\nwhere the path norm is de\fned by:\nk\u0012kP=1\nmmX\nj=1jajjkwjk1\nTheorem 16. [34]: Assume f\u0003:X7![0;1]2B. There exist constants absolute C0, such\nthat for any \u000e > 0, if\u0015\u0015C0, then with probability at least 1\u0000\u000eover the choice of the\ntraining set, we have\nR(^\u0012n).kf\u0003k2\nB\nm+\u0015kf\u0003kBr\nlog(2d)\nn+r\nlog(n=\u000e)\nn:\n3.3 Residual networks\nConsider a residual network model\nz0;L(x) =Vx;\nzl+1;L(x) =zl;L(x) +1\nLUl\u001b\u000e(Wlzl;L(x)); l= 0;1;\u0001\u0001\u0001;L\u00001\nf(x;\u0012) =\u000bTzL;L(x);\nwherex2Rdis the input,V2RD\u0002d;Wl2Rm\u0002D,Ul2RD\u0002m;\u000b2RD. Without loss of\ngenerality, we will \fx Vto be\nV=\u0014Id\u0002d\n0(D\u0000d)\u0002d\u0015\n: (12)\nWe use \u0002 :=fU1;:::;UL;Wl;:::;WL;\u000bgto denote all the parameters to be learned from\ndata.\nConsider the following ODE system [31]:\nz(x;0) =Vx;\n_z(x;t) =E(U;W)\u0018\u001atU\u001b(Wz(x;t));\nf\u000b;f\u001atg(x) =\u000bTz(x;1):\n17\nThis ODE system can be viewed as the limit of the residual network (12) ([31]). Consider\nthe following linear ODEs ( p\u00151)\nNp(0) = 1;\n_Np(t) = (E\u001at(jUjjWj)p)1=pNp(t);\nwhere 1= (1;:::; 1)T2Rd,jAjandAqare element-wise operations for the matrix Aand\npositive number q.\nDe\fnition 17 ([31]).Letfbe a function that admits the form f=f\u000b;f\u001atgfor a pair of\n(\u000b;f\u001atg), then we de\fne\nkfkDp(\u000b;f\u001atg)=j\u000bjTNp(1); (13)\nto be theDpnorm offwith respect to the pair ( \u000b,f\u001atg). Herej\u000bjis obtained from \u000bby\ntaking element-wise absolute values. We de\fne\nkfkDp= inf\nf=f\u000b;f\u001atgj\u000bjTNp(1): (14)\nto be theDpnorm off, and letDp=ff:kfkDp<1gbe the \row-induced function space.\nRemark 18.In our de\fnition, \u001atis a distribution on RD\u0002m\u0002Rm\u0002D. However, the ODE\nsystem (13) with large mdoes not express more functions than just taking m= 1. Never-\ntheless, choosing m6= 1 may in\ruence the associated function norm as well as the constant\non the approximation bound in Theorem 23. We do not explore the e\u000bect of di\u000berent min\nthis paper.\nBesideDp, [31] introduced another class of function spaces ~Dpthat contain functions for\nwhich the quantity Np(1) and the continuity of \u001atwith respect to tare controlled. We \frst\nprovide the following de\fnition of \\Lipschitz condition\" of \u001at.\nDe\fnition 19. 
Given a family of probability distribution f\u001at; t2[0;1]g, the \\Lipschitz\ncoe\u000ecient\" off\u001atg, denoted by Lipf\u001atg, is de\fned as the in\fmum of all the numbers Lthat\nsatis\fes\njE\u001atU\u001b(Wz)\u0000E\u001asU\u001b(Wz)j\u0014Ljt\u0000sjjzj; (15)\nand \f\f\fkE\u001atjUjjWjk1;1\u0000kE\u001asjUjjWjk1;1\f\f\f\u0014Ljt\u0000sj; (16)\nfor anyt;s2[0;1], wherek\u0001k 1;1is the sum of the absolute values of all the entries in a\nmatrix. The \\Lipschitz norm\" of f\u001atgis de\fned as\nkf\u001atgkLip=kE\u001a0jUjjWjk1;1+Lipf\u001atg: (17)\nDe\fnition 20 ([31]).Letfbe a function that satis\fes f=f\u000b;f\u001atgfor a pair of ( \u000b;f\u001atg),\nthen we de\fne\nkfk~Dp(\u000b;f\u001atg)=j\u000bjTNp(1) +kNp(1)k1\u0000D+kf\u001atgkLip; (18)\n18\nto be the ~Dpnorm offwith respect to the pair ( \u000b,f\u001atg). We de\fne\nkfk~Dp= inf\nf=f\u000b;f\u001atgkfk~Dp(\u000b;f\u001atg): (19)\nto be the ~Dpnorm off. The space ~Dpis de\fned as all the functions that admit the\nrepresentation f\u000b;f\u001atgwith \fnite ~Dpnorm.\nThe spaceDpand ~Dpare called \row-induced spaces .\nOne can easily see that the \\norms\" de\fned here are all non-negative quantities (de-\nspite the\u0000Dterm), even though it is not clear that they are really norms. The following\nembedding theorem shows that \row-induced function space is larger than Barron space.\nTheorem 21. For any function f2B, andD\u0015d+ 2andm\u00151, we havef2~D1, and\nkfk~D1\u00142kfkB+ 1: (20)\nFinally, we de\fne a discrete \\path norm\" for residual networks.\nDe\fnition 22. For a residual network de\fned by (12) with parameters \u0002 = f\u000b;Ul;Wl;l=\n0;1;\u0001\u0001\u0001;L\u00001g, we de\fne the l1path norm of \u0002 to be\nk\u0002kP=j\u000bjTLY\nl=1\u0012\nI+1\nLjUljjWlj\u0013\n1: (21)\nWith the de\fnitions above, we are ready to state the direct and inverse approximation\ntheorems for the \row-induced function spaces [31].\nTheorem 23 (Direct Approximation Theorem) .Letf2~D2,\u000e2(0;1). Then, there exists\nan absolute constant C, such that for any\nL\u0015C\u0010\nm4D6kfk5\n~D2(kfk~D2+D)2\u00113\n\u000e;\nthere is an L-layer residual network fL(\u0001; \u0002) that satis\fes\nkf\u0000fL(\u0001; \u0002)k2\u0014kfk2\n~D2\nL1\u0000\u000e; (22)\nand\nk\u0002kP\u00149kfk~D1(23)\nTheorem 24 (Inverse Approximation Theorem) .Letfbe a function de\fned on X. Assume\nthat there is a sequence of residual networks ffL(\u0001; \u0002L)g1\nL=1such thatkf(x)\u0000fL(x; \u0002L)k! 0\nasL!1 . Assume further that the parameters in ffL(\u0001; \u0002)g1\nL=1are (entry-wise) bounded\nbyc0. Then, we have f2D1, and\nkfkD1\u00142em(c2\n0+1)D2c0\nm\nMoreover, if there exists constant c1such thatkfLkD1\u0014c1holds for any L > 0, then we\nhave\nkfkD1\u0014c1\n19\nThe Rademacher complexity estimate is only established for a family of modi\fed \row-\ninduced function norms k\u0001k ^Dp(see the factor 2 in the de\fnition below). It is not clear at\nthis stage whether this is only a technical di\u000eculty.\nLet\nkfk^Dp= inf\nf=f\u000b;f\u001atgj\u000bjT^Np(1) +k^Np(1)k1\u0000D+kf\u001atgkLip; (24)\nwhere ^Np(t) is given by\n^Np(0) = 2 1;\n_^Np(t) = 2 ( E\u001at(jUjjWj)p)1=p^Np(t):\nDenote by ^Dpthe space of functions with \fnite ^Dpnorm. Then, we have\nTheorem 25 ([31]).Let^DQ\np=ff2^Dp:kfk^Dp\u0014Qg, then we have\nRadn(^DQ\n2)\u001418Qr\n2 log(2d)\nn: (25)\nNext we turn to the generalization error estimates for the regularized estimator. 
At the\nmoment, for the same reason as above, such estimates have only been proved when the\nempirical risk is regularized by a weighted path norm\nk\u0002kWP=j\u000bjTLY\nl=1\u0012\nI+2\nLjUljjWlj\u0013\n1; (26)\nwhich is the discrete version of (24). This norm assigns larger weights to paths that pass\nthrough more non-linearities. Now consider the residual network (12) and the regularized\nempirical risk:\nJ(\u0002) := ^R(\u0002) + 3\u0015k\u0002kWPr\n2 log(2d)\nn; (27)\nTheorem 26 ([30]).Letf\u0003:X![0;1]. Fix any\u0015\u00154 + 2=(3p\n2 log(2d)). Assume that ^\u0002\nis an optimal solution of the regularized model (27). Then for any \u000e2(0;1), with probability\nat least 1\u0000\u000eover the random training samples, the population risk satis\fes\nR(^\u0002)\u00143kfk2\nB\nLm+ (4kfkB+ 1)3(4 +\u0015)p\n2 log(2d) + 2pn+ 4r\n2 log(14=\u000e)\nn: (28)\n3.4 Multi-layer networks: Tree-like function spaces\nA neural network with Lhidden layers is a function of the form\nf(x) =mLX\niL=1aL\niL\u001b0\n@mL\u00001X\niL\u00001=1aL\u00001\niLiL\u00001\u001b \n:::\u001b m1X\ni1=1a1\ni2i1\u001b d+1X\ni0=1a0\ni1i0xi0!!!1\nA (29)\n20\nwherea`\ni`+1i`withi`+12[m`+1] andi`2[m`] are the weights of the neural network. Here\nthe bias terms in the intermediate layers are omitted without loss of generality. Analogous\nto the Barron norm, we introduce the path-norm proxy by\nX\niL;:::;i 0\f\faL\niLaL\u00001\niLiL\u00001\u0001\u0001\u0001a1\ni2i1a0\ni1i0\f\f:\nand the path-norm of the function as the in\fmum of the path-norm proxies over all weights\ninducing the function.\nkfkWL= inf(X\niL;:::;i 0\f\faL\niLaL\u00001\niLiL\u00001\u0001\u0001\u0001a1\ni2i1a0\ni1i0\f\f\f\f\f\ffsatis\fes (29))\n(30)\nThe natural function spaces for the purposes of approximation theory are the tree-like func-\ntion spacesWLof depthL2Nintroduced in [36], where the norm can be extended in a\nnatural way. For trees with a single hidden layer, Barron space and tree-like space agree,\nand the properties of tree-like function spaces are reminiscent of Barron space. Instead of\nstating all the results as theorems, we will just list them below.\n\u000fRademacher complexity/generalization gap: Let FQ=ff2C0;kfkWL\u0014Qgbe\nthe closed ball of radius Q > 0 in the tree-like function space. Then Rad S(FQ)\u0014\n2L+1Qq\n2 ln(2d+2)\nn. If a stronger version of the path-norm is controlled, then the depen-\ndence on depth can be weakened to L3instead of 2L[12]. But remember, here we are\nthinking of keep L\fxed at some \fnite value and increase the widths of the layers.\n\u000fThe closed unit ball of the tree-like space of depth L\u00151 is compact (in particular\nclosed) inC0(K) for every compact set K\u0012Rdand inL2(P) for every compactly\nsupported probability measure P.\n\u000fIn particular, an inverse approximation theorem holds: If kfmkWL\u0014Candfm!f\ninL2(P), thenfis in the tree-like function space of the same depth.\n\u000fThe direct approximation theorem holds, but not with Monte Carlo rate. For f\u0003in\na tree-like function space and m2N, there exists a network fmwith layers of width\nm`=mL\u0000`+1such that\nkfm\u0000f\u0003kL2(P)\u00142Lkf\u0003kWLpm:\nNote that the network has O(m2L\u00001) weights. Part of this stems from the recursive\nde\fnition, and by rearranging the index set into a tree-like structure, the number of\nfree parameters (but dimension-independent) can be dropped to O(mL+1).\nIt is unclear whether a depth-independent (or less depth-dependent) approximation\nrate can be expected. 
Unlike two-layer networks and residual neural networks, layers\nof a multi-layer network discretize into conditional expectations, whereas the other\nfunction classes are naturally expressed as expectations. A larger number of parameters\nfor multi-layer networks is therefore natural.\n21\n\u000fTree-like function spaces form a scale in the sense that if fis in the tree-like function\nspace of depth `, then it is also in the tree-like function space of depth L > ` and\nkfkWL\u00142kfkW`.\n\u000fIff:Rd!Rkandg:Rk!Rare tree-like functions in WLandW`respectively,\nthen their composition g\u000efis inWL+`.\nIn analogy to two-layer neural networks, we can prove a priori error estimates for multi-\nlayer networks. Let Pbe a probability measure on XandS=fx1;:::;xNgbe a set of iid\nsamples drawn from P. For \fnite neural networks with weights ( aL;:::;a0)2RmL\u0002\u0001\u0001\u0001\u0002\nRm1\u0002dwe denote\nbRn(aL;:::;a0) =^Rn(faL;:::;a0)\nfaL;:::;a0(x) =mLX\niL=1aL\niL\u001b0\n@mL\u00001X\niL\u00001=1aL\u00001\niLiL\u00001\u001b0\n@X\niL\u00002:::\u001b m1X\ni1=1a1\ni2i1\u001b d+1X\ni0=1a0\ni1i0xi0!!1\nA1\nA:\nConsider the regularized loss function: regularized risk functional\nLn(a0;:::;aL) =bRn(aL;:::;a0) +9L2\nm\"mLX\niL=1\u0001\u0001\u0001d+1X\ni0=1\f\faL\niLaL\u00001\niLiL\u00001:::a0\ni1i0\f\f#2\nTheorem 27 (Generalization error) .Assume that the target function satis\fes f\u00032WL. Let\nHmbe the class of neural networks with architecture like in the direct approximation theorem\nfor tree-like function spaces, i.e. m`=mL\u0000`+1for`\u00151. Letfmbe the function given by\nargmin(a0;:::;aL)2HmLn. Thenfmsatis\fes the risk bound\nR(fm)\u001418L2kf\u0003k2\nWL\nm+ 2L+3=2kf\u0003kWLr\n2 log(2d+ 2)\nn+ \u0016cr\n2 log(2=\u000e)\nn: (31)\nIn particular, there exists a function fmsatisfying the risk estimate. Using a natural\ncut-o\u000b, the constant \u0016 ccan be replaced with kf\u0003kL1\u0014Ckf\u0003kWL.\n3.5 Indexed representation and multi-layer spaces\nNeural networks used in applications are not tree-like. To capture the structure of the\nfunctions represented by practical multi-layer neural networks, [36, 71] introduced an indexed\nrepresentation of neural network functions.\nDe\fnition 28. For 0\u0014i\u0014L, let (\ni;Ai;\u0019i) be probability spaces where \n 0=f0;:::;dg\nand\u00190is the normalized counting measure. Consider measurable functions aL: \nL!R\nandai: \ni+1\u0002\ni!Rfor 0\u0014i\u0014L\u00001. The arbitrarily wide neural network modeled on\nthe index spacesf\nigwith weight functions faigis\nfaL;:::;a0(x) =Z\n\nLa(L)\n\u0012L\u001b Z\n\nL\u00001:::\u001b\u0012Z\n\n1a1\n\u00122;\u00121\u001b\u0012Z\n\n0a0\n\u00121;\u00120x\u00120\u00190(d\u00120)\u0013\n\u00191(d\u00121)\u0013\n::: \u0019(L\u00001)(d\u0012L\u00001)!\n\u0019L(d\u0012L):\n(32)\n22\nIf the index spaces are \fnite, this coincides with \fnite neural networks and the integrals\nare replaced with \fnite sums. From this we see that the class of arbitrarily wide neural\nnetworks modeled on certain index spaces may not be a vector space. If all weight spaces\nf\nigare su\u000eciently expressive (e.g. the unit interval with Lebesgue measure), then the set\nof multi-layer networks modeled on \n L;:::; \n0becomes a vector space. 
The key property is\nthat (0;1) can be decomposed into two sets of probability 1 =2, each of which is isomorphic\nto itself, so adding two neural networks of equal depth is possible.\nThe space of arbitrarily wide neural networks is a subspace of the tree-like function space\nof depthLthat consists of functions fwith a \fnite path-norm\nkfk\nL;:::;\n0;K= inf(Z\nQL\ni=0\ni\f\fa(L)\n\u0012L:::a(0)\n\u00121\u00120\f\f\u0000\n\u0019L\n\u0001\u0001\u0001\n\u00190\u0001\n(d\u0012L\n\u0001\u0001\u0001\n d\u00120)\f\f\f\ff=faL;:::;a0onK)\n:\nSince the coe\u000ecients of non-consecutive layers do not share an index, this may be a proper\nsubspace for networks with mutiple hidden layers. If \n i= (0;1), the space of arbitrarily\nwide networks with one hidden layer coincides with Barron space.\nIn a network with two hidden layers, the (vector-valued) output of the zeroth layer is\n\fxed independently of the second layer. Thus the output of the \frst layer is a vector whose\ncoordinates lie in the subset of Barron space which have L1-densities with respect to the\n(\fxed) distribution of zeroth-layer weights on Rd+1. This subspace is a separable subset of\n(non-separable) Barron space (see [37] for some functional analytic considerations on Barron\nspace). It is, however, still unclear whether the tree-like space and the space of arbitrarily\nwide neural networks on su\u000eciently expressive index spaces agree. The latter contains all\nBarron functions and their compositions, including products of Barron functions.\nThe space of measurable weight functions which render the path-norm \fnite is inconve-\nniently large when considering training dynamics. To allow a natural gradient \row structure,\nwe consider the subset of functions with L2-weights. This is partially motivated by the ob-\nservation that the L2-norm of weights controls the path-norm.\nLemma 29. Iff=faL;:::;a0, then\nkfk\nL;:::;\n0;K\u0014inf(\nkaLkL2(\u0019L)L\u00001Y\ni=0kaikL2(\u0019i+1\n\u0019i)\f\f\f\fais.t.f=faL;:::;a0onK)\nProof. We sketch the proof for two hidden layers.\nZ\n\n2\u0002\n1\u0002\n0\f\fa2\n\u00122a1\n\u00122\u00121a0\n\u00121\u00120\f\fd\u00122d\u00121d\u00120=Z\n\n2\u0002\n1\u0002\n0\f\fa2\n\u00122a0\n\u00121\u00120\f\f\f\fa1\n\u00122\u00121\f\fd\u00122d\u00121d\u00120\n\u0014\u0012Z\n\n2\u0002\n1\u0002\n0\f\fa2\n\u00122a0\n\u00121\u00120\f\f2d\u00122d\u00121d\u00120\u00131\n2\u0012Z\n\n2\u0002\n1\u0002\n0\f\fa1\n\u00122\u00121\f\f2d\u00122d\u00121d\u00120\u00131\n2\n=\u0012Z\n\n2\f\fa2\n\u00122\f\f2d\u00122\u00131\n2\u0012Z\n\n2\u0002\n1\f\fa1\n\u00122\u00121\f\f2d\u00122d\u00121\u00131\n2\u0012Z\n\n1\u0002\n0\f\fa0\n\u00121\u00120\f\f2d\u00121d\u00120\u00131\n2\n:\n23\nNote that the proof is entirely speci\fc to network-like architectures and does not gener-\nalize to tree-like structures. We de\fne the measure of complexity of a function (which is not\na norm) as\nQ(f) = inf(\nkaLkL2(\u0019L)L\u00001Y\ni=0kaikL2(\u0019i+1\n\u0019i)\f\f\f\fais.t.f=faL;:::;a0onK)\n:\nWe can equip the class of neural networks modeled on index spaces f\nigwith a metric\nwhich respects the parameter structure.\nRemark 30.The space of arbitrarily wide neural networks with L2-weights can be metrized\nwith the Hilbert-weight metric\ndHW(f;g) = inf(LX\n`=0ka`;f\u0000a`;gkL2(\u0019`)\f\f\f\faL;f;:::;a0;gs.t.f=faL;f;:::;a0;f; g=faL;g;:::;a0;gand\nka`;hk\u0011 LY\ni=0kai;hkL2! 
1\nL+1\n\u00142Q(h)1\nL+1forh2ff;gg)\n:\n(33)\nThe normalization across layers is required to ensure that functions in which one layer can\nbe chosen identical do not have zero distance by shifting all weight to the one layer.\nWe refer to these metric spaces (metric vector spaces if \n i= (0;1) for alli\u00151) as\nmulti-layer spaces . They are complete metric spaces.\n3.6 Depth separation in multi-layer networks\nWe can ask how much larger L-layer space is compared to ( L\u00001)-layer space. A satisfying\nanswer to this question is still outstanding, but partial answers have been found, mostly\nconcerning the di\u000berences between networks with one and two hidden layers.\nExample 31.The structure theorem for Barron functions Theorem 10 shows that functions\nwhich are non-di\u000berentiable on a curved hypersurface are not Barron. In particular, this\nincludes distance functions from hypersurfaces like\nf(x) = dist(x;Sd\u00001) =\f\f1\u0000kxk`2\f\f:\nIt is obvious, however, that fis the composition of two Barron functions and therefore can\nbe represented exactly by a neural network with two hidden layers. This criterion is easy to\ncheck in practice and therefore of greater potential impact than mere existence results. But\nit says nothing about approximation by \fnite neural networks.\nRemark 32.Neural networks with two hidden layers are signi\fcantly more \rexible than\nnetworks with one hidden layer. In fact, there exists an activation function \u001b:R!Rwhich\n24\nis analytic, strictly monotone increasing and satis\fes lim z!\u00061\u001b(z) =\u00061, but also has the\nsurprising property that the \fnitely parametrized family\nH=(3dX\ni=1ai\u001b 3dX\nj=1bij\u001b\u0000\nwT\njx\u0001!\f\f\f\fai;bij2R)\nis dense in the space of continuous functions on any compact set K\u001aRd[66]. The proof\nis based on the Kolmogorov-Arnold representation theorem, and the function is constructed\nin such a way that the translations\n\u001b(z\u00003m) +\u0015m\u001b(z\u0000(3m+ 1)) +\u0016m\u001b(z\u0000(3m+ 2))\n\u000em\nform2Nform a countable dense subset of C0[0;1] for suitable \u0015m;\u0016m;\u000em2R. In particular,\nthe activation function is virtually impossible to use in practice. However, the result shows\nthat any approximation-theoretic analysis must be speci\fc to certain activation functions\nand that common regularity requirements are not su\u000ecient to arrive at a uni\fed theory.\nExample 33.A separation result like this can also be obtained with standard activation\nfunctions. If fis a Barron function, then there exists an fm(x) =Pm\ni=1ai\u001b(wT\nix) such that\nkf\u0000fmkL2(P)\u0014Cpm:\nIn [38], the authors show that there exists a function fsuch that\nkf\u0000fmkL2(P)\u0015\u0016cm\u00001=(d\u00001):\nbutf=g\u000ehwhereg;hare Barron functions (whose norm grows like a low degree polynomial\nind). The argument is based on Parseval's identity and neighbourhood growth in high\ndimensions. Intuitively, the authors argue that kfm\u0000fkL2(Rd)=k^fm\u0000^fkL2(Rd)and that\nthe Fourier transform of \u001b(wTx) is concentrated on the line generated by w. If^fis a radial\nfunction, its Fourier transform is radial as well and f(x) =g(jxj) can be chosen such that gis\na Barron function and the Fourier transform of fhas signi\fcant L2-mass in high frequencies.\nSince small neighborhoods B\"(wi) only have mass \u0018\"d, we see that ^fand ^fmcannot be\nclose unless mis very large in high dimension. 
To make this intuition precise, some technical\narguments are required since x7!\u001b(wTx) is not an L2-function.\nThere are some results on functions which can be approximated better with signi\fcantly\ndeeper networks than few hidden layers, but a systematic picture is still missing. To the\nbest of our knowledge, there are no results for the separation between LandL+ 1 hidden\nlayers.\n3.7 Tradeo\u000bs between learnability and approximation\nExample 33 and Remark 32 can be used to establish more: If ~fis a Barron function and\nkf\u0000~fkL2(P)<\", then there exists a dimension-dependent constant cd>0 such thatk~fkB\u0015\n25\ncd\"\u0000d\u00003\n2, i.e.fcannot be approximated to high accuracy by functions of low Barron norm.\nTo see this, choose msuch that\u0016c\n4m\u00001=(d\u00001)\u0014\"\u0014\u0016c\n2m\u00001=(d\u00001)and letfmbe a network with\nmneurons. Then\n2\"\u0014\u0016cm\u00001=d\u0014kfm\u0000fkL2\u0014kfm\u0000~fkL2+k~f\u0000fkL2\u0014kfm\u0000~fkL2+\";\nso iffmis a network which approximates the Barron function ~f, we see that\n\u0016c\n4m\u00001=(d\u00001)\u0014\"\u0014kfm\u0000~fkL2\u0014k~fkB\nm1=2:\nIn particular, we conclude that\nk~fkB\u0015\u0016c\n4m1=2\u00001=(d\u00001)=\u0016c\n4md\u00003\n2(d\u00001)=\u00124\n\u0016c\u0013d\u00001\n2\u0010\u0016c\n4m\u00001=(d\u00001)\u0011\u0000d\u00003\n2\u0015\u00124\n\u0016c\u0013d\u00001\n2\n\"\u0000d\u00003\n2:\nThis is a typical phenomenon shared by allmachine learning models of low complexity.\nLetPdbe Lebesgue-measure on the d-dimensional unit cube (which we take as the archetype\nof a truly `high-dimensional' data distribution).\nTheorem 34. [35] LetZbe a Banach space of functions such that the unit ball BZinZ\nsatis\fes\nES\u0018Pn\ndRadS(BZ)\u0014Cdpn;\ni.e. the Rademacher complexity on a set of Nsample decays at the optimal rate in the number\nof data points. Then\n1. The Kolmogorov width ofZin the space of Lipschitz functions with respect to the\nL2-metric is low in the sense that\nlim sup\nt!1\"\nt2\nd\u00002 sup\nf(0)=0;fis 1-Lipschitzinf\nkgkZ\u0014tkf\u0000gkL2(Pd)#\n\u0015\u0016c>0:\n2. There exists a function fwith Lispchitz constant 1such thatf(0) = 0 , but\nlim sup\nt!1\u0014\nt\rinf\nkgkZ\u0014tkf\u0000gkL2(Pd)\u0015\n=1 8\r >2\nd\u00002:\nThis resembles the result of [65] for approximation by ridge functions under a constraint\non the number of parameters, whereas here a complexity bound is assumed instead and no\nspeci\fc form of the model is prescribed (and the result thus applies to multi-layer networks\nas well).\nThus function spaces of low complexity are `poor approximators' for general classes like\nLipschitz functions since we need functions of large Z-norm to approximate functions to\na prescribed level of accuracy. This includes all function spaces discussed in this review,\nalthough some spaces are signi\fcantly larger than others (e.g. there is a large gap between\nreproducing kernel Hilbert spaces, Barron space, and tree-like three layer space).\n26\n3.8 A priori vs. a posteriori estimates\nThe error estimate given above should be compared with a more typical form of estimate in\nthe machine learning literature:\nR(^\u0012n)\u0000^Rn(^\u0012n).k^\u0012nkpn(34)\nwherek^\u0012nkis some suitably de\fned norm. 
Aside from the fact that (16) gives a bound\non the total generalization error and (34) gives a bound on the generalization gap, there is\nan additional important di\u000berence: The right hand side of (16) depends only on the target\nfunctionf\u0003, not the output of the machine learning model. The right hand side of (34)\ndepends only on the output of the machine learning model, not the target function. In\naccordance with the practice in \fnite element methods, we call (16) a priori estimates and\n(34) a posteriori estimates.\nHow good and how useful are these estimates? A priori estimates discussed here tell us in\nparticular that there exist functions in the hypothesis space for which the generalization error\ndoes not su\u000ber from the CoD if the target function lies in the appropriate function space.\nIt is likely that these estimates are nearly optimal in the sense that they are comparable to\nMonte Carlo error rates (except for multi-layer neural networks, see below). It is possible\nto improve these estimates, for example using standard tricks for Monte Carlo sampling for\nthe approximation error and local Rademacher complexity [13] for the estimation error .\nHowever, these would only improve the exponents in mandnbyO(1=d) which diminishes\nfor larged.\nRegarding the quantitative value of these a priori estimates, the situation is less satisfac-\ntory. The \frst issue is that the values of the norms are not known since the target function is\nnot known. One can estimate these values using the output of the machine learning model,\nbut this does not give us rigorous bounds. An added di\u000eculty is that the norms are de\fned\nas an in\fmum over all possible representations, the output of the machine learning model\nonly gives one representation. But even if we use the exact values of these norms, the bounds\ngiven above are still not tight. For one thing, the use of Monte Carlo sampling to control\nthe approximation error does not give a tight bound. This by itself is an interesting issue.\nThe obvious advantage of the a posteriori bounds is that they can be readily evaluated\nand give us quantitative bounds for the size of the generalization gap. Unfortunately this\nhas not been borned out in practice: The values of these norms are so enormous that these\nbounds are almost always vacuous [29].\nIn \fnite element methods, a posteriori estimates are used to help re\fning the mesh in\nadaptive methods. Ideally one would like to do the same for machine learning models.\nHowever, little has been done in this direction.\nSince the a posteriori bounds only controls the generalization gap, not the full general-\nization error, it misses an important aspect of the whole picture, namely, the approximation\nerror. In fact, by choosing a very strong norm, one can always obtain estimates of the type\nin (34). However, with such strongly constrained hypothesis space, the approximation error\n27\nmight be huge. This is indeed the case for some of the norm-based a posteriori estimates in\nthe literature. See [30] for examples.\n3.9 What's not known?\nHere is a list of problems that we feel are most pressing.\n1. Sharper estimates. There are two obvious places where one should be able to improve\nthe estimates.\n\u000fIn the current analysis, the approximation error is estimated with the help of Monte\nCarlo sampling. This gives us the typical size of the error for randomly picked param-\neters. However, in machine learning, we are only interested in the smallest error. 
This\nis a clean mathematical problem that has not received attention.\n\u000fThe use of Rademacher complexity to bound the generalization gap neglects the fact\nthat the integrand in the de\fnition of the population risk (as well as the empirical risk)\nshould itself be small, since it is the point-wise error. This should be explored further.\nWe refer to [13] for some results in this direction.\n2. The rate for the approximation error for functions in multi-layer spaces is not the\nsame as Monte Carlo. Can this be improved?\nMore generally, it is not clear whether the multi-layer spaces de\fned earlier are the right\nspaces for multi-layer neural networks.\n3. We introduced two kinds of \row-induced spaces for residual neural networks. To\ncontrol the Rademacher complexity, we had to introduce the weighted path norm. This\nis not necessary for the approximation theory. It is not clear whether similar Rademacher\ncomplexity estimates can be proved for the spaces D2or~D2.\n4. Function space for convolutional neural networks that fully explores the bene\ft of\nsymmetry.\n5. Another interesting network structure is the DenseNet [46]. Naturally one is interested\nin the natural function space associated with DenseNets.\n4 The loss function and the loss landscape\nIt is a surprise to many that simple gradient descent algorithms work quite well for optimizing\nthe loss function in common machine learning models. To put things into perspective, no\none would dream of using the same kind of algorithms for protein folding { the energy\nlandscape for am typical protein is so complicated with lots of local minima that gradient\ndescent algorithms will not go very far. The fact that they seem to work well for machine\nlearning models strongly suggests that the landscapes for the loss functions are qualitatively\ndi\u000berent. An important question is to quantify exactly how the loss landscape looks like.\nUnfortunately, theoretical results on this important problem is still quite scattered and\nthere is not yet a general picture that has emerged. But generally speaking, the current\n28\nunderstanding is that while it is possible to \fnd arbitrarily bad examples for \fnite sized\nneural networks, their landscape simpli\fes as the size increases.\nTo begin with, the loss function is non-convex and it is easy to cook up models for\nwhich the loss landscape has bad local minima (see for example [82]). Moreover, it has been\nsuggested that for small size two-layer ReLU networks with teacher networks as the target\nfunction, nearly all target networks lead to spurious local minima, and the probability of hit-\nting such local minima is quite high [74]. It has also been suggested the over-parametrization\nhelps to avoid these bad spurious local minima.\nThe loss landscape of large networks can be very complicated. [79] presented some\namusing numerical results in which the authors demonstrated that one can \fnd arbitrarily\ncomplex patterns near the global minima of the loss function. Some theoretical results along\nthis direction were proved in [25]. Roughly speaking, it was shown that for any \">0, every\nlow-dimensional pattern can be found in a loss surface of a su\u000eciently deep neural network,\nand within the pattern there exists a point whose loss is within \"of the global minimum\n[25].\nOn the positive side, a lot is known for linear and quadratic neural network models. 
For\nlinear neural network models, it has been shown that [49]: (1) every local minimum is a\nglobal minimum, (2) every critical point that is not a global minimum is a saddle point, (3)\nfor networks with more than three layers there exist \\bad\" saddle points where the Hessian\nhas no negative eigenvalue and (4) there are no such bad saddle points for networks with\nthree layers.\nSimilar results have been obtained for over-parametrized two-layer neural network models\nwith quadratic activation [81, 27]. In this case it has been shown under various conditions\nthat (1) all local minima are global and (2) all saddle points are strict, namely there are\ndirections of strictly negative curvature. Another interesting work for the case of quadratic\nactivation function is [68]. In the case when the target function is a \\single neuron\", [68]\ngives an asymptotically exact (as d!1 ) characterization of the number of training data\nsamples needed for the global minima of the empirical loss to give rise to a unique function,\nnamely the target function.\nFor over-parametrized neural networks with smooth activation function, the structure\nof the global minima is characterized in the paper of Cooper [23]: Cooper proved that the\nlocus of the global minima is generically (i.e. possibly after an arbitrarily small change to\nthe data set) a smooth m\u0000ndimensional submanifold of Rmwheremis the number of free\nparameters in the neural network model and nis the training data size.\nThe set of minimizers of a convex function is convex, so unless the manifold of minimizers\nis an a\u000ene subspace of Rm, the loss function is non-convex (as can be seen by convergence to\npoor local minimizers from badly chosen initial conditions). If there are two sets of weights\nsuch that both achieve minimal loss, but not all weights on the line segment between them\ndo, then somewhere on the connecting line there exists a local maximum of the loss function\n(restricted to the line). In particular, if the manifold of minimizers is curved at a point,\nthen arbitrarily closedby there exists a point where the Hessian of the loss function has a\nnegative eigenvalue. This lack of positive de\fniteness in training neural networks is observed\n29\nin numerical experiments as well. The curvature of the set of minimizers partially explains\nwhy the weights of neural networks converge to di\u000berent global minimizers depending on the\ninitial condition.\nFor networks where half of the weights in every layer can be set to zero if the remaining\nweights are rescaled appropriately, the set of global minimizers is connected [54].\nWe also mention the interesting empirical work reported in [57]. Among other things,\nit was demonstrated that adding skip connection has a drastic e\u000bect on smoothing the loss\nlandscape.\n4.1 What's not known?\nWe still lack a good mathematical tool to describe the landscape of the loss function for\nlarge neural networks. In particular, are there local minima and how large is the basin of\nattraction of these local minima if they do exist?\nFor \fxed, \fnite dimensional gradient \rows, knowledge about the landscape allows us to\ndraw conclusions about the qualitative behavior of the gradient descent dynamics indepen-\ndent of the detailed dynamics. 
In machine learning, the dimensionality of the loss function\nism, the number of free parameters, and we are interested in the limit as mgoes to in\fnity.\nSo it is tempting to ask about the landscape of the limiting (in\fnite dimensional) problem.\nIt is not clear whether this can be formulated as a well-posed mathematical problem.\n5 The training process: convergence and implicit reg-\nularization\nThe results of Section 3 tell us that good solutions do exist in the hypothesis space. The\namazing thing is that simple minded gradient descent algorithms are able to \fnd them, even\nthough one might have to be pretty good at parameter tuning. In comparison, one would\nnever dream of using gradient descent to perform protein folding, since the landscape of\nprotein folding is so complicated with lots of bad local minima.\nThe basic questions about the training process are:\n\u000fOptimization: Does the training process converge to a good solution? How fast?\n\u000fGeneralization: Does the solution selected by the training process generalize well? In\nparticular, is there such thing as \\implicit regularization\"? What is the mechanism for\nsuch implicit regularization?\nAt the moment, we are still quite far from being able to answering these questions com-\npletely, but an intuitive picture has started to emerge.\nWe will mostly focus on the gradient descent (GD) training dynamics. But we will touch\nupon some important qualitative features of other training algorithms such as stochastic\ngradient descent (SGD) and Adam.\n30\n5.1 Two-layer neural networks with mean-\feld scaling\n\\Mean-\feld\" is a notion in statistical physics that describes a particular form of interaction\nbetween particles. In the mean-\feld situation, particles interact with each other only through\na mean-\feld which every particle contributes to more or less equally. The most elegant\nmean-\feld picture in machine learning is found in the case of two-layer neural networks:\nIf one views the neurons as interacting particles, then these particles only interact with\neach other through the function represented by the neural network, the mean-\feld in this\ncase. This observation was \frst made in [20, 70, 73, 78]. By taking the hydrodynamic limit\nfor the gradient \row of \fnite neuron systems, these authors obtained a continuous integral\ndi\u000berential equation that describes the evolution of the probability measure for the weights\nassociated with the neurons.\nLet\nI(u1;\u0001\u0001\u0001;um) =^Rn(fm);uj= (aj;wj); fm(x) =1\nmX\njaj\u001b(wT\njx)\nDe\fne the GD dynamics by:\nduj\ndt=\u0000mrujI(u1;\u0001\u0001\u0001;um);uj(0) =u0\nj; j2[m] (35)\nLemma 35. Let\n\u001a(du;t) =1\nmX\nj\u000euj(t)\nthen the GD dynamics (35) can be expressed equivalently as:\n@t\u001a=r(\u001arV); V =\u000e^Rn\n\u000e\u001a(36)\n(36) is the mean-\feld equation that describes the evolution of the probability distribution\nfor the weights associated with each neuron. The lemma above simply states that (36) is\nsatis\fed for \fnite neuron systems.\nIt is well-known that (36) is the gradient \row of ^Rnunder the Wasserstein metric. This\nbrings the hope that the mathematical tools developed in the theory of optimal transport\ncan be brought to bear for the analysis of (36) [86]. In particular, we would like to use these\ntools to study the qualitative behavior of the solutions of (36) as t!1 . 
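As a concrete reference point, here is a minimal finite-particle sketch of the dynamics (35); by Lemma 35 its empirical measure evolves according to the mean-field equation (36). The single-neuron target, the data, the step size and the explicit Euler discretization below are all illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 50, 200                               # input dim, neurons, samples (placeholders)
X = rng.standard_normal((n, d))
y = np.maximum(X @ np.ones(d) / np.sqrt(d), 0.0)   # a single-neuron target, for illustration

a = rng.standard_normal(m)                          # outer weights a_j
W = rng.standard_normal((m, d))                     # inner weights w_j

def risk(a, W):
    f = (np.maximum(X @ W.T, 0.0) @ a) / m          # f_m(x_i) = (1/m) sum_j a_j ReLU(w_j . x_i)
    return 0.5 * np.mean((f - y) ** 2)

dt, steps = 0.05, 2000                              # placeholder step size / horizon
for _ in range(steps):
    pre = X @ W.T                                   # (n, m) pre-activations
    act = np.maximum(pre, 0.0)
    err = (act @ a) / m - y                         # residuals f_m(x_i) - y_i
    grad_a = act.T @ err / (n * m)                  # d R_n / d a_j
    grad_W = ((err[:, None] * (pre > 0) * a).T @ X) / (n * m)   # d R_n / d w_j
    # mean-field scaling (35): du_j/dt = -m * grad_{u_j} R_n, explicit Euler step
    a -= dt * m * grad_a
    W -= dt * m * grad_W

print("final empirical risk:", risk(a, W))
```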
Unfortunately\nthe most straightforward application of the results from optimal transport theory requires\nthat the risk functional be displacement convex [69], a property that rarely holds in machine\nlearning (see however the example in [48]). As a result, less than expected has been obtained\nusing optimal transport theory.\nThe one important result, due originally to Chizat and Bach [20], is the following. We\nwill state the result for the population risk.\nTheorem 36. [20, 21, 87] Letf\u001atgbe a solution of the Wasserstein gradient \row such that\n31\n\u000f\u001a0is a probability distribution on the cone \u0002 :=fjaj2\u0014jwj2g.\n\u000fEvery open cone in \u0002has positive measure with respect to \u001a0.\nThen the following are equivalent.\n1. The velocity potentials\u000eR\n\u000e\u001a(\u001at;\u0001)converge to a unique limit as t!1 .\n2.R(\u001at)decays to the global in\fmum value as t!1 .\nIf either condition is met, the unique limit of R(\u001at)is zero. If \u001atalso converges in the\nWasserstein metric, then the limit \u001a1is a minimizer.\nIntuitively, the theorem is slightly stronger than the statement that R(\u001at) converges to\nits in\fmum value if and only if its derivative converges to zero in a suitable sense (if we\napproach from a \row of omni-directional distributions). The theorem is more a statement\nabout a training algorithm than the energy landscape, and speci\fc PDE arguments are used\nin its proof. A few remarks are in order:\n1. There are further technical conditions for the theorem to hold.\n2. Convergence of subsequences of\u000eR\n\u000e\u001a(\u001at;\u0001) is guaranteed by compactness. The question\nof whether they converge to a unique limit can be asked independently of the initial\ndistribution and therefore may be more approachable by standard means.\n3. The \frst assumption on \u001a0is a smoothness assumption needed for the existence of the\ngradient \row.\n4. The second assumption on \u001a0is called omni-directionality . It ensures that \u001acan shift\nmass in any direction which reduces risk. The cone formulation is useful due to the\nhomogeneity of ReLU.\nThis is almost the only non-trivial rigorous result known on the global convergence of\ngradient \rows in the nonlinear regime. In addition, it reveals the fact that having full support\nfor the probability distribution (or something to that e\u000bect) is an important property that\nhelps for the global convergence.\nThe result is insensitive to whether a minimizer of the risk functional exists (i.e. whether\nthe target function is in Barron space). If the target function is not in Barron space, conver-\ngence may be very slow since the Barron norm increases sub-linearly during gradient descent\ntraining.\nLemma 37. [87, Lemma 3.3] If f\u001atgevolves by the Wasserstein-gradient \row of R, then\nlim\nt!1kf\u001atkB\nt= 0:\n32\nIn high dimensions, even reasonably smooth functions are di\u000ecult to approximate with\nfunctions of low Barron norm in the sense of Theorem 34. Thus a \\dynamic curse of di-\nmensionality\" may a\u000bect gradient descent training if the target function is not in Barron\nspace.\nTheorem 38. Consider population and empirical risk expressed by the functionals\nR(\u001a) =1\n2Z\nX(f\u001a\u0000f\u0003)2(x)dx; ^Rn(\u001a) =1\n2nnX\ni=1(f\u001a\u0000f\u0003)2(xi)\nwhere the pointsfxigare i.i.d samples from the uniform distribution on X. 
There exists f\u0003\nwith Lipschitz constant and L1-norm bounded by 1such that the parameter measures f\u001atg\nde\fned by the 2-Wasserstein gradient \row of either ^RnorRsatisfy\nlim sup\nt!1\u0002\nt\rR(\u001at)\u0003\n=1\nfor all\r >4\nd\u00002.\nFigure 2: The rate of convergence of the gradient descent dynamics for Barron and non-Barron\nfunctions on a logarithmic scale. Convergence rates for Barron functions seem to be dimension-\nindependent. This is not the case for non-Barron functions.\n5.2 Two-layer neural networks with conventional scaling\nIn practice, people often use the conventional scaling (instead of the mean-\feld scaling)\nwhich takes the form:\nfm(x;a;B) =mX\nj=1aj\u001b(bT\njx) =aT\u001b(Bx);\nA popular initialization [55, 43] is as follows\naj(0)\u0018N(0;\f2);bj(0)\u0018N(0;I=d)\n33\nwhere\f= 0 or 1=pm. For later use, we de\fne the Gram matrix K= (Kij)2Rn\u0002n:\nKi;j=1\nnEb\u0018\u00190[\u001b(bTxi)\u001b(bTxj)]:\nWith this \\scaling\", the one case where a lot is known is the so-called highly over-\nparametrized regime. There is both good and bad news in this regime. The good news is\nthat one can prove exponential convergence to global minima of the empirical risk.\nTheorem 39 ([28]).Let\u0015n=\u0015min(K)and assume \f= 0. For any\u000e2(0;1), assume that\nm&n2\u0015\u00004\nn\u000e\u00001ln(n2\u000e\u00001). Then with probability at least 1\u00006\u000ewe have\n^Rn(a(t);B(t))\u0014e\u0000m\u0015nt^Rn(a(0);B(0)) (37)\nNow the bad news: the generalization property of the converged solution is no better\nthan that of the associated random feature model, de\fned by freezing fbjg=fbj(0)g, and\nonly trainingfaig.\nThe \frst piece of insight that the underlying dynamics in this regime is e\u000bectively linear\nis given in [26]. [47] termed the e\u000bective kernel the \\neural tangent kernel\". Later it was\nproved rigorously that in this regime, the entire GD path for the two-layer neural network\nmodel is uniformly close to that of the associated random feature model [32, 5].\nTheorem 40 ([32]).LetB0=B(0). Denote by fm(x;~a(\u0001);B0))the solutions of GD dy-\nnamics for the random feature model. Under the same setting as Theorem 39, we have\nsup\nx2Sd\u00001jfm(x;a(t);B(t))\u0000fm(x;~a(t);B0)j.(1 +p\nln(1=\u000e))2\u0015\u00001\nnpm: (38)\nIn particular, there is no \\implicit regularization\" in this regime.\nEven though convergence of the training process can be proved, overall this is a dis-\nappointing result. At the theoretical level, it does not shed any light on possible implicit\nregularization. At the practical level, it tells us that high over-parametrization is indeed a\nbad thing for these models.\nWhat happens in practice? Do less over-parametrized regimes exist for which implicit\nregularization does actually happen? Some insight has been gained from the numerical study\nin [63].\nLet us look at a simple example: the single neuron target function: f\u0003\n1(x) =\u001b(b\u0003\u0001\nx);b\u0003=e1. This is admittedly a very simple target function. Nevertheless, the training\ndynamics for this function is quite representative of target functions which can be accurately\napproximated by a small number of neurons (i.e. e\u000bectively \\over-parametrized\").\nFigure 3 shows some results from [63]. First note that the bottom two \fgures represent\nexactly the kind of results stated in Theorem 40. The top two \fgures suggest that the\ntraining dynamics displays two phases. 
In the first phase, the training dynamics follows closely that of the associated random feature model. This phase quickly saturates. The training dynamics for the neural network model is able to reduce the training (and test) error further in the second phase through a quenching-activation process: most neurons are quenched in the sense that their outer-layer coefficients a_j become very small, with the exception of only a few activated ones.

Figure 3: The dynamic behavior of learning the single-neuron target function using GD with conventional scaling. We also show the results of the random feature model as a comparison. Top: the case m=30, n=200, d=19 in the mildly over-parametrized regime. Bottom: the case m=2000, n=200, d=19 in the highly over-parametrized regime. The learning rate is \eta = 0.001. (a,c) The dynamic behavior of the training and test error. (b,d) The outer-layer coefficient of each neuron for the converged solution.

This can also be seen from the m-n hyper-parameter space. Shown in Figure 4 are the heat maps of the test errors under the conventional and the mean-field scaling, respectively. We see that the test error does not change much as m changes for the mean-field scaling. In contrast, there is a clear "phase transition" in the heat map for the conventional scaling when the training dynamics undergoes a change from neural network-like to random feature-like behavior. This is further demonstrated in Figure 5: the performance of the neural network models indeed becomes very close to that of the random feature model as the network width m increases. At the same time, the path norm undergoes a sudden increase from the mildly over-parametrized/under-parametrized regime (m \approx n/(d+1)) to the highly over-parametrized regime (m \approx n). This may provide an explanation for the increase of the test error.

Figure 4: How the network width affects the test error of GD solutions. The test errors are given in logarithmic scale. These experiments are conducted on the single-neuron target function with d=20 and learning rate \eta = 0.0005. GD is stopped when the training loss is smaller than 10^{-7}. The two dashed lines correspond to m = n/(d+1) (left) and m = n (right), respectively. Left: conventional scaling; right: mean-field scaling.

Figure 5: Here n=200, d=20 and the learning rate is 0.001. GD is stopped when the training error is smaller than 10^{-8}. The two dashed lines correspond to m = n/(d+1) (left) and m = n (right), respectively. Several independent experiments were performed; the mean (shown as the line) and the variance (shown as the shaded region) are reported. Left: test error; right: path norm.

The existence of different kinds of behavior with drastically different generalization properties is one of the reasons behind the fragility of deep learning: if the network architecture happens to fall into the random feature-like regime, the performance will deteriorate.
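The quantity \lambda_n = \lambda_{min}(K) appearing in Theorem 39 can be estimated by Monte Carlo over the initialization distribution of b. Below is a minimal sketch with ReLU features and toy data on the sphere; all sizes are placeholders, not the settings of the experiments above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_mc = 10, 100, 20000                        # input dim, data size, MC samples (placeholders)
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # put the toy data on the sphere S^{d-1}

B = rng.standard_normal((n_mc, d)) / np.sqrt(d)    # b ~ N(0, I/d), as in the initialization above
S = np.maximum(X @ B.T, 0.0)                       # sigma(b^T x_i) for each sampled b
K = (S @ S.T) / (n_mc * n)                         # K_ij approx (1/n) E_b[sigma(b.x_i) sigma(b.x_j)]

print("estimated lambda_min(K) =", np.linalg.eigvalsh(K).min())
```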
5.3 Other convergence results for the training of neural network models

Convergence of GD for linear neural networks was proved in [14, 4, 89]. [59] analyzes the training of neural networks with quadratic activation function. It is proved that a modified GD with small initialization converges to the ground truth in polynomial time if the sample size satisfies n \ge O(d r^5), where d and r are the input dimension and the rank of the target weight matrix, respectively. [80] considers the learning of a single neuron with SGD. It was proved that, as long as the sample size is large enough, SGD converges to the ground truth exponentially fast if the model also consists of a single neuron. A similar analysis for the population risk was presented in [84].

5.4 Double descent and slow deterioration for the random feature model

At the continuous (in time) level, the training dynamics for the random feature model is a linear system of ODEs defined by the Gram matrix. It turns out that the generalization behavior of this linear model is surprisingly complex, as we now discuss.

Consider a random feature model with features \{\phi(\cdot, w)\} and probability distribution \pi over the feature vectors w. Let \{w_1, w_2, \dots, w_m\} be the random feature vectors sampled from \pi. Let \Phi be an n \times m matrix with \Phi_{ij} = \phi(x_i, w_j), where \{x_1, x_2, \dots, x_n\} is the training data, and let

\hat{f}(x; a) = \sum_{k=1}^{m} a_k \phi(x, w_k),   (39)

where a = (a_1, a_2, \dots, a_m)^T are the parameters. To find a, GD is used to optimize the following least squares objective function,

\min_{a \in \mathbb{R}^m} \frac{1}{2n} \|\Phi a - y\|^2,   (40)

starting from the origin, where y is a vector containing the values of the target function at \{x_i, i = 1, 2, \dots, n\}. The dynamics of a is then given by

\frac{d}{dt} a(t) = -\frac{1}{m} \frac{\partial}{\partial a} \frac{1}{2n} \|\Phi a - y\|^2 = -\frac{1}{mn} \Phi^T (\Phi a - y).   (41)

Let \Phi = U \Sigma V^T be the singular value decomposition of \Phi, where \Sigma = \mathrm{diag}\{\lambda_1, \lambda_2, \dots, \lambda_{\min\{n,m\}}\} with \{\lambda_i\} the singular values of \Phi in descending order. Then the GD solution of (40) at time t \ge 0 is given by

a(t) = \sum_{i: \lambda_i > 0} \frac{1 - e^{-\lambda_i^2 t/(mn)}}{\lambda_i} (u_i^T y) v_i.   (42)

With this solution, we can conveniently compute the training and test error at any time t. Specifically, let B = [w_1, \dots, w_m] and, with an abuse of notation, let \phi(x, B) = (\phi(x, w_1), \dots, \phi(x, w_m))^T. The prediction function at time t is given by

\hat{f}_t(x) = \phi(x, B)^T a(t) = \sum_{i: \lambda_i > 0} \frac{1 - e^{-\lambda_i^2 t/(mn)}}{\lambda_i} (u_i^T y) (v_i^T \phi(x, B)).   (43)
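The closed-form expressions (42) and (43) are easy to evaluate numerically. Below is a minimal sketch; the ReLU feature map, the synthetic data and the target are placeholders, not the MNIST setup used in the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 10, 200, 500                         # dimensions and sizes are placeholders
X = rng.standard_normal((n, d))
W = rng.standard_normal((m, d)) / np.sqrt(d)   # random feature vectors w_k
y = np.tanh(X @ np.ones(d) / np.sqrt(d))       # placeholder target values
X_test = rng.standard_normal((1000, d))
y_test = np.tanh(X_test @ np.ones(d) / np.sqrt(d))

Phi = np.maximum(X @ W.T, 0.0)                 # Phi_ik = phi(x_i, w_k) with ReLU features
feats_test = np.maximum(X_test @ W.T, 0.0)
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)

def a_of_t(t):
    """GD solution (42), started from a(0) = 0."""
    pos = s > 0
    coef = np.zeros_like(s)
    coef[pos] = (1.0 - np.exp(-s[pos] ** 2 * t / (m * n))) / s[pos]
    return Vt.T @ (coef * (U.T @ y))

for t in [1e2, 1e6, 1e12]:
    a_t = a_of_t(t)
    train_mse = np.mean((Phi @ a_t - y) ** 2)         # training error
    test_mse = np.mean((feats_test @ a_t - y_test) ** 2)   # prediction (43) on fresh points
    print(f"t = {t:.0e}:  train MSE = {train_mse:.3e},  test MSE = {test_mse:.3e}")
```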
Shown in the left panel of Figure 6 is the test error of the minimum-norm solution (which is the limit of the GD path when initialized at 0) of the random feature model as a function of the size of the model m, for n=500 on the MNIST dataset [64]. Also shown is the smallest eigenvalue of the Gram matrix. One can see that the test error peaks at m=n, where the smallest eigenvalue of the Gram matrix becomes exceedingly small. This is the same as the "double descent" phenomenon reported in [16, 2]. But as can be seen from the figure (and discussed in more detail below), it is really a resonance kind of phenomenon, caused by the appearance of very small eigenvalues of the Gram matrix when m=n.

Shown in the right panel is the test error of the solutions obtained when gradient descent training is stopped after different numbers of steps. One curious observation is that when training is stopped at moderately large numbers of training steps, the resonance behavior hardly shows up. Only when training is continued for a very large number of steps does the resonance appear.

Figure 6: Left: the test error and the minimal eigenvalue of the Gram matrix as a function of m for n=500. Right: the test error of GD solutions obtained by running different numbers of iterations. The MNIST dataset is used.

Intuitively this is easy to understand. The large test error is caused by the small eigenvalues, because each term in (43) converges to a limit proportional to 1/\lambda_i, and this limit is large when the corresponding eigenvalue is small. However, still by (43), the contribution of the small eigenvalues enters through factors like e^{-\lambda_i^2 t/(mn)}, and therefore shows up very slowly in time.

Some rigorous analysis of this can be found in [64] and will not be repeated here. Instead we show in Figure 7 an example of the detailed dynamics of the training process. One can see that the training dynamics can be divided into three regimes. In the first regime, the test error decreases from an O(1) initial value to something small. In the second regime, which spans several decades, the test error remains small. It eventually becomes large in the third regime, when the small eigenvalues start to contribute significantly. This slow deterioration phenomenon may also happen for more complicated models, like deep neural networks, considering that the linear approximation of the loss function is effective near the global minimum, and hence in this regime the GD dynamics is similar to that of a linear model (e.g. a random feature model). This conjecture, however, needs to be confirmed by further theoretical and numerical study.

Figure 7: [test error as a function of the number of training steps; the three regimes I, II and III described above are marked]

One important consequence of this resonance phenomenon is that it affects the training of two-layer neural networks under the conventional scaling. For example, in Figure 4 the test error of neural networks under the conventional scaling peaks around m=n, with m being the width of the network. Though at m=n the neural network has more parameters than n, it still suffers from this resonance phenomenon since, as we saw before, in the early phase of training the GD path for the neural network model follows closely that of the corresponding random feature model.

5.5 Global minima selection

In the over-parametrized regime, there usually exist many global minima. Different optimization algorithms may select different global minima. For example, it has been widely observed that SGD tends to pick "flatter" solutions than GD [44, 51]. A natural question is which global minima are selected by a particular optimization algorithm.

An interesting illustration of this is shown in Figure 8. Here GD was used to train on the FashionMNIST dataset to near completion, and was then suddenly replaced by SGD.
Instead of finishing the last step of the previous training process, the subsequent SGD path escapes from the vicinity of the minimum that GD was converging to, and eventually converges to a different global minimum, with a slightly smaller test error [51].

Figure 8: Escape phenomenon in fitting corrupted FashionMNIST. The GD solution is unstable for the SGD dynamics, hence the path escapes after GD is suddenly replaced by SGD. Left: training accuracy; right: test accuracy.

This phenomenon can be well explained by considering the dynamic stability of the optimizers, as was done in [88]. It was found that the set of dynamically stable global minima is different for different training algorithms. In the example above, the global minimum that GD was converging to was unstable for SGD.

The gist of this phenomenon can be understood from the following simple one-dimensional optimization problem,

f(x) = \frac{1}{2n} \sum_{i=1}^{n} a_i x^2

with a_i \ge 0 for all i \in [n]. The minimum is at x=0. For GD with learning rate \eta to be stable, the following has to hold:

|1 - \eta a| \le 1.

SGD is given by

x_{t+1} = x_t - \eta a_\xi x_t = (1 - \eta a_\xi) x_t,   (44)

where \xi is a random variable that satisfies P(\xi = i) = 1/n. Hence we have

\mathbb{E}\, x_{t+1} = (1 - \eta a)^t x_0,   (45)
\mathbb{E}\, x_{t+1}^2 = [(1 - \eta a)^2 + \eta^2 s^2]^t x_0^2,   (46)

where a = \sum_{i=1}^{n} a_i / n and s = \sqrt{\sum_{i=1}^{n} a_i^2 / n - a^2}. Therefore, for SGD to be stable at x=0, we not only need |1 - \eta a| \le 1, but also (1 - \eta a)^2 + \eta^2 s^2 \le 1. In particular, SGD can only converge under the additional requirement that s \le 2/\eta.

The quantity a is called the sharpness, and s is called the non-uniformity in [88]. The above simple argument suggests that the global minima selected by SGD tend to be more uniform than the ones selected by GD.

This sharpness-non-uniformity criterion can be extended to multiple dimensions. It turns out that this theoretical prediction is confirmed quite well by practical results. Figure 9 shows the sharpness and non-uniformity of the SGD solutions for a VGG-type network on the FashionMNIST dataset. We see that the predicted upper bound for the non-uniformity is both valid and quite sharp.

Figure 9: The sharpness-non-uniformity diagram for the minima selected by SGD applied to a VGG-type network for the FashionMNIST dataset. Different colors correspond to different sets of hyper-parameters; B and \mu in the figure stand for the batch size and the learning rate, respectively. The dashed line shows the predicted bound for the non-uniformity. One can see that (1) the data with different colors roughly lie below the corresponding dashed line and (2) the prediction given by the dashed line is quite sharp.
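The stability criterion above can be checked directly. The following sketch (with arbitrary placeholder coefficients a_i) computes the sharpness a, the non-uniformity s, and the per-step growth factors of the GD iterate and of the SGD second moment from (46), for a few step sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
coeffs = rng.uniform(0.5, 3.0, size=n)           # the a_i >= 0 (arbitrary placeholders)
a = coeffs.mean()                                 # sharpness
s = np.sqrt(np.mean(coeffs ** 2) - a ** 2)        # non-uniformity
print(f"sharpness a = {a:.3f}, non-uniformity s = {s:.3f}")

for eta in [0.1, 0.5, 1.0, 1.9 / a]:
    gd_factor = abs(1 - eta * a)                       # per-step factor of the GD iterate
    sgd_factor = (1 - eta * a) ** 2 + eta ** 2 * s ** 2   # per-step factor of E[x_t^2], from (46)
    print(f"eta = {eta:.3f}:  GD stable: {gd_factor <= 1},  "
          f"SGD second-moment stable: {sgd_factor <= 1},  s <= 2/eta: {s <= 2 / eta}")
```

For step sizes close to the GD stability threshold one typically sees minima that are stable for GD but not for SGD, which is the escape mechanism illustrated in Figure 8.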
5.6 Qualitative properties of adaptive gradient algorithms

Adaptive gradient algorithms are a family of optimization algorithms widely used for training neural network models. These algorithms use a coordinate-wise scaling of the update direction (the gradient, or the gradient with momentum) according to the history of the gradients. The two most popular adaptive gradient algorithms are RMSprop and Adam, whose update rules are:

RMSprop:
v_{t+1} = \alpha v_t + (1 - \alpha) (\nabla f(x_t))^2
x_{t+1} = x_t - \eta \frac{\nabla f(x_t)}{\sqrt{v_{t+1}} + \epsilon}   (47)

Adam:
v_{t+1} = \alpha v_t + (1 - \alpha) (\nabla f(x_t))^2
m_{t+1} = \beta m_t + (1 - \beta) \nabla f(x_t)
x_{t+1} = x_t - \eta \frac{m_{t+1} / (1 - \beta^{t+1})}{\sqrt{v_{t+1} / (1 - \alpha^{t+1})} + \epsilon}   (48)

In (47) and (48), \epsilon is a small constant used to avoid division by 0, usually taken to be 10^{-8}; it is added to each component of the vector in the denominators of these equations. The division should also be understood as being component-wise. Here we will focus on Adam. For further discussion, we refer to [62].

One important tool for understanding these adaptive gradient algorithms is their continuous limit. Different continuous limits can be obtained from different ways of taking the limit. If we let \eta tend to 0 while keeping \alpha and \beta fixed, the limiting dynamics is

\dot{x} = -\frac{\nabla f(x)}{|\nabla f(x)| + \epsilon}.   (49)

This becomes signGD when \epsilon = 0. On the other hand, if we let \alpha = 1 - a\eta and \beta = 1 - b\eta and take \eta \to 0, then we obtain the limiting dynamics

\dot{v} = a (\nabla f(x)^2 - v)
\dot{m} = b (\nabla f(x) - m)
\dot{x} = -\frac{(1 - e^{-bt})^{-1} m}{\sqrt{(1 - e^{-at})^{-1} v} + \epsilon}   (50)

In practice the loss curves of Adam can be very complicated. The left panel of Figure 11 shows an example. Three obvious features can be observed from these curves:
1. Fast initial convergence: the loss curve decreases very fast, sometimes even super-linearly, at the early stage of training.
2. Small oscillations: the fast initial convergence is followed by oscillations around the minimum.
3. Large spikes: spikes are sudden increases of the value of the loss. They are followed by an oscillating recovery. Different from the small oscillations, spikes make the loss much larger, and the interval between two spikes is longer.

The fast initial convergence can be partly explained by the convergence property of signGD, which attains the global minimum in finite time for strongly convex objective functions. Specifically, we have the following proposition [62].

Proposition 41. Assume that the objective function satisfies the Polyak-Lojasiewicz (PL) condition \|\nabla f(x)\|_2^2 \ge \mu f(x) for some positive constant \mu. Define the continuous signGD dynamics

\dot{x}_t = -\mathrm{sign}(\nabla f(x_t)).

Then we have

f(x_t) \le \left( \sqrt{f(x_0)} - \frac{\sqrt{\mu}}{2} t \right)^2.
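For reference, here is a direct transcription of the Adam update (48) applied to a toy strongly convex quadratic; the objective, step size, horizon and hyper-parameters are placeholders. With \alpha = 1 - a\eta and \beta = 1 - b\eta this is the discrete scheme whose limiting behavior is described by (50) and whose (a, b)-dependent regimes are discussed next.

```python
import numpy as np

def adam(grad, x0, eta=1e-3, alpha=0.999, beta=0.9, eps=1e-8, steps=5000):
    """Adam iteration (48): v is the second-moment average, m the first-moment average."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    m = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        v = alpha * v + (1 - alpha) * g ** 2
        m = beta * m + (1 - beta) * g
        # bias-corrected, component-wise update
        x = x - eta * (m / (1 - beta ** t)) / (np.sqrt(v / (1 - alpha ** t)) + eps)
    return x

# toy strongly convex objective f(x) = 0.5 * x^T A x (placeholder)
A = np.diag([1.0, 10.0, 100.0])
x_final = adam(lambda x: A @ x, x0=np.ones(3))
print("final iterate:", x_final, " loss:", 0.5 * x_final @ A @ x_final)
```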
The small oscillations and spikes may potentially be explained by linearization around the stationary point, though in this case the linearization is quite tricky due to the singular nature of the stationary point [62].

The performance of Adam depends sensitively on the values of \alpha and \beta. This has also been studied in [62]. Recall that \alpha = 1 - a\eta and \beta = 1 - b\eta. Focusing on the region where a and b are not too large, three regimes with different behavior patterns were observed in the hyper-parameter space of (a, b):
1. The spike regime: this happens when b is sufficiently larger than a. In this regime large spikes appear in the loss curve, which makes the optimization process unstable.
2. The oscillation regime: this happens when a and b have similar magnitude (or are of the same order). In this regime the loss curve exhibits fast and small oscillations. A small loss and a stable loss curve can be achieved.
3. The divergence regime: this happens when a is sufficiently larger than b. In this regime the loss curve is unstable and usually diverges after some period of training. This regime should be avoided in practice since the training loss stays large.

In Figure 10 we show one typical loss curve for each regime for a typical neural network model.

Figure 10: The three typical behavior patterns of Adam trajectories. Experiments are conducted on a fully-connected neural network whose three hidden layers have widths 256, 128 and 64, respectively. The training data is taken from 2 classes of CIFAR-10 with 1000 samples per class. The learning rate is \eta = 0.001. The first row shows the loss curve for a total of 1000 iterations; the second row shows part of the loss curve (the last 200 iterations for the oscillation and divergence regimes, and iterations 400-800 for the spike regime). Left: a=1, b=100, large spikes appear in the loss curve. Middle: a=10, b=10, the loss is small and oscillates very fast, and the amplitude of the oscillation is also small. Right: a=100, b=1, the loss is large and blows up.

In Figure 11, we study Adam on a neural network model and show the final average training loss for different values of a and b. One can see that Adam can achieve a very small loss in the oscillation regime. The algorithm is not stable in the spike regime and may blow up in the divergence regime. These observations suggest that in practice one should take a \approx b, with small values of a and b. Experiments on more complicated models, such as ResNet18, also support these conclusions [62].

Figure 11: Left: the training curve of Adam for a multi-layer neural network model on CIFAR-10. The network has 3 hidden layers whose widths are 256-256-128, with learning rate 1e-3 and (\alpha, \beta) = (0.9, 0.999) as default; 2 classes are picked from CIFAR-10 with 500 images in each class, and the square loss function is used. Middle: heat map of the average training loss (in logarithmic scale) of Adam on a multi-layer neural network model. The loss is averaged over the last 1000 iterations; a and b range from 0.1 to 100 in logarithmic scale. Right: the different behavior patterns.

5.7 Exploding and vanishing gradients for multi-layer neural networks

Exploding and vanishing gradients are one of the main obstacles for the training of multi (many)-layer neural networks. Intuitively it is easy to see why this might be an issue. The gradient of the loss function with respect to the parameters involves a product of many matrices. Such a product can easily grow or diminish very fast as the number of factors, namely the number of layers, increases.

Consider the multi-layer neural network with depth L:

z_0 = x,
z_l = \sigma(W_l z_{l-1} + c_l), \quad l = 1, 2, \dots, L,   (51)
f(x; \theta) = z_L,

where x \in \mathbb{R}^d is the input data, \sigma is the ReLU activation and \theta := (W_1, c_1, \dots, W_L, c_L) denotes the collection of parameters. Here W_l \in \mathbb{R}^{m_l \times m_{l-1}} is the weight matrix, c_l \in \mathbb{R}^{m_l} is the bias, and m_l is the width of the l-th hidden layer.

To make quantitative statements, we consider the case when the weights are i.i.d.
random\nvariables. This is typically the case when the neural networks are initialized. This problem\nhas been studied in [41, 42]. Below is a brief summary of the results obtained in these papers.\nFix two collections of probability measures \u0016= (\u0016(1);\u0016(2);\u0001\u0001\u0001;\u0016(L)),\u0017= (\u0017(1);\u0017(2);\u0001\u0001\u0001;\u0017(L))\nonRsuch that\n\u000f\u0016(l),\u0017(l)are symmetric around 0 for every 1 \u0014l\u0014L;\n\u000fthe variance of \u0016(l)is 2=nl\u00001;\n\u000f\u0017(l)has no atoms.\n44\nWe consider the random network obtained by:\nWi;j\nl\u0018\u0016(l);ci\nl\u0018\u0017(l); i:i:d: (52)\nfor anyi= 1;2;\u0001\u0001\u0001;mlandj= 1;2;\u0001\u0001\u0001;ml\u00001, i.e. the weights and biases at layer lare\ndrawn independently from \u0016(l),\u0017(l)respectively. Let\nZp;q:=@zq\nL\n@xp; p = 1;2;\u0001\u0001\u0001;m0=d; q = 1;2;\u0001\u0001\u0001;mL: (53)\nTheorem 42. We have\nE\u0002\nZ2\np;q\u0003\n=1\nd: (54)\nIn contrast, the fourth moment of Zp;qisexponential inP\nl1\nml:\n2\nd2exp \n1\n2L\u00001X\nl=11\nml!\n\u0014E\u0002\nZ4\np;q\u0003\n\u0014C\u0016\nd2exp \nC\u0016L\u00001X\nl=11\nml!\n; (55)\nwhereC\u0016>0is a constant only related to the (fourth) moment of \u0016=f\u0016(l)gL\nl=1.\nFurthermore, for any K2Z+,3\u0014K < min\n1\u0014l\u0014L\u00001fmlg, we have\nE\u0002\nZ2K\np;q\u0003\n\u0014C\u0016;K\ndKexp \nC\u0016;KL\u00001X\nl=11\nml!\n; (56)\nwhereC\u0016;K>0is a constant depending on Kand the (\frst 2K) moments of \u0016.\nObviously, by Theorem 42, to avoid the exploding and vanishing gradient problem, we\nwant the quantityPL\u00001\nl=11\nmlto be small. This is also borne out from numerical experiments\n[42, 41].\n5.8 What's not known?\nThere are a lot that we don't know about training dynamics. Perhaps the most elegant\nmathematical question is the global convergence of the mean-\feld equation for two-layer\nneural networks.\n@t\u001a=r(\u001arV); V =\u000eR\n\u000e\u001a\nThe conjecture is that:\n\u000fif\u001a0is a smooth distribution with full support, then the dynamics described by this\n\row should converge and the population risk Rshould converge to 0;\n\u000fif the target function f\u0003lies in the Barron space, then the Barron norm of the output\nfunction stays uniformly bounded.\n45\nSimilar statements should also hold for the empirical risk.\nAnother core question is when two-layer neural networks can be trained e\u000ecientlyi. The\nwork [61, 77] shows that there exist target functions with Barron norms of size poly(d), such\nthat the training is exponentially slow in the statistical-query (SQ) setting [50]. A concrete\nexample isf\u0003(x) = sin(dx1) withx\u0018N(0;Id), for which it is easy to verify that the Barron\nnorm off\u0003ispoly(d). However, [77] shows that the gradients of the corresponding neural\nnetworks are exponentially small, i.e. O(e\u0000d). Thus even for a moderate d, it is impossible\nto evaluate the gradients accurately on a \fnite-precision machine due to the \roating-point\nerror. Hence gradient-based optimizers are unlikely to succeed. This suggests that the\nBarron space is very likely too large for studying the training of two-layer neural networks.\nIt is an open problem to identify the right function space, such that the functions can be\nlearned in polynomial time by two-layer neural networks.\nThere are (at least) two possibilities for why the training becomes slow for certain (Bar-\nron) target functions in high dimension:\n1. 
The training is slow in the continuous model due to the large parameter space or\n2. The training is fast in the continuous model with dimension-independent rates, but\nthe discretization becomes more di\u000ecult in high dimension.\nIt is unknown which of these explanations applies. Under strong conditions, it is known\nthat parameters which are initialized according to a law \u00190and trained by gradient descent\nperform better at time t>0 than parameters which are drawn from the distribution \u0019tgiven\nby the Wasserstein gradient \row starting at \u00190[19]. This might suggest that also gradient\n\rows with continuous initial condition may not reduce risk at a dimension-independent rate.\nOne can ask similar questions for multi-layer neural networks and residual neural net-\nworks. However, it is much more natural to formulate them using the continuous formulation.\nWe will postpone this to a separate article.\nEven less is known for the training dynamics of neural network models under the con-\nventional scaling, except for the high over-parametrized regime. This issue is all the more\nimportant since conventional scaling is the overwhelming scaling used in practice.\nIn particular, since the neural networks used in practice are often over-parametrized,\nand they seem to perform much better than random feature models, some form of implicit\nregularization must be at work. Identifying the presence and the mechanism of such implicit\nregularization is a very important question for understanding the success of neural network\nmodels.\nWe have restricted our discussion to gradient descent training. What about stochastic\ngradient descent and other training algorithms?\n6 Concluding remarks\nVery brie\ry, let us summarize the main theoretical results that have been established so far.\niThe authors would like to thank Jason Lee for helpful conversations on the topic.\n46\n1. Approximation/generalization properties of hypothesis space:\n\u000fFunction spaces and quantitative measures for the approximation properties for\nvarious machine learning models. The random feature model is naturally associ-\nated with the corresponding RKHS. In the same way, the Barron norm is identi-\n\fed as the natural measure associated with two-layer neural network models (note\nthat this is di\u000berent from the spectral norm de\fned by Barron). For residual net-\nworks, the corresponding quantities are de\fned for the \row-induced spaces. For\nmulti-layer networks, a promising candidate is provided by the multi-layer norm.\n\u000fGeneralization error estimates of regularized model. Dimension-independent er-\nror rates have been established for these models. Except for multi-layer neural\nnetworks, these error rates are comparable to Monte Carlo. They provide a way\nto compare these di\u000berent machine learning models and serve as a benchmark for\nstudying implicit regularization.\n2. Training dynamics for highly over-parametrized neural network models:\n\u000fExponential convergence for the empirical risk.\n\u000fTheir generalization properties are no better than the corresponding random fea-\nture model or kernel method.\n3. 
Mean-\feld training dynamics for two-layer neural networks:\n\u000fIf the initial distribution has full support and the GD path converges, then it\nmust converge to a global minimum.\nA lot has also been learned from careful numerical experiments and partial analytical\narguments, such as:\n\u000fOver-parametrized networks may be able to interpolate any training data.\n\u000fThe \\double descent\" and \\slow deterioration\" phenomenon for the random feature\nmodel and their e\u000bect on the corresponding neural network model.\n\u000fThe qualitative behavior of adaptive optimization algorithms.\n\u000fThe global minima selection mechanism for di\u000berent optimization algorithms.\n\u000fThe phase transition of the generalization properties of two-layer neural networks under\nthe conventional scaling.\nWe have mentioned many open problems throughout this article. Besides the rigorous\nmathematical results that are called for, we feel that carefully designed numerical experi-\nments should also be encouraged. In particular, they might give some insight on the di\u000berence\nbetween neural network models of di\u000berent depth (for example, two-layer and three-layer\nneural neworks), and the di\u000berence between scaled and unscaled residual network models.\n47\nOne very important issue that we did not discuss much is the continuous formulation of\nmachine learning. We feel this issue deserves a separate article when the time is ripe.\nAcknowledgement: . This work is supported in part by a gift to the Princeton Univer-\nsity from iFlytek.\nReferences\n[1] Emmanuel Abbe and Colin Sandon. Poly-time universality and limitations of deep learning.\narXiv preprint arXiv:2001.02992 , 2020.\n[2] Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in\nneural networks. arXiv preprint arXiv:1710.03667 , 2017.\n[3] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American mathe-\nmatical society , 68(3):337{404, 1950.\n[4] Sanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu. A convergence analysis of gradient\ndescent for deep linear neural networks. In International Conference on Learning Representa-\ntions , 2018.\n[5] Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang.\nOn exact computation with an in\fnitely wide neural net. arXiv preprint arXiv:1904.11955 ,\n2019.\n[6] Antonio Au\u000enger, G\u0013 erard Ben Arous, and Ji\u0014 r\u0013 \u0010 \u0014Cern\u0012 y. Random matrices and complexity of\nspin glasses. Communications on Pure and Applied Mathematics , 66(2):165{201, 2013.\n[7] Francis Bach. Breaking the curse of dimensionality with convex neural networks. Journal of\nMachine Learning Research , 18(19):1{53, 2017.\n[8] Francis Bach. On the equivalence between kernel quadrature rules and random feature expan-\nsions. The Journal of Machine Learning Research , 18(1):714{751, 2017.\n[9] Jean Barbier, Florent Krzakala, Nicolas Macris, L\u0013 eo Miolane, and Lenka Zdeborov\u0013 a. Optimal\nerrors and phase transitions in high-dimensional generalized linear models. Proceedings of the\nNational Academy of Sciences , 116(12):5451{5460, 2019.\n[10] Andrew R Barron. Neural net approximation. In Proc. 7th Yale Workshop on Adaptive and\nLearning Systems , volume 1, pages 69{72, 1992.\n[11] Andrew R. Barron. Universal approximation bounds for superpositions of a sigmoidal function.\nIEEE Transactions on Information theory , 39(3):930{945, 1993.\n[12] Andrew R Barron and Jason M Klusowski. 
Approximation and estimation for high-dimensional\ndeep learning networks. arXiv:1809.03090 , 2018.\n[13] Peter L Bartlett, Olivier Bousquet, Shahar Mendelson, et al. Local Rademacher complexities.\nThe Annals of Statistics , 33(4):1497{1537, 2005.\n48\n[14] Peter L. Bartlett, David P. Helmbold, and Philip M. Long. Gradient descent with identity ini-\ntialization e\u000eciently learns positive de\fnite linear transformations by deep residual networks.\narXiv preprint arXiv:1802.06093 , 2018.\n[15] Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds\nand structural results. Journal of Machine Learning Research , 3:463{482, 2002.\n[16] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-\nlearning practice and the classical bias{variance trade-o\u000b. Proceedings of the National Academy\nof Sciences , 116(32):15849{15854, 2019.\n[17] Olivier Bousquet and Andr\u0013 e Elissee\u000b. Stability and generalization. Journal of machine learning\nresearch , 2(Mar):499{526, 2002.\n[18] Joan Bruna and St\u0013 ephane Mallat. Invariant scattering convolution networks. IEEE transac-\ntions on pattern analysis and machine intelligence , 35(8):1872{1886, 2013.\n[19] Zhengdao Chen, Grant Rotsko\u000b, Joan Bruna, and Eric Vanden-Eijnden. A dynamical central\nlimit theorem for shallow neural networks. Advances in Neural Information Processing Systems ,\n33, 2020.\n[20] Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for over-\nparameterized models using optimal transport. In Advances in neural information processing\nsystems , pages 3036{3046, 2018.\n[21] Lenaic Chizat and Francis Bach. Implicit bias of gradient descent for wide two-layer neural\nnetworks trained with the logistic loss. arXiv preprint arXiv:2002.04486 , 2020.\n[22] Anna Choromanska, Mikael Hena\u000b, Michael Mathieu, G\u0013 erard Ben Arous, and Yann LeCun.\nThe loss surfaces of multilayer networks. In AISTATS , 2015.\n[23] Yaim Cooper. The loss landscape of overparameterized neural networks. arXiv preprint\narXiv:1804.10200 , 2018.\n[24] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of\ncontrol, signals and systems , 2(4):303{314, 1989.\n[25] Wojciech Marian Czarnecki, Simon Osindero, Razvan Pascanu, and Max Jaderberg. A\ndeep neural network's loss surface contains every low-dimensional pattern. arXiv preprint\narXiv:1912.07559 , 2019.\n[26] Amit Daniely. SGD learns the conjugate kernel class of the network. In Advances in Neural\nInformation Processing Systems , pages 2422{2430, 2017.\n[27] Simon Du and Jason Lee. On the power of over-parametrization in neural networks with\nquadratic activation. In International Conference on Machine Learning , pages 1329{1338,\n2018.\n[28] Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably\noptimizes over-parameterized neural networks. In International Conference on Learning Rep-\nresentations , 2019.\n49\n[29] Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds\nfor deep (stochastic) neural networks with many more parameters than training data. In\nProceedings of the Thirty-Third Conference on Uncertainty in Arti\fcial Intelligence, UAI ,\n2017.\n[30] Weinan E, Chao Ma, and Qingcan Wang. A priori estimates of the population risk for residual\nnetworks. arXiv preprint arXiv:1903.02154 , 2019.\n[31] Weinan E, Chao Ma, and Lei Wu. 
Barron spaces and the compositional function spaces for\nneural network models. arXiv preprint arXiv:1906.08039 , 2019.\n[32] Weinan E, Chao Ma, and Lei Wu. A comparative analysis of the optimization and generaliza-\ntion property of two-layer neural network and random feature models under gradient descent\ndynamics. Science China Mathematics, pages 1-24, 2020; arXiv:1904.04326 , 2019.\n[33] Weinan E, Chao Ma, and Lei Wu. Machine learning from a continuous viewpoint. arXiv\npreprint arXiv:1912.12777 , 2019.\n[34] Weinan E, Chao Ma, and Lei Wu. A priori estimates of the population risk for two-\nlayer neural networks. Communications in Mathematical Sciences , 17(5):1407{1425, 2019;\narXiv:1810.06397, 2018.\n[35] Weinan E and Stephan Wojtowytsch. Kolmogorov width decay and poor approximators in\nmachine learning: Shallow neural networks, random feature models and neural tangent kernels.\narXiv:2005.10807 [math.FA] , 2020.\n[36] Weinan E and Stephan Wojtowytsch. On the Banach spaces associated with multi-layer ReLU\nnetworks of in\fnite width. arXiv:2007.15623 [stat.ML] , 2020.\n[37] Weinan E and Stephan Wojtowytsch. Representation formulas and pointwise properties for\nbarron functions. arXiv preprint arXiv:2006.05982 , 2020.\n[38] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In\nConference on learning theory , pages 907{940, 2016.\n[39] Nicolas Fournier and Arnaud Guillin. On the rate of convergence in Wasserstein distance of\nthe empirical measure. Probability Theory and Related Fields , 162(3-4):707{738, 2015.\n[40] Sebastian Goldt, Galen Reeves, Marc M\u0013 ezard, Florent Krzakala, and Lenka Zdeborov\u0013 a. The\ngaussian equivalence of generative models for learning with two-layer neural networks. arXiv\npreprint arXiv:2006.14709 , 2020.\n[41] Boris Hanin. Which neural net architectures give rise to exploding and vanishing gradients?\nInAdvances in Neural Information Processing Systems , pages 582{591, 2018.\n[42] Boris Hanin and David Rolnick. How to start training: The e\u000bect of initialization and archi-\ntecture. In Advances in Neural Information Processing Systems , pages 571{581, 2018.\n[43] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into recti\fers:\nSurpassing human-level performance on imagenet classi\fcation. In Proceedings of the IEEE\ninternational conference on computer vision , pages 1026{1034, 2015.\n50\n[44] Sepp Hochreiter and Jurgen Schmidhuber. Flat minima. Neural Computation , 9(1):1{42, 1997.\n[45] Kaitong Hu, Zhenjie Ren, David Siska, and Lukasz Szpruch. Mean-\feld Langevin dynamics\nand energy landscape of neural networks. arXiv:1905.07769 [math.PR] , 2019.\n[46] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely con-\nnected convolutional networks. In Proceedings of the IEEE conference on computer vision and\npattern recognition , pages 4700{4708, 2017.\n[47] Arthur Jacot, Franck Gabriel, and Cl\u0013 ement Hongler. Neural tangent kernel: Convergence and\ngeneralization in neural networks. In Advances in neural information processing systems , pages\n8580{8589, 2018.\n[48] Adel Javanmard, Marco Mondelli, and Andrea Montanari. Analysis of a two-layer neural\nnetwork via displacement convexity. arXiv preprint arXiv:1901.01375 , 2019.\n[49] Kenji Kawaguchi. Deep learning without poor local minima. In Advances in neural information\nprocessing systems , pages 586{594, 2016.\n[50] Michael Kearns. E\u000ecient noise-tolerant learning from statistical queries. 
Journal of the ACM\n(JACM) , 45(6):983{1006, 1998.\n[51] Nitish S. Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping T.P.\nTang. On large-batch training for deep learning: Generalization gap and sharp minima. In In\nInternational Conference on Learning Representations (ICLR) , 2017.\n[52] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Inter-\nnational Conference on Learning Representations , 2015.\n[53] Jason M Klusowski and Andrew R Barron. Risk bounds for high-dimensional ridge function\ncombinations including neural networks. arXiv preprint arXiv:1607.01434 , 2016.\n[54] Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, and\nSanjeev Arora. Explaining landscape connectivity of low-cost solutions for multilayer nets. In\nAdvances in Neural Information Processing Systems , pages 14601{14610, 2019.\n[55] Yann A LeCun, L\u0013 eon Bottou, Genevieve B Orr, and Klaus-Robert M uller. E\u000ecient backprop.\nInNeural networks: Tricks of the trade , pages 9{48. Springer, 2012.\n[56] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward\nnetworks with a nonpolynomial activation function can approximate any function. Neural\nnetworks , 6(6):861{867, 1993.\n[57] Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the\nloss landscape of neural nets. In Advances in Neural Information Processing Systems , pages\n6389{6399, 2018.\n[58] Qianxiao Li, Long Chen, Cheng Tai, and Weinan E. Maximum principle based algorithms for\ndeep learning. The Journal of Machine Learning Research , 18(1):5998{6026, 2017.\n51\n[59] Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic regularization in over-\nparameterized matrix sensing and neural networks with quadratic activations. In Conference\nOn Learning Theory , pages 2{47, 2018.\n[60] Zhong Li, Chao Ma, and Lei Wu. Complexity measures for neural networks with general\nactivation functions using path-based norms. arXiv:2009.06132 [cs.LG] , 2020.\n[61] Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational e\u000eciency of training\nneural networks. In Advances in neural information processing systems , pages 855{863, 2014.\n[62] Chao Ma, Lei Wu, and Weinan E. A qualitative study of the dynamic behavior of adaptive\ngradient algorithms. arXiv preprint arXiv:2009.06125 , 2020.\n[63] Chao Ma, Lei Wu, and Weinan E. The quenching-activation behavior of the gradient descent\ndynamics for two-layer neural network models. arXiv preprint arXiv:2006.14450 , 2020.\n[64] Chao Ma, Lei Wu, and Weinan E. The slow deterioration of the generalization error of the\nrandom feature model. In Mathematical and Scienti\fc Machine Learning Conference , 2020.\n[65] VE Maiorov. On best approximation by ridge functions. Journal of Approximation Theory ,\n99(1):68{94, 1999.\n[66] Vitaly Maiorov and Allan Pinkus. Lower bounds for approximation by MLP neural networks.\nNeurocomputing , 25(1-3):81{91, 1999.\n[67] Y Makovoz. Uniform approximation by neural networks. Journal of Approximation Theory ,\n95(2):215{228, 1998.\n[68] Stefano Sarao Mannelli, Eric Vanden-Eijnden, and Lenka Zdeborov\u0013 a. Optimization and gen-\neralization of shallow neural networks with quadratic activation functions. arXiv preprint\narXiv:2006.15459 , 2020.\n[69] Robert J McCann. A convexity principle for interacting gases. Advances in mathematics ,\n128(1):153{179, 1997.\n[70] Song Mei, Andrea Montanari, and Phan-Minh Nguyen. 
A mean \feld view of the landscape of\ntwo-layer neural networks. Proceedings of the National Academy of Sciences , 115(33):E7665{\nE7671, 2018.\n[71] Phan-Minh Nguyen and Huy Tuan Pham. A rigorous framework for the mean \feld limit of\nmultilayer neural networks. arXiv:2001.11443 [cs.LG] , 2020.\n[72] Allan Pinkus. Approximation theory of the mlp model in neural networks. Acta numerica ,\n8(1):143{195, 1999.\n[73] Grant Rotsko\u000b and Eric Vanden-Eijnden. Parameters as interacting particles: long time con-\nvergence and asymptotic error scaling of neural networks. In Advances in neural information\nprocessing systems , pages 7146{7155, 2018.\n[74] Itay Safran and Ohad Shamir. Spurious local minima are common in two-layer relu neural\nnetworks. In International Conference on Machine Learning , pages 4433{4441, 2018.\n52\n[75] Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear\ndynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 , 2013.\n[76] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to\nalgorithms . Cambridge university press, 2014.\n[77] Ohad Shamir. Distribution-speci\fc hardness of learning neural networks. The Journal of\nMachine Learning Research , 19(1):1135{1163, 2018.\n[78] Justin Sirignano and Konstantinos Spiliopoulos. Mean \feld analysis of neural networks: A\ncentral limit theorem. arXiv preprint arXiv:1808.09372 , 2018.\n[79] Ivan Skorokhodov and Mikhail Burtsev. Loss landscape sightseeing with multi-point optimiza-\ntion. arXiv preprint arXiv:1910.03867 , 2019.\n[80] Mahdi Soltanolkotabi. Learning ReLUs via gradient descent. In Advances in neural information\nprocessing systems , pages 2007{2017, 2017.\n[81] Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the op-\ntimization landscape of over-parameterized shallow neural networks. IEEE Transactions on\nInformation Theory , 65(2):742{769, 2018.\n[82] Grzegorz Swirszcz, Wojciech Marian Czarnecki, and Razvan Pascanu. Local minima in training\nof deep networks. arXiv preprint arXiv:1611.06310 , 2016.\n[83] Cheng Tai and Weinan E. Multiscale adaptive representation of signals: I. the basic framework.\nThe Journal of Machine Learning Research , 17(1):4875{4912, 2016.\n[84] Yuandong Tian. An analytical formula of population gradient for two-layered relu network and\nits applications in convergence and critical point analysis. arXiv preprint arXiv:1703.00560 ,\n2017.\n[85] T. Tieleman and G. Hinton. Lecture 6.5|RmsProp: Divide the gradient by a running average\nof its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.\n[86] C\u0013 edric Villani. Optimal transport: old and new , volume 338. Springer Science & Business\nMedia, 2008.\n[87] Stephan Wojtowytsch. On the global convergence of gradient descent training for two-layer\nRelu networks in the mean \feld regime. arXiv:2005.13530 [math.AP] , 2020.\n[88] Lei Wu, Chao Ma, and Weinan E. How SGD selects the global minima in over-parameterized\nlearning: A dynamical stability perspective. In Advances in Neural Information Processing\nSystems , pages 8279{8288, 2018.\n[89] Lei Wu, Qingcan Wang, and Chao Ma. Global convergence of gradient descent for deep linear\nresidual networks. In Advances in Neural Information Processing Systems , pages 13389{13398,\n2019.\n[90] Lenka Zdeborov\u0013 a and Florent Krzakala. Statistical physics of inference: Thresholds and algo-\nrithms. 
Advances in Physics , 65(5):453{552, 2016.\n53\n[91] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understand-\ning deep learning requires rethinking generalization. In International Conference on Learning\nRepresentations , 2017.\nA Proofs for Section 3.1\nProof of Theorem 6 Write the random feature model as\nfm(x) =Z\na\u001e(x;w)\u001am(da;dw);\nwith\n\u001am(a;w) =1\nmmX\nj=1\u000e(a\u0000aj)\u000e(w\u0000wj):\nSince supp ( \u001am)\u0012K:= [\u0000C;C]\u0002\n, the sequence of probability measures ( \u001am) is tight. By\nProkhorov's theorem, there exists a subsequence ( \u001amk) and a probability measure \u001a\u00032P(K)\nsuch that\u001amkconverges weakly to \u001a\u0003. Due to that g(a;w;x) =a\u001e(x;w) is bounded and\ncontinuous with respect to ( a;w) for anyx2[0;1]d, we have\nf\u0003(x) = lim\nk!1Z\na\u001e(x;w)\u001amk(da;dw) =Z\na\u001e(x;w)\u001a\u0003(da;dw); (57)\nDenote bya\u0003(w) =R\na\u001a\u0003(ajw)dathe conditional expectation of agivenw. Then we have\nf\u0003(x) =Z\na\u0003(w)\u001e(x;w)\u0019\u0003(dw);\nwhere\u0019\u0003(w) =R\n\u001a(a;w)dais the marginal distribution. Since supp ( \u001a\u0003)\u0012K, it follows\nthatja\u0003(w)j\u0014C. For any bounded and continuous function g: \n7!R, we have\nZ\ng(w)\u0019\u0003(w)dw=Z\ng(w)\u001a\u0003(a;w)dwda (58)\n= lim\nk!1Z\ng(w)\u001a\u0003\nm(a;w)dadw (59)\n= lim\nk!11\nmmkX\nj=1g(w0\nj) =Z\ng(w)\u00190(w)dw: (60)\nBy the strong law of large numbers, the last equality holds with probability 1. Taking\ng(w) =a\u0003(w)\u001e(x;w), we obtain\nf\u0003(x) =Z\na\u0003(w)\u001e(x;w)\u00190(w)dw:\nTo prove Theorem 7, we \frst need the following result ([33, Proposition 7]).\n54\nProposition 43. Given training set f(xi;f(xi))gn\ni=1, for any\u000e2(0;1), we have that with\nprobability at least 1\u0000\u000eover the random sampling of fw0\njgm\nj=1, there exists a2Rmsuch\nthat\n^Rn(a)\u00142 log(2n=\u000e)\nmkfk2\nH+8 log2(2n=\u000e)\n9m2kfk2\n1;\nkak2\nm\u0014kfk2\nH+r\nlog(2=\u000e)\n2mkfk2\n1(61)\nProof of Theorem 7 Denote by ~athe solution constructed in Proposition 43. Using the\nde\fnition, we have\n^Rn(^an) +k^ankpnm\u0014^Rn(~a) +k~akpnm:\nHence,k^ankpm\u0014Cn;m:=k~akpm+pn^Rn(~a):De\fneHC:=f1\nmPm\nj=1aj\u001e(x;w0\nj) :kakpm\u0014Cg.\nThen, we have Rad S(HC).Cpn[76]. Moreover, for any \u000e2(0;1), with probability 1 \u0000\u000e\nover the choice of training data, we have\nR(^an).^Rn(^an) + RadS(HCn;m) +r\nlog(2=\u000e)\nn\n\u0014^Rn(~a) +k~akpnm+k~akpnm+^Rn(~a) +r\nlog(2=\u000e)\nn\n\u00142^Rn(~a) + 2k~akpnm+r\nlog(2=\u000e)\nn:\nBy inserting Eqn. (61), we complete the proof.\nB Proofs for Section 3.2\nProof of Theorem 12. Letf(x) =E(a;w)\u0018\u001a\u0002\na\u001b(wTx)\u0003\n. Using the homogeneity of the ReLU\nactivation function, we may assume that kwk`1\u00111 andjaj\u0011kfkB\u001a-almost everywhere.\nBy [76, Lemma 26.2], we can estimate\nE(a;w)\u0018\u001am\"\nsup\nx2[0;1]d1\nmmX\ni=1\u0000\nai\u001b(wT\nix)\u0000f(x)\u0001#\n\u00142E(a;w)\u0018\u0019mE\u0018\"\nsup\nx2[0;1]d1\nmmX\ni=1\u0018iai\u001b(wT\nix)#\n;\n55\nwhere\u0018i=\u00061 with probability 1 =2 independently of \u0018jare Rademacher variables. 
Now we\nbound\nE\u0018\"\nsup\nx2[0;1]d1\nmmX\ni=1\u0018iai\u001b(wT\nix)#\n=E\u0018\"\nsup\nx2[0;1]d1\nmmX\ni=1\u0018ijaij\u001b\u0000\nwT\nix\u0001#\n=E\u0018\"\nsup\nx2[0;1]d1\nmmX\ni=1\u0018i\u001b\u0000\njaijwT\nix\u0001#\n\u0014E\u0018\"\nsup\nx2[0;1]d1\nmmX\ni=1\u0018ijaijwT\nix#\n=1\nmE\u0018\"\nsup\nx2[0;1]dxTmX\ni=1\u0018ijaijwi#\n=1\nmE\u0018\r\r\r\r\rmX\ni=1\u0018ijaijwi\r\r\r\r\r\n`1:\nIn the \frst line, we used the symmetry of \u0018ito eliminate the sign of ai, which we then take\ninto the (positively one-homogenous) ReLU activation function. The next inequality follows\nfrom the contraction lemma for Rademacher complexities, [76, Lemma 26.9], while the \fnal\nequality follows immediately from the duality of `1- and`1-norm. Recalling that\nkyk`2\u0014kyk`1\u0014p\nd+ 1kyk`28y2Rd+1;kaiwik`1=kfkB81\u0014i\u0014m\nwe bound\nE(a;w)\u0018\u001am\"\nsup\nx2[0;1]d1\nmmX\ni=1\u0000\nai\u001b(wT\nix)\u0000f(x)\u0001#\n\u00142 sup\nkyik`1\u0014kfkBE\u0018\r\r\r\r\r1\nmmX\ni=1\u0018iyi\r\r\r\r\r\n`1\n\u00142kfkBsup\nkyik`1\u00141E\u0018\r\r\r\r\r1\nmmX\ni=1\u0018iyi\r\r\r\r\r\n`1\n\u00142p\nd+ 1kfkBsup\nkyik`2\u00141E\u0018\r\r\r\r\r1\nmmX\ni=1\u0018iyi\r\r\r\r\r\n`2\n\u00142kfkBr\nd+ 1\nm\nby using the Rademacher complexity of the unit ball in Hilbert spaces [76, Lemma 26.10].\nApplying the same argument to \u0000\u00021\nmPm\ni=1ai\u001b(wT\nix)\u0000f(x)\u0003\n, we \fnd that\nE(a;w)\u0018\u0019msup\nx2[0;1]d\f\f\f\f\f1\nmmX\ni=1\u0000\nai\u001b(wT\nix)\u0000f(x)\u0001\f\f\f\f\f\u00144kfkBr\nd+ 1\nm:\nIn particular, there exists weights ( ai;wi)m\ni=1such that the inequality is true.\n56",
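The Monte-Carlo argument that concludes in the proof above (the bound E sup_x |(1/m) \sum_i a_i \sigma(w_i^T x) - f(x)| \leq 4 \|f\|_B \sqrt{(d+1)/m}) can be illustrated numerically: subsample m atoms from a fixed Barron-type representation and measure the worst-case deviation over a point cloud. The sketch below is only an illustration under arbitrary choices (the dimension, the number of atoms, the test points, and the use of a very wide finite network as a stand-in for the population function are all assumptions made here, not taken from the paper).

```python
# Minimal sketch of the Monte-Carlo approximation rate for a Barron-type function.
# The "population" function is itself a very wide ReLU network (illustrative choice);
# we subsample m of its atoms i.i.d. and measure the sup-error on a finite point cloud.
import numpy as np

rng = np.random.default_rng(0)
d, M, n_test = 10, 10_000, 500            # illustrative sizes, not from the paper

# Atoms (a_j, w_j) with |a_j| = 1 and ||w_j||_2 = 1, playing the role of the measure pi.
w = rng.normal(size=(M, d + 1))
w /= np.linalg.norm(w, axis=1, keepdims=True)
a = rng.choice([-1.0, 1.0], size=M)

# Test points x in [0, 1]^d, extended by a constant 1 to absorb the bias term.
x = np.hstack([rng.uniform(0.0, 1.0, size=(n_test, d)), np.ones((n_test, 1))])
phi = np.maximum(x @ w.T, 0.0)            # sigma(w_j^T x) for all atoms and test points
f_full = phi @ a / M                      # the "population" function f(x)

for m in [50, 200, 800, 3200]:
    idx = rng.integers(0, M, size=m)      # i.i.d. subsample of the atoms
    f_m = phi[:, idx] @ a[idx] / m
    err = np.max(np.abs(f_m - f_full))
    print(f"m = {m:5d}   sup-error = {err:.4f}   sup-error * sqrt(m) = {err * np.sqrt(m):.3f}")
```

If the last column stays roughly of the same order as m grows, that is the heuristic signature of the O(\sqrt{(d+1)/m}) rate; a single run of course only produces one realization of the expectation appearing in the theorem.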
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
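The record above discusses the RMSprop/Adam updates (equations (47)-(48)) and the reparametrization \alpha = 1 - a\eta, \beta = 1 - b\eta used to describe the spike, oscillation and divergence regimes. A minimal sketch of how those updates could be implemented is given below; the quadratic objective, the matrix A and all numerical values are arbitrary illustrative choices, and on such a toy problem the regimes reported for neural networks need not reproduce.

```python
# Sketch of the Adam update in the notation of equations (47)-(48) above:
# alpha is the decay rate of the second-moment estimate v, beta that of the
# first-moment estimate m, and (a, b) enter through alpha = 1 - a*eta, beta = 1 - b*eta.
# The objective here is an arbitrary strongly convex quadratic, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(1)
d = 20
Q = rng.normal(size=(d, d))
A = Q @ Q.T + np.eye(d)                       # random positive definite matrix

def f(x):    return 0.5 * x @ A @ x
def grad(x): return A @ x

def adam(a, b, eta=1e-3, steps=5_000, eps=1e-8):
    alpha, beta = 1 - a * eta, 1 - b * eta    # the (a, b) parametrization from the text
    x = rng.normal(size=d)
    m = np.zeros(d)
    v = np.zeros(d)
    for t in range(1, steps + 1):
        g = grad(x)
        v = alpha * v + (1 - alpha) * g**2    # second-moment estimate, eq. (48)
        m = beta * m + (1 - beta) * g         # first-moment estimate, eq. (48)
        m_hat = m / (1 - beta**t)             # bias corrections
        v_hat = v / (1 - alpha**t)
        x = x - eta * m_hat / (np.sqrt(v_hat) + eps)
    return f(x)

for a, b in [(1, 100), (10, 10), (100, 1)]:   # the three settings shown in Figure 10
    print(f"a = {a:3d}, b = {b:3d}   final loss = {adam(a, b):.3e}")
```

Working with (a, b) directly makes it straightforward to scan the hyper-parameter plane, in the spirit of the heat-map experiment described above.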
{
"id": "zhZ3P9zY_fF",
"year": null,
"venue": "CoRR 2020",
"pdf_link": "http://arxiv.org/pdf/2007.15623v1",
"forum_link": "https://openreview.net/forum?id=zhZ3P9zY_fF",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the Banach spaces associated with multi-layer ReLU networks: Function representation, approximation theory and gradient descent dynamics",
"authors": [
"Weinan E",
"Stephan Wojtowytsch"
],
"abstract": "We develop Banach spaces for ReLU neural networks of finite depth $L$ and infinite width. The spaces contain all finite fully connected $L$-layer networks and their $L^2$-limiting objects under bounds on the natural path-norm. Under this norm, the unit ball in the space for $L$-layer networks has low Rademacher complexity and thus favorable generalization properties. Functions in these spaces can be approximated by multi-layer neural networks with dimension-independent convergence rates. The key to this work is a new way of representing functions in some form of expectations, motivated by multi-layer neural networks. This representation allows us to define a new class of continuous models for machine learning. We show that the gradient flow defined this way is the natural continuous analog of the gradient descent dynamics for the associated multi-layer neural networks. We show that the path-norm increases at most polynomially under this continuous gradient flow dynamics.",
"keywords": [],
"raw_extracted_content": "arXiv:2007.15623v1 [stat.ML] 30 Jul 2020ON THE BANACH SPACES ASSOCIATED WITH MULTI-LAYER RELU\nNETWORKS\nFUNCTION REPRESENTATION, APPROXIMATION THEORY AND GRADIE NT\nDESCENT DYNAMICS\nWEINAN E AND STEPHAN WOJTOWYTSCH\nAbstract. We develop Banach spaces for ReLU neural networks of finite de pthLand infinite\nwidth. The spaces contain all finite fully connected L-layer networks and their L2-limiting\nobjects under bounds on the natural path-norm. Under this no rm, the unit ball in the space for\nL-layer networks has low Rademacher complexity and thus favo rable generalization properties.\nFunctions in these spaces can be approximated by multi-laye r neural networks with dimension-\nindependent convergence rates.\nThe key to this work is a new way of representing functions in s ome form of expectations,\nmotivated by multi-layer neural networks. This representa tion allows us to define a new class\nof continuous models for machine learning. We show that the g radient flow defined this way is\nthe natural continuous analog of the gradient descent dynam ics for the associated multi-layer\nneural networks. We show that the path-norm increases at mos t polynomially under this\ncontinuous gradient flow dynamics.\nContents\n1. Introduction 1\n2. Generalized Barron spaces 3\n3. Banach spaces for multi-layer neural networks 9\n4. Indexed representation of arbitrarily wide neural networks 18\n5. Optimization of the continuous network model 29\n6. Conclusion 33\nReferences 35\nAppendix A. A brief review of measure theory 37\nAppendix B. On the existence and uniqueness of the gradient flow 41\n1.Introduction\nIt is well-known that neural networks can approximate any continu ous function on a compact\nset arbitrarily well in the uniform topology as the number of trainable parameters increase\n[Cyb89, Hor91, LLPS93]. However, the number and magnitude of th e parameters required may\nmake this result unfeasible for practical applications. Indeed it has been shown to be the case\nwhen two-layer neural networks are used to approximate genera l Lipschitz continuous functions\nDate: July 31, 2020.\n2020 Mathematics Subject Classification. 68T07, 46E15, 26B35, 35Q68, 34A12, 26B40.\nKey words and phrases. Barron space, multi-layer space, deep neural network, repr esentations of functions,\nmachine learning, infinitely wide network, ReLU activation , Banach space, path-norm, continuous gradientdescent\ndynamics, index representation.\n1\n2 WEINAN E AND STEPHAN WOJTOWYTSCH\n[EW20a]. It is therefore necessary to ask which functions can be ap proximated wellby neural\nnetworks, by which we mean that as the number of parameters goe s to infinity, the convergence\nrate should not suffer from the curse of dimensionality.\nIn classical approximation theory, the role of neural networks wa s taken by (piecewise) poly-\nnomials or Fourierseries and the natural function spaces were H¨ o lder spaces, (fractional) Sobolev\nspaces, or generalized versions thereof [Lor66]. 
In the high-dimen sional theories characteristic\nfor machine learning, these spaces appear inappropriate (for exa mple, approximation results of\nthe kind discussed above do not hold for these spaces) and other c oncepts have emerged, such\nas reproducing kernel Hilbert spaces for random feature models [R R08], Barron spaces for two-\nlayer neural networks [EMW19a, Bac17, EW20b, EMW19b, EW20a, E MW18, KB16], and the\nflow-induced space for residual neural network models [EMW19a].\nIn this article, we extend these ideas to networks with several hidd en (infinitely wide) layers.\nThe key is to find how functions in these spaces should be represent ed and what the right norm\nshould be. Our most important results are:\n(1) There exists a class of Banach spaces associated to multi-layer neural networks which\nhas low Rademacher complexity (i.e. multi-layer functions in these spa ces are easily\nlearnable).\n(2) The neural tree spaces introduced here are the appropriate function spaces for the cor-\nresponding multi-layer neural networks in terms of direct and inver se approximation\ntheorems.\n(3) The gradient flow dynamics is well defined in a much simpler subspac e of the correspond-\ning neural tree space. Functions in this space admit an intuitive repr esentation in terms\nof compositions of expectations. The path norm increases at most polynomially in time\nunder the natural gradient flow dynamics.\nThese results justify our choice of function representation and t he norm.\nNeural networks are parametrized by weight matrices which share indices only between ad-\njacent layers. To understand the approximation power of neural networks, we rearrange the\nindex structure of weights in a tree-like fashion and show that the a pproximation problem un-\nder path-norm bounds remains unchanged. This approach makes t he problem more linear and\neasier to handle from the approximation perspective, but is unsuita ble when describing training\ndynamics. To address this discrepancy, we introduce a subspace o f the natural function spaces\nfor very wide multi-layer neural networks (or neural trees) which automatically incorporates the\nstructure of neural networks. For this subspace, we investigat e the natural training dynamics\nand demonstrate that the path-norm increases at most polynomia lly during training.\nAlthough the function representation and function spaces are mo tivated by developing an\napproximation theory for multi-layer neural network models, once we have them, we can use\nthem as our starting point for developing alternative machine learnin g models and algorithms.\nIn particular, we can extend the program proposed in [EMW19b] on c ontinuous formulations\nof machine learning to function representations developed here. A s an example, we show that\ngradient descent training for multi-layer neural networks can be r ecovered as the discretization\nof a natural continuous gradient flow.\nThe article is organized as follows. In the remainder of the introduct ion, we discuss the\nphilosophy behind this study and the continuous approach to machin e learning. In Section 2,\nwe motivate the ‘neural tree’ approach, introduce an abstract c lass of function spaces and study\ntheir first properties. A special instance of this class tailored to mu lti-layer networks is studied\nin greater detail in Section 3. A class of function families with an explicit network structure is\nintroduced in Section 4. 
While Sections 2 and 3 are written from the ap proximation perspective,\nSection 5 is devoted to the study of gradient flow optimization of mult i-layer networks and its\nrelation to the function spaces we introduce. We conclude the artic le with a brief discussion of\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 3\nour results and some open questions in Section 6. Technical results from measure theory which\nare needed in the article are gathered in the appendix.\n1.1.Conventions and notation. LetK⊆Rdbe acompactset. Thenwedenote by C0(K)the\nspace of continuous functions on Kand byC0,α(K) the space of α-H¨ older continuous functions\nforα∈(0,1]. In particular C0,1is the space of Lipschitz-continuous functions. The norms are\ndenoted as\n/ba∇dblf/ba∇dblC0(K)= sup\nx∈K|f(x)|,[f]C0,α(K)= sup\nx,y∈K,x/\\e}atio\\slash=y|f(x)−f(y)|\n|x−y|α,/ba∇dblf/ba∇dblC0,α=/ba∇dblf/ba∇dblC0+[f]C0,α.\nSince all norms on Rdare equivalent, the space of H¨ older- or Lipschitz-continuous fun ctions does\nnot depend on the choice of norm on Rd. The H¨ older constant [ ·]C0,αhowever does depend on it,\nand using different ℓp-norms leads to a dimension-dependent factor. In this article, we c onsider\nalways consider Rdequipped with the ℓ∞-norm.\nLetXbe a Banach space. Then we denote by BXthe closed unit ball in X. Furthermore,\na review of notations, terminologies and results relating to measure theory can be found in the\nappendix.\nFrequently and without comment, we identify x∈Rdwith (x,1)∈Rd+1. This allows us to\nsimplify notation and treat affine maps as linear. In particular, for x∈Rdandw∈Rd+1we\nsimply write wTx=/summationtextd\ni=1wixi+wd+1.\n2.Generalized Barron spaces\nWe begin by reviewing multi-layer neural networks.\n2.1.Neural networks and neural trees. Afully connected L-layer neural network is a func-\ntion of the type\n(2.1) f(x) =mL/summationdisplay\niL=1aL\niLσ\nmL−1/summationdisplay\niL−1=1aL−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftiggm1/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg\n\n\nwhere the parameters aℓ\nijare referred to as the weightsof the neural network, mℓis thewidthof\ntheℓ-th layer, and σ:R→Ris a non-polynomial activation function. For the purposes of this\narticle, we take σto be the rectifiable linear unit σ(z) = ReLU( z) = max{z,0}.\nDeep neural networksare complicated functions of both their inpu txand their weights, where\ntheweightsofonelayersonlyshareanindexwithneighbouringlayers ,leadingtoparameterreuse.\nFor simplicity, consider a network with two hidden layers\nf(x) =m2/summationdisplay\ni2=1a2\ni2σ/parenleftiggm1/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg\nand note that fcan also be expressed as\nf(x) =m2/summationdisplay\ni2=1a2\ni2σ/parenleftiggm1/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1b0\ni2i1i0xi0/parenrightigg/parenrightigg\nwithb0\ni2i1i0≡a0\ni1i0. In this way, an index in the outermost layer gets its own set of para meters\nfor deeper layers, eliminating parameter sharing. The function par ameters are arranged in a\ntree-like structure rather than a network with many cross-conn ections. 
On the other hand, a\nfunction of the form\nf(x) =m2/summationdisplay\ni2=1a2\ni2σ/parenleftiggm1/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni2i1i0xi0/parenrightigg/parenrightigg\n4 WEINAN E AND STEPHAN WOJTOWYTSCH\ncan equivalently be expressed as\nf(x) =m2/summationdisplay\ni2=1a2\ni2σ\nm1m2/summationdisplay\nj1=1b1\ni2j1σ/parenleftiggd+1/summationdisplay\ni0=1b0\nj1i0xi0/parenrightigg\n\nwith\nb1\ni2j1=/braceleftigg\na1\ni2,j1−(i2−1)m1if (i2−1)m1< j1≤i2m1\n0 else, b0\nj1i0=a⌊j1/m1⌋+1,j1−⌊j1/m1⌋,i0.\nThe cost of rearranging a three-dimensional index set into a two-d imensional one is listing a\nnumber of zero-elements explicitly in the preceding layer instead of im plicitly. Conversely, if we\nrearrange a two-dimensional index set into a three-dimensional on e, we need to repeat the same\nweight multiple times. For deeper trees, the index sets become even higher-dimensional, and the\nre-arrangement introduces even more trivial branches or redun dancies. Nevertheless, we note\nthat the space of finite neural networks of depth L\nF∞:=/braceleftigg∞/summationdisplay\niL=1aL\niLσ\n∞/summationdisplay\niL−1=1aL−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftigg∞/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg\n\n\n/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleal\nij= 0 for all but finitely many i,j,l/bracerightigg\nand the space of finite neural trees of depth L\n/tildewideF∞:=/braceleftigg∞/summationdisplay\niL=1aL\niLσ\n∞/summationdisplay\niL−1=1aL−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftigg∞/summationdisplay\ni1=1a1\niL...i2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\niL...i1i0xi0/parenrightigg/parenrightigg\n\n\n/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleal\niL...ik= 0 for all but finitely many l,i1,...,iL/bracerightigg\nare identical.\nRemark 2.1.We note that this perspective is only admissible concerning approxima tion theory.\nFor gradient flow-based training algorithms, it makes a huge differen ce\n•whether parameters are reused or not,\n•which set of weights that induces a certain function is chosen, and\n•how the magnitude of the weights is distributed across the layers (u sing the invariance\nσ(z) =λ−1σ(λz) forλ >0).\nA perspective more adapted to the training of neural networks is p resented in Section 5.\nFor given weights al\nijoral\niL...il, we consider the path-norm proxy , which is defined as\n/ba∇dblf/ba∇dblpnp=/summationdisplay\niL···/summationdisplay\ni0/vextendsingle/vextendsingleaL\niL...a0\ni1i0/vextendsingle/vextendsingleor/ba∇dblf/ba∇dblpnp=/summationdisplay\niL···/summationdisplay\ni0/vextendsingle/vextendsingleaL\niL...a0\niL...i0/vextendsingle/vextendsingle\nrespectively. Knowing the weights, the sum is easy to compute and it naturally controls the\nLipschitz norm of the function f.\nWhen we train a function fto approximate values yi=f∗(xi) at data points xi, the path-\nnorm proxy controls the generalization error , as we will show below. If the path-norm proxy of f\nis very large, the function values f(xi) heavily depend on cancellations between the partial sums\nwith positive and negative weights in the outermost layer. In the ext reme case, these partial\nsums may be several ordersof magnitude larger than f(xi). 
In that situation, the function values\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 5\nf∗(x) andf(x) may be entirely different for unseen data points x, even if they are close on the\ntraining sample {xi}N\ni=1. On the other hand, we will show below that functions with low path-\nnorm proxy generalize well. Thus controlling the path-norm proxy eff ectively means controlling\nthe generalization error, either directly or indirectly. We will make th is more precise below.\nWhile the path-norm proxy is easy to compute from the weights of a n etwork, it is a quantity\nrelated to the parameterization of a function, not the function its elf. The map from the weights\nal\nijto therealization fof the network as in (2.1) is highly non-injective. The path-norm of a\nfunction fis the infimum of the path-norm proxies over all sets of weights of an L-layer neural\nnetwork which have the realization f.\n2.2.Definition of Generalized Barron Spaces. Letσbe the rectified linear unit, i.e. σ(z) =\nmax{z,0}. ReLU is a popular activation function for neural networks and has two useful prop-\nerties for us: It is positively one-homogeneous and Lipschitz contin uous with Lipschitz constant\n1.\nLetK⊆Rdbe a compact set and Xbe a Banach space such that\n(1)Xembeds continuously into the space C0,1(K) of Lipschitz-functions on Kand\n(2) the closed unit ball BXinXis closed in the topology of C0(K).\nRecall the following corollary to the Arzel` a-Ascoli theorem.\nLemma2.2. [Dob10,Satz2.42] Letun:K→Rbe a sequence of functions such that /ba∇dblun/ba∇dblC0,1(K)≤\n1. Then there exists u∈C0,1(K)and a subsequence unksuch that unk→ustrongly in C0,α(K)\nfor allα <1and\n/ba∇dblu/ba∇dblC0,1(K)≤liminf\nk→∞/ba∇dblunk/ba∇dblC0,1(K)≤1.\nThusBXis pre-compact in the separable Banach space C0(K). Since BXisC0-closed, it is\ncompact, so in particular a Polish space. A brief review of measure th eory in Polish spaces and\nrelated topics used throughout the article is given in Appendix A.\nLetµbe a finite signed measure on the Borel σ-algebra of BX(with respect to the C0-norm).\nThenµis a signed Radon measure. The vector-valued function\nBX→C0(K), g/ma√sto→σ(g)\nis continuous and thus µ-integrable in the sense of Bochner integrals. We define\nfµ=/integraldisplay\nBXσ/parenleftbig\ng(·)/parenrightbig\nµ(dg)\n/ba∇dblf/ba∇dblX,K= inf/braceleftbig\n/ba∇dblµ/ba∇dblM(BX):µ∈ M(BX) s.t.f=fµonK/bracerightbig\n(2.2)\nBX,K=/braceleftbig\nf∈C0(K) :/ba∇dblf/ba∇dblX,K<∞/bracerightbig\n.\nHereM(BX) denotes the space of (signed) Radon measures on BX. The first integral can\nequivalently be considered as a Lebesgue integral pointwise for eve ryx∈Kor as a Bochner\nintegral. We will show below that BX,Kis a normed vector space of (Lipschitz-)continuous\nfunctions on K. We call BX,Kthe generalized Barron space modelled on X.\nRemark 2.3.The construction of the function space BX,Kabove resembles the approach to Bar-\nron spaces for two-layer networks [Bac17, EW20b, EMW19a, EMW18]. Note tha t Barron spaces\nare distinct from the class of functions considered by Barron in [Bar 93], which is sometimes\nreferred to as Barron class . 
While Barron spaces are specifically designed for applications con-\ncerning neural networks, the Barron class is defined in terms of sp ectral properties and a subset\nof Barron space for almost every activation function of practical importance.\n6 WEINAN E AND STEPHAN WOJTOWYTSCH\nExample 2.4.IfXis the spaceof affinefunctions from RdtoR(which is isomorphicto Rd+1), the\nBX,Kis the usualBarronspacefor two-layerneural networksas desc ribed in [EMW18, EMW19a,\nEW20b].\nDue to Lemma 2.2, we may choose X=C0,1(K).\nExample 2.5.IfX=C0,1(K), thenBX,K=C0,1(K) and the norms are equivalent to within a\nfactor of two. For f∈C0,1(K), we represent\nf=/ba∇dblf/ba∇dblC0,1(K)σ/parenleftbiggf\n/ba∇dblf/ba∇dblC0,1(K)/parenrightbigg\n−/ba∇dblf/ba∇dblC0,1(K)σ/parenleftbiggf\n/ba∇dblf/ba∇dblC0,1(K)/parenrightbigg\n=/integraldisplay\nBXσ(g)/parenleftbigg\n/ba∇dblf/ba∇dblC0,1·δf\n/bardblf/bardblC0,1−/ba∇dblf/ba∇dblC0,1·δ−f\n/bardblf/bardblC0,1/parenrightbigg\n(dg).\nThese examples are on opposite sides of the spectrum with Xbeing either the least complex\nnon-trivial space or the largest admissible space. Spaces of deep n eural networks lie somewhere\nbetween those extremes.\nRemark 2.6.For the classical Barron space, we usually consider measures supp orted on the unit\nsphere in the finite-dimensional space X. IfXis infinite-dimensional, typically only the unit ball\ninXis closed (and thus compact) in C0, but not the unit sphere. For mathematical convenience,\nwe choose the compact setting.\n2.3.Properties. Let us establish some first properties of generalized Barron space s.\nTheorem 2.7. The following are true.\n(1)BX,Kis a Banach-space.\n(2)X ֒−→ BX,Kand/ba∇dblf/ba∇dblBX,K≤2/ba∇dblf/ba∇dblX.\n(3)BX,K֒−→C0,1(K)and the closed unit ball of BX,Kis a closed subset of C0(K).\nProof.SinceX ֒−→C0,1(K), we know that there exist C1,C2>0 such that\n/ba∇dblg/ba∇dblC0(K)≤C1/ba∇dblg/ba∇dblX,[g]C0,1(K)≤C2/ba∇dblg/ba∇dblX∀g∈X.\nBanach space. By construction, BX,Kis isometric to the quotient space M(BX)/NKwhere\nNK=/braceleftbigg\nµ∈ M(BX)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/integraldisplay\nBXσ/parenleftbig\ng(x)/parenrightbig\nµ(dg) = 0∀x∈K/bracerightbigg\n.\nIn particular, BX,Kis a normed vector space with the norm /ba∇dbl·/ba∇dblX,K. The map\nM(BX)→C0(K), µ/ma√sto→fµ=/integraldisplay\nBXσ(g)µ(dg)\nis continuous as/vextenddouble/vextenddouble/vextenddouble/vextenddouble/integraldisplay\nBXσ(g)µ(dg)/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nC0(K)≤/integraldisplay\nBX/ba∇dblg/ba∇dblC0(K)|µ|(dg)≤C1/ba∇dblµ/ba∇dblM(BX)\nby the properties of Bochner spaces. Thus NKis the kernel of a continuous linear map, i.e. a\nclosed closed subspace. We conclude that BX,Kis a Banach space [Bre11, Proposition 11.8].\nXembeds into BX,K.Forg∈Xwith/ba∇dblg/ba∇dblX= 1 consider µ=δg−δ−gand observe that\nfµ=σ(g)−σ(−g) =g,/ba∇dblµ/ba∇dblM(BX)= 2.\nThe general case follows by homogeneity.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 7\nBX,Kembeds into C0,1.We have already shown that /ba∇dblfµ/ba∇dblC0(K)≤C1/ba∇dblµ/ba∇dblM(BX). By taking\nthe infimum over µ, we find that /ba∇dblf/ba∇dblC0(K)≤R/ba∇dblf/ba∇dblBX,K. 
Furthermore, for any x/\\e}atio\\slash=y∈Kwe\nhave\n|fµ(x)−fµ(x′)| ≤/integraldisplay\nBX/vextendsingle/vextendsingleσ/parenleftbig\ng(x)/parenrightbig\n−σ/parenleftbig\ng(x′)/parenrightbig/vextendsingle/vextendsingle|µ|(dg)\n≤/integraldisplay\nBX/vextendsingle/vextendsingleg(x)−g(x′)/vextendsingle/vextendsingle|µ|(dg)\n≤/integraldisplay\nBX[g]C0,1|x−x′||µ|(dg)\n≤C2/ba∇dblµ/ba∇dblM(BX)|x−x′|\nWe can now take the infimum over µ.\nNow assume that ( fn)n∈Nis a sequence such that /ba∇dblfn/ba∇dblX,K≤1 for all n∈N. Choose a\nsequence of measures µnsuch that fn=fµnand/ba∇dblµn/ba∇dbl ≤1+1\nnfor alln∈N. By the compactness\ntheorem for Radon measures (see Theorem A.11 in the appendix), t here exists a subsequence\nµnkand a Radon measure µonBXsuch that µnk⇀ µas Radon measures and /ba∇dblµ/ba∇dbl ≤1.\nBy definition, the weak convergence of Radon measures implies that/integraldisplay\nBXF(g)µnk(dg)→/integraldisplay\nBXF(g)µ(dg)∀F∈C(BX).\nUsingF(g) =σ(g(x)), we find that fµnk→fµpointwise. In particular, if fµnconverges to a\nlimit˜funiformly, then ˜f=f∈BBX,K, i.e. the unit ball of BX,Kis closedin the C0-topology. /square\nThe last property establishes that BX,Ksatisfies the same properties which we imposed on\nX, i.e. we can repeat the construction and consider BBX,K,K.\nRemark 2.8.We have shown in [EW20b] that if Kis an infinite set, Barron space is generally\nneither separable nor reflexive. In particular, BX,Kis not expected to have either of these\nproperties in the more general case.\n2.4.Rademacher complexities. We show that generalized Barron spaces have a favorable\nproperty from the perspective of statistical learning theory.\nA convenient (and sometimes realistic) assumption is that all data sa mples accessible to a\nstatistical learner are drawn from a distribution Pindependently. The pointwise Monte-Carlo\nerror estimate follows from the law of large numbers which shows tha t for a fixed function fand\ndata distribution P, we have\n/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleE(X1,...,XN)∼πN/bracketleftiggN/summationdisplay\ni=1f(Xi)−/integraldisplay\nf(x)P(dx)/bracketrightigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle≤Cf√\nN\nTypically, the uniform error over a function class is much larger than the pointwise error. For\nexample for the class of one-Lipschitz functions\n/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleE(X1,...,XN)∼πNsup\n[f]C0,1≤1/bracketleftiggN/summationdisplay\ni=1f(Xi)−/integraldisplay\nf(x)P(dx)/bracketrightigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle=E(X1,...,XN)∼πN/bracketleftigg\nW1/parenleftigg\nP,1\nNN/summationdisplay\ni=1δXi/parenrightigg/bracketrightigg\nis the expected 1-Wasserstein distance between Pand the empirical measure of Nindependent\nsample points drawn from it. If Pis the uniform distribution on [0 ,1]d, this decays like N−1/d\nand thus much slower than N−1/2[FG15, EW20a].\nFor Barron-typespaces, the Monte-Carloerrorrate maybe att aineduniformly on the unit ball\nofBX,K. This is established using the Rademacher complexity of a function cla ss. Rademacher\ncomplexities essentially decouple the sign and magnitude of oscillations around the mean by\n8 WEINAN E AND STEPHAN WOJTOWYTSCH\nintroducing additional randomness in a problem. For general inform ation on Rademacher com-\nplexities, see [SSBD14, Chapter 26].\nDefinition 2.9. LetS={x1,...,x N}be a set of points in K. 
TheRademacher complexity of\nH ⊆C0,1(K) onSis defined as\n(2.3) Rad( H;S) =Eξ/bracketleftigg\nsup\nh∈H1\nNN/summationdisplay\ni=1ξih(xi)/bracketrightigg\nwhere the ξiare iid random variables which take the values 1 and −1 with probability 1 /2 each.\nTheξiare either referred to as symmetric Bernoulli or Rademacher varia bles, depending on\nthe author.\nTheorem 2.10. Denote by Fthe unit ball of BX,K. LetSbe any sample set in Rd. Then\nRad(F;S)≤2 Rad(BX,S).\nProof.Define the function classes H1={σ(g) :g∈BX},H2={−σ(g) :g∈BX}and\nH=H1∪H2. All three are compact in C0.\nWe decompose µ=µ+−µ−in its mutually singular positive and negative parts and write\nf=fµinBX,Kas\nfµ(x) =/integraldisplay\nBXσ(g(x))µ+(dg)+/integraldisplay\nBX−σ(g(x))µ−(dg)\n=/integraldisplay\nH1h(x)(ρ+\n♯µ+)(dh)+/integraldisplay\nH2h(x)(ρ−\n♯µ−)(dh)\n=/integraldisplay\nHh(x) ˆµ(dh)\nwhereρ±:BX→ His given by g/ma√sto→ ±σ(g) and ˆµ=ρ+\n♯µ++ρ−\n♯µ−. In particular, we note that\nˆµis a non-negative measure and /ba∇dblˆµ/ba∇dbl=/ba∇dblµ/ba∇dbl. We conclude that the closed unit ball in BX,Kis\nthe closed convex hull of H.\nSinceσis1-Lipschitz,thecontractionLemma[SSBD14,Lemma26.9]impliesth atRad(H1;S)≤\nRad(BX;S). Due to [SSBD14, Lemma 26.7], we find that\nRad(BBX,K;S) = Rad( H;S)\n= Rad(H1∪(−H1);S)\n≤Rad(H1;S)+Rad( −H1;S)\n= 2 Rad( H1;S)\n= 2 Rad( BX;S)\nsince for any ξ, the supremum is non-negative. /square\nFor a prioriestimates, it suffices to bound the expected Rademach ercomplexity. However, the\nuse of randomness in the problem is complicated, and most known bou nds work on any suitably\nbounded sample set.\nExample 2.11.IfHlinis the class of linear functions on Rdwithℓ1-norm smaller or equal to 1\nandSis any sample set of Nelements in [ −1,1]d, then\nRad(Hlin;S)≤/radicalbigg\n2 log(2d)\nN,\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 9\nsee [SSBD14, Lemma 26.11]. If Haffis the unit ball in the class of affine functions x/ma√sto→wTx+b\nwith the norm |w|ℓ1+|b|, we can simply extend xto (x,1) and see that\nRad(Haff;S)≤/radicalbigg\n2 log(2d+2)\nN.\nWe show that Monte-Carlo rate decay is the best possible result for Rademacher complexities\nunder very weak conditions.\nExample 2.12.LetFbe a function class which contains the constant functions f≡ −1 and\nf≡1 forα,β∈R. Then there exists c >0 such that\nRad(F;S)≥c|α−β|√\nN\nfor any sample set SwithNelements. Up to scaling and a constant shift (which does not affect\nthe complexity), we may assume that β= 1,α=−1. Then\nRad(F;S)≥Eξ1\nmsup\nf≡±1m/summationdisplay\ni=1ξif(xi)\n=Eξ1\nm/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsinglem/summationdisplay\ni=1ξi/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle\n∼1√\n2πm\nby the central limit theorem.\n3.Banach spaces for multi-layer neural networks\n3.1.neural tree spaces. In this section, we discuss feed-forward neural networks of infin ite\nwidth and finite depth L. LetK⊆Rdbe a fixed compact set. Consider the following sequence\nof spaces.\n(1)W0(K) = (Rd)∗⊕R/tildewide=Rd+1is the space of affine functions from RdtoR(restricted to\nK).\n(2) ForL≥1, we set WL(K) =BWL−1(K),K.\nSince we consider Rdto be equipped with the ℓ∞-norm, we take W0to be equipped with its\ndual, the ℓ1-norm. Up to a dimension-dependent normalization constant, this d oes not affect the\nanalysis.\nThusWLis the function space for L+ 1-layer networks (i.e. networks with Lhidden lay-\ners/nonlinearities). 
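Since W_0 is exactly the space of affine functions whose unit ball appears in Example 2.11 above, the bound \sqrt{2\log(2d)/N} can be checked against a small Monte-Carlo estimate of the empirical Rademacher complexity (2.3). The sketch below uses a synthetic sample and arbitrary sizes; it is an illustration, not part of the paper.

```python
# Monte-Carlo estimate of the empirical Rademacher complexity (2.3) of the unit
# ell^1-ball of linear functions on a synthetic sample in [-1, 1]^d, compared with
# the bound sqrt(2 log(2d) / N) quoted in Example 2.11.  For this class,
#   sup_{||w||_1 <= 1} (1/N) sum_i xi_i w^T x_i = (1/N) || sum_i xi_i x_i ||_infty,
# so the supremum has a closed form and no optimization is needed.
import numpy as np

rng = np.random.default_rng(2)
d, N, n_draws = 50, 200, 2_000                       # illustrative sizes

X = rng.uniform(-1.0, 1.0, size=(N, d))              # synthetic sample S = {x_1, ..., x_N}

xi = rng.choice([-1.0, 1.0], size=(n_draws, N))      # Rademacher sign vectors
sup_values = np.max(np.abs(xi @ X), axis=1) / N      # per-draw supremum over the ell^1-ball
rad_estimate = sup_values.mean()

bound = np.sqrt(2 * np.log(2 * d) / N)
print(f"Monte-Carlo estimate of Rad(H_lin; S): {rad_estimate:.4f}")
print(f"Bound sqrt(2 log(2d)/N) from Example 2.11: {bound:.4f}")
```

The closed form for the supremum is what makes this check cheap; for the deeper classes W_L no comparable formula is written down here, which is one reason the bound of Theorem 2.10 is obtained structurally, through the convex hull of {\pm\sigma(g)}.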
Here we use inductively that WLembeds into C0,1(K) continuously and\nthat the unit ball of WLisC0-closed because the same properties held true for WL−1. Due to\nthe tree-like recursive construction, we refer to WLas neural tree space (with Llayers).\nHere and in the following, we often assume that Kis a fixed set and will suppress it in the\nnotation WL=WL(K).\nRemark 3.1.For a network with one hidden layer, by construction the coefficient s in the inner\nlayer are ℓ∞-bounded, while the outer layer is bounded in ℓ1(namely as a measure). Due to\nthe homogeneity of the ReLU activation function, the bounds can b e easily achieved and the\nfunction space is not reduced compared to just requiring the path -norm proxy to be finite.\nFor other activation functions, an ℓ∞-bound on the coefficients in the inner layer may restrict\nthe space of functions which can be approximated. In particular, if σisCk-smooth, then x/ma√sto→\naσ(wTx) isCk-smooth uniformly in w∈BR(0)⊆Rd+1. As a consequence, the space of σ-\nactivated two-layer networks whose inner layer coefficients are ℓ∞-bounded embeds continuously\n10 WEINAN E AND STEPHAN WOJTOWYTSCH\nintoCk. At least if k > d/2, it follows from [Bar93] that this space is smaller than the space of\nfunctions which can be approximated by σ-activated two-layernetworks with uniformly bounded\npath-norm (see also [EW20b, Theorem 3.1]).\nIt is likely that neural tree spaces with more general activation req uire parametrization by\nRadon measures on entire Banach spaces of functions. For netwo rks with a single hidden layer,\nsome results in this direction were presented in the appendix of [EW20 a]. While Radon measures\nonRd+2are less convenient than those on Sd+1, many results can be carried over since Rd+2is\nlocally compact.\nThe situation is very different for networks with two hidden layers. T he space X=W1on\nwhichW2=BXis modelled is infinite-dimensional, dense in C0, and not locally compact in the\nC0-topology. The restriction to the compact set BXsimplifies the analysis considerably.\n3.2.Embedding of finite networks. The space WLcontains all finite networks with L≥1\nhidden layers.\nTheorem 3.2. Let\n(3.1) f(x) =mL/summationdisplay\niL=1aL\niLσ\nmL−1/summationdisplay\niL−1=1aL−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftiggm1/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg\n\n\nThenf∈ WLand\n(3.2) /ba∇dblf/ba∇dblWL≤mL/summationdisplay\niL=1···m1/summationdisplay\ni1=1d+1/summationdisplay\ni0=1/vextendsingle/vextendsingleaL\niLaL−1\niLiL−1...a0\ni1i0/vextendsingle/vextendsingle\nProof.The statement is obvious for L= 1 as\nf(x) =m1/summationdisplay\ni=1ai1σ/parenleftiggd+1/summationdisplay\ni0=1ai1,i0xi0/parenrightigg\n=/integraldisplay\nSdσ(wTx)/parenleftiggm/summationdisplay\ni=1ai|wi|·δwi/|wi|/parenrightigg\n(dw).\nis a classical Barron function, where we simplified notation by setting wi= (ai1,...,a i(d+1))∈\nRd+1. We proceed by induction.\nLetfbe like in (3.1). By the induction hypothesis, for any fixed 1 ≤iL≤mL, the function\ngIL(x) :=mL−1/summationdisplay\niL−1=1aL−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftiggm1/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg\n\nlies inWL−1with the appropriate norm bound. 
We note that\nf(x) =mL/summationdisplay\niL=1aiLσ/parenleftbig\ngiL(x)/parenrightbig\n=mL/summationdisplay\niL=1¯aiLσ/parenleftbig\n¯giL(x)/parenrightbig\nwhere\n¯giL=giL/summationtextmL\niL−1=1···/summationtextm1\ni1=1/summationtextd+1\ni0=1/vextendsingle/vextendsingleaL\niL−1aL−1\niL−1iL−2...a0\ni1i0/vextendsingle/vextendsingle\n¯aiL=aiLmL/summationdisplay\niL−1=1···m1/summationdisplay\ni1=1d+1/summationdisplay\ni0=1/vextendsingle/vextendsingleaL\niL−1aL−1\niL−1iL−2...a0\ni1i0/vextendsingle/vextendsingle.\nIt follows that f∈ WLwith appropriate norm bounds. /square\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 11\n3.3.Inverse Approximation. We show that WLdoes not only contain all finite ReLU net-\nworks with Lhidden layers, but also their limiting objects.\nTheorem 3.3 (Compactness Theorem) .Letfnbe a sequence of functions in WLsuch that\nCL:= liminf n→∞/ba∇dblfn/ba∇dblWL<∞. Then there exists f∈ WLand a subsequence fnksuch that\n/ba∇dblf/ba∇dblWL≤CLandfnk→fstrongly in C0,α(K)for allα <1.\nProof.The resultistrivialfor L= 0since W0is afinite-dimensionallinearspace. Usingthe third\nproperty from Theorem 2.7 inductively, we find that WLembeds continuously into C0,1, thus\ncompactly into C0,αfor allα <1. This establishes the existence of a convergent subsequence.\nSinceBWLisC0-closed, it follows that the limit lies in WL. /square\nCorollary 3.4 (Inverse Approximation Theorem) .Let\nfn(x) =mn,L/summationdisplay\niL=1an,L\niLσ\nmn,L−1/summationdisplay\niL−1=1an,L−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftiggmn,1/summationdisplay\ni1=1an,1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1an,0\ni1i0xi0/parenrightigg/parenrightigg\n\n\nbe finite L-layer network functions such that\nsup\nn∈Nmn,L/summationdisplay\niL=1···mn,1/summationdisplay\ni1=1d+1/summationdisplay\ni0=1/vextendsingle/vextendsinglean,L\niLan,L−1\niLiL−1...an,0\ni1i0/vextendsingle/vextendsingle<∞.\nIfPis a compactly supported probability measure and f∈L1(P)such that fn→finL1(P), then\nf∈ WL(sptP)and\n(3.3) /ba∇dblf/ba∇dblWL(sptP)≤liminf\nn→∞mn,L/summationdisplay\niL=1···mn,1/summationdisplay\ni1=1d+1/summationdisplay\ni0=1/vextendsingle/vextendsinglean,L\niLan,L−1\niLiL−1...an,0\ni1i0/vextendsingle/vextendsingle.\nProof.Follows from Theorems 3.3 and 3.2. /square\nIn particular, we make no assumption whether the width of any layer goes to infinity, or at\nwhat rate. The path-norm does not control the number of (non- zero) weights of a network.\n3.4.Direct Approximation. In Sections 3.2 and 3.3, we showed that WLis large enough\nto contain all finite ReLU networks with Lhidden layers and their limiting objects, even in\nweak topologies. In this section, we prove conversely that WLis small enough such that every\nfunction can be approximated by finite networks with Lhidden layers (with rate independent of\nthe dimensionality), i.e. WLis the smallest suitable space for these objects.\nIn fact, we prove a stronger result with an approximation rate in a r easonably weak topology.\nThe rate however depends on the number of layers. Recall the follo wing result on convex sets in\nHilbert spaces.\nLemma 3.5. [Bar93, Lemma 1] LetGbe a set in a Hilbert space Hsuch that /ba∇dblg/ba∇dblH≤Rfor all\ng∈ G. 
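Lemma 3.5 is the probabilistic heart of the direct approximation theorem: sampling m elements of G i.i.d. from a measure representing f gives an L²-error of order R/√m. A hedged numerical illustration with a hypothetical one-parameter family of ReLU ridge functions and an empirical data measure; the fine average over θ stands in for an element of the closed convex hull.

import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)
X = rng.uniform(-1.0, 1.0, size=(400, 2))            # P = empirical measure on these points

def g(theta):                                         # elements of the class G
    return relu(np.cos(theta) * X[:, 0] + np.sin(theta) * X[:, 1])

theta_fine = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
f = np.mean([g(t) for t in theta_fine], axis=0)       # f lies in the closed convex hull of G
R = max(np.sqrt(np.mean(g(t) ** 2)) for t in theta_fine)

for m in [10, 100, 1000]:
    errs = []
    for _ in range(50):
        thetas = rng.uniform(0.0, 2 * np.pi, size=m)
        fm = np.mean([g(t) for t in thetas], axis=0)
        errs.append(np.sqrt(np.mean((f - fm) ** 2)))
    print(m, np.mean(errs), R / np.sqrt(m))           # observed error stays below R/sqrt(m)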
Iffis in the closed convex hull of G, then for every m∈Nandε >0, there exist m\nelements g1,...,g m∈ Gsuch that\n(3.4)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublef−1\nmm/summationdisplay\ni=1gi/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nH≤R+ε√m.\nThe result is attributed to Maurey in [Bar93] and proved using the la w of large numbers.\n12 WEINAN E AND STEPHAN WOJTOWYTSCH\nTheorem 3.6. LetPbe a probability measure with compact support spt(P)⊆BR(0). Then for\nanyL≥1,f∈ WLandm∈N, there exists a finite L-layer ReLU network\n(3.5)fm(x) =m/summationdisplay\niL=1aL\niLσ\nm2/summationdisplay\niL−1=1aL−1\niLiL−1σ\nm3/summationdisplay\niL−2=1...σ\nmL/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg\n\n\n\nsuch that\n(1)\n(3.6) /ba∇dblfm−f/ba∇dblL2(P)≤L(2+R)/ba∇dblf/ba∇dblWL√m\n(2) the norm bound\n(3.7)m/summationdisplay\niL=1···mL/summationdisplay\ni1=1d+1/summationdisplay\ni0=1/vextendsingle/vextendsingleaL\niLaL−1\niLiL−1...a0\ni1i0/vextendsingle/vextendsingle≤ /ba∇dblf/ba∇dblWL\nholds.\nRemark 3.7.Note that the width of deep layers increases rapidly. This is due to th e fact that\nwe construct an approximating network inductively. The procedur e leads to a tree-like structure\nwhere parametersare not shared, but every neuron in the ℓ-th layerhas its own set of parameters\nintheℓ+1-thlayerand aiℓiℓ−1= 0forallotherparameterpairings. Thisisequivalenttostandard\narchitectures from the perspective of approximation theory und er path-norm bounds, since the\npath norm does not control the number of neurons.\nThe total number of parameters in the network of the direct appr oximation theorem is\nM=m+m·m2+···+mL−1·mL+mL(d+1)\n=L−1/summationdisplay\nℓ=0m2ℓ+1+mL(d+1)\n=m1−m2L\n1−m2+mL(d+1)\n∼m2L−1\nby the geometric sum. Thus the decay rate in the direct approximat ion theorem is of the order\nM−1\n2(2L−1). This recovers the Monte-Carlo rate M−1/2in the case L= 1 [EMW19a, Theorem\n4], but quickly degenerates as Lincreases. Part of the problem is that the rapidly branching\nstructure combined with neural network indexing induces explicitly lis ted zeros in the set of\nweights as explained in Section 2.1. A neural tree expressing the sam e function would require\nonly∼(d+L)mLweights.\nNote, however, that the approximation rate is independent of dime nsiond. In this sense, we\nare not facing a curse of dimensionality, but a curse of depth.\nIt is unclear whether this rate can be improved in the general settin g. Functions in Barron\nspace are described as the expectation of a suitable quantity, while multi-layer functions are\ndescribedasiteratedconditionalexpectationsandnon-linearities . Inthissetting, itisnotobvious\nwhether the Monte-Carlo rate should be expected.\nProof of Theorem 3.6. Without loss of generality /ba∇dblf/ba∇dblWL= 1. Since WL֒− →C0,1with constant\n1, we find that /ba∇dblf/ba∇dblL2(P)≤(1+R)/ba∇dblf/ba∇dblWLfor allf∈ WL.\nRecall from the proof of Theorem 2.10 that the unit ball of WLis the closed convex hull of\nthe class H={±σ(g) :/ba∇dblg/ba∇dblWL−1≤1}. Thus by Lemma 3.5 there exist g1,...gM∈ WL−1and\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 13\nεi∈ {−1,1}such that/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublef−1\nmm/summationdisplay\ni=1εiσ(gi(x))/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nL2(P)<2+R√m.\nIfL= 1,giis an affine linear map and fm(x) =/summationtextm\ni=1εi\nmσ(gi(x)) is a finite neural network. 
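The parameter count of Remark 3.7 can be tabulated directly. The sketch below (plain Python, hypothetical values of d, m, L) evaluates M = Σ_{ℓ=0}^{L-1} m^{2ℓ+1} + m^L(d+1) and the induced approximation rate M^{-1/(2(2L-1))}, making the "curse of depth" visible: the rate in terms of the parameter budget deteriorates quickly with L even though the rate in m is dimension-free.

d = 100
for L in [1, 2, 3, 4]:
    for m in [10, 100]:
        M = sum(m ** (2 * l + 1) for l in range(L)) + m ** L * (d + 1)   # total number of parameters
        rate = M ** (-1.0 / (2 * (2 * L - 1)))                           # error ~ m^{-1/2} ~ M^{-1/(2(2L-1))}
        print(f"L={L}, m={m}: M={M}, m^(-1/2)={m ** -0.5:.4f}, M-rate={rate:.4f}")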
Thus\nthe Theorem is established for L= 1.\nWe proceed by induction. Assume that the theorem has been prove d forL−1≥1. Then we\nnote that /ba∇dblgi/ba∇dblWL−1≤1, so for 1 ≤i≤mwe can find a finite L−1-layer network ˜ gisuch that\n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublef−1\nmm/summationdisplay\ni=1εiσ(˜gi(x))/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nL2(P)≤/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublef−1\nmM/summationdisplay\ni=1εiσ(gi(x))/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nL2(P)+1\nmm/summationdisplay\ni=1/ba∇dblgi−˜gi/ba∇dblL2(P)\n≤2+R√m+m\nm(L−1)(2+R)√m.\nWe merge the mtrees associated with ˜ giinto a single tree, increasing the width of each layer by\na factor of m, and add an outer layer of width mwith coefficients aiL=εiL\nm. /square\nRemark 3.8.Letp∈[2,∞). Then by interpolation\n/ba∇dblf−fm/ba∇dblLp(P)≤ /ba∇dblf−fm/ba∇dbl2\np\nL2(P)/ba∇dblf−fm/ba∇dbl1−2\np\nL∞(P)≤C/ba∇dblf/ba∇dblWLm−1/p.\nCorollary 3.9. For every compact set Kandf∈ WL(K), there exists a sequence of finite\nneural networks with Lhidden layers\nfn(x) =mn,L/summationdisplay\niL=1an,L\niLσ\nmn,L−1/summationdisplay\niL−1=1an,L−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftiggmn,1/summationdisplay\ni1=1an,1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1an,0\ni1i0xi0/parenrightigg/parenrightigg\n\n\nsuch that /ba∇dblfn/ba∇dblWL≤ /ba∇dblf/ba∇dblWLandfn→finC0,α(K)for every α <1.\nProof.We take R >0 such that K⊆BR(0) and take Pto be the uniform distribution on\nBR(0). Fix ε >0 andµsuch that f=fµonKand/ba∇dblµ/ba∇dblM(BBL−1)≤ /ba∇dblf/ba∇dblWL+ε. Then we can\napproximate fµinL2(P) by Theorem 3.6 with the norm bound /ba∇dblfn/ba∇dblWL≤ /ba∇dblf/ba∇dblWL+ε.\nBy compactness, we find that fnconverges to a limit in C0,α(BR(0)) for all α <1, which\ncoincides with the L2(P)-limitf. In particular, fnconverges in C0,α(K). We can eliminate the\nεin the norm bound by a diagonal sequence argument. /square\nRemark 3.10.The direct and indirect approximation theorems show that neural t ree spaces are\nthe correct function spaces for neural networks under path-n orm bounds. The construction of\nvector spaces and proofs made ample use of the equivalence betwe en neural networks and neural\ntrees. It is tempting to try to force more classical neural networ k structures by prescribing\nthat the width of all layers tends to infinity at the same rate. Howev er, this does not change\nthe approximation spaces since in the direct approximation theorem , we can repeat a function\nfrom the approximating sequence multiple times until the width of the most restrictive layer is\nsufficiently large to pass to the next element in the sequence. A more successful approach is\ndiscussed in Section 4.\n3.5.Composition of multi-layer functions. Letf∈/parenleftbig\nWL(K))kbe anL-layer function with\nvalues in Rk. SinceKis compact and fis continuous, f(K) is also compact. Let g∈ Wℓ(f(K))\nbe anℓ-layer function on f(K).\nLemma 3.11. g◦f∈ WL+ℓ(K)and\n(3.8)/vextenddouble/vextenddoubleg◦f/vextenddouble/vextenddouble\nWL+ℓ(K)≤ /ba∇dblg/ba∇dblBℓ(f(K))sup\n1≤i≤k/ba∇dblfi/ba∇dblBL(K).\n14 WEINAN E AND STEPHAN WOJTOWYTSCH\nProof.We proceed by induction. First consider the case ℓ= 0. Then g(x) =wTf(x), so\nwTf(x) =/summationtextk\ni=1wifi(x) is a (weighted) sum of L-layer functions, i.e. an L-layer function. 
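Remark 3.8 uses the elementary interpolation ‖h‖_{L^p} ≤ ‖h‖_{L²}^{2/p} ‖h‖_{L^∞}^{1−2/p} for p ≥ 2, applied to h = f − f_m. A quick numerical sanity check on a discrete probability measure with arbitrarily chosen values (not part of the argument):

import numpy as np

rng = np.random.default_rng(3)
h = rng.standard_normal(10000)                 # values of h = f - f_m on a sample of P
l2, linf = np.sqrt(np.mean(h ** 2)), np.max(np.abs(h))
for p in [2, 4, 8]:
    lp = np.mean(np.abs(h) ** p) ** (1.0 / p)
    bound = l2 ** (2.0 / p) * linf ** (1.0 - 2.0 / p)
    print(p, lp, bound, lp <= bound)           # the interpolation inequality holds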
By\nthe triangle inequality we have\n/ba∇dblg◦f/ba∇dblWL≤k/summationdisplay\ni=1|wi|/ba∇dblfi/ba∇dblWL≤ /ba∇dblw/ba∇dblℓ1sup\n1≤i≤k/ba∇dblfi/ba∇dblWL=/ba∇dblg/ba∇dblW0sup\n1≤i≤k/ba∇dblfi/ba∇dblWL\nNow assume that the theorem has been proved for ℓ−1withℓ≥1. To avoid double superscripts,\ndenote by Bℓthe closed unit ball in Wℓ(K). Letg(z) =/integraltext\nBℓ−1σ(h(z))µ(dh). Then\n(g◦f) =/integraldisplay\nBℓ−1σ/parenleftbig\n(h◦f)/parenrightbig\nµ(dh)\n=/parenleftbigg\nsup\n1≤i≤k/ba∇dblfi/ba∇dblBL/parenrightbigg/integraldisplay\nBℓ−1σ/parenleftbiggh◦f\nsup1≤i≤k/ba∇dblfi/ba∇dblBL/parenrightbigg\nµ(dh)\n=/parenleftbigg\nsup\n1≤i≤k/ba∇dblfi/ba∇dblBL/parenrightbigg/integraldisplay\nBL+ℓ−1σ(j(·)) (F♯µ)(dj)\nwhere\nF:Bℓ−1→BL+ℓ−1, F(h) =h◦f\nsup1≤i≤k/ba∇dblfi/ba∇dblWL\nis well-defined by the induction hypothesis. By definition, g◦f∈ WL+ℓwith the appropriate\nnorm bound. /square\nFor generalized Barron spaces, we showed that /ba∇dblf/ba∇dblX,K≤2/ba∇dblf/ba∇dblXfor allf∈X, thus by\ninduction /ba∇dblf/ba∇dblWℓ+L≤2ℓ/ba∇dblf/ba∇dblWLforL≥1. We show that this naive bound can be improved to\nbe independent of the number of additional layers.\nLemma 3.12. Letℓ,L≥1andf∈ WL(K). Then f∈ Wℓ+L(K)and/ba∇dblf/ba∇dblWℓ+L(K)≤\n2/ba∇dblf/ba∇dblWL(K).\nProof.Without loss of generality, /ba∇dblf/ba∇dblWL(K)≤1. We note that g1=σ(f) andg2=σ(−f) are\nboth in the unit ball of WL+1and non-negative, i.e. g1=σ(g1) andg2=σ(g2). Thus g1,g2\nare also in the unit ball of WL+2. By induction, we observe that /ba∇dblgi/ba∇dblWL+ℓ(K)≤1 for all ℓ≥1,\ni= 1,2 and thus\n(3.9) /ba∇dblf/ba∇dblWL+ℓ(K)=/ba∇dblg1+g2/ba∇dblWL+ℓ(K)≤ /ba∇dblg1/ba∇dblWL+ℓ(K)+/ba∇dblg2/ba∇dblWL+ℓ(K)≤2.\n/square\n3.6.Rademacher complexity. Considering statistical learning theory, neural tree spaces in-\nherit the convenient properties of the space of affine functions. T hese convenient properties are\none of the reasons why we study the path-norm in the first place. R ecall the definition and\ndiscussion of Rademacher complexities from Section 2.4.\nLemma 3.13. For every L, and every set of NpointsS⊆[−1,1]d, the hypothesis class HL\ngiven by the closed unit ball in WLsatisfies the Rademacher complexity bound\n(3.10) Rad/parenleftbig\nHL;S/parenrightbig\n≤2L/radicalbigg\n2 log(2d+2)\nN.\nProof.This follows directly from Example 2.11 and Theorem 2.10 by induction. /square\nThe complexity bound has an immediate application in statistical learnin g theory.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 15\nCorollary 3.14 (Generalizationgap) .LetPbe any probability distribution supported on [−1,1]d×\nRand(X1,Y1)...,(XN,YN)be iid random variables with law P. Consider the hypothesis space\nH={h∈ WL(K) :/ba∇dblh/ba∇dblWL(K)≤1}. Assume that ℓ:R×R→[0,¯c]is a bounded loss function.\nThen, with probability at least 1−δover the choice of data points X1,...,X N, the estimate\nsup\nh∈H/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1\nN/summationdisplay\ni=1ℓ/parenleftbig\nh(Xi),Yi/parenrightbig\n−/integraldisplay\n[−1,1]d×Rℓ/parenleftbig\nh(x),y/parenrightbig\nP(dx⊗dy)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle≤2L+1/radicalbigg\n2 log(2d+2)\nN+¯c/radicalbigg\n2 log(2/δ)\nN(3.11)\nholds.\nProof.This follows directly from Lemma 3.13 and [SSBD14, Theorem 26.5]. /square\nThus it is easy to “learn” a multi-layer function with low path norm in the sense that a\nrelatively small size of sample data points is sufficient to understand w hether the function has\nlow population risk or not. 
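The mechanism behind Lemma 3.12 can be seen concretely through the ReLU identity z = σ(z) − σ(−z): a shallower network passes through an additional ReLU layer at the cost of a factor 2 in the path norm, and since σ(f) and σ(−f) are non-negative they pass through any further layers unchanged, so the factor does not compound. A minimal numerical sketch with a hypothetical one-hidden-layer network pushed through one extra layer:

import numpy as np

rng = np.random.default_rng(4)
relu = lambda z: np.maximum(z, 0.0)
d, m = 3, 8
W = rng.standard_normal((m, d))
a = rng.standard_normal(m)
f = lambda X: relu(X @ W.T) @ a                                   # one hidden layer
f_deep = lambda X: relu(relu(X @ W.T) @ a) - relu(-(relu(X @ W.T) @ a))   # two hidden layers

X = rng.uniform(-1.0, 1.0, size=(5, d))
print(np.max(np.abs(f(X) - f_deep(X))))                           # ~ 0: same realization
shallow = np.abs(a) @ np.abs(W) @ np.ones(d)
deep = np.array([1.0, 1.0]) @ np.abs(np.vstack([a, -a])) @ np.abs(W) @ np.ones(d)
print(shallow, deep)                                              # deep path norm is exactly 2x the shallow one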
More sophisticated methods can provide d imension-dependent decay\nrates 1/2 + 1/(2d+2) of the generalization error at the expense of constants scalin g like√\nd\ninstead of log( d) [BK18, Remark 1].\n3.7.Generalization error estimates for regularized model. As an application, we prove\nthat empirical risk minimization with explicit regularization is a successf ul strategy in learning\nmulti-layer functions. For technical reasons, we work with a bound ed modification of L2-risk\ninstead of the mean squared error functional.\nLetPbe a probability measure on [ −1,1]dandS={x1,...,x N}be a set of samples drawn\niid from P. Denote\nR,Rn:WL→R,R(f) =/integraldisplay\nRdℓ(x,f(x))P(dx),RN(f) =1\nNN/summationdisplay\ni=1ℓ(xi,f(xi))\nwhere the loss function ℓsatisfies\nℓ(x,y)≤min/braceleftbig\n¯c,|y−f∗(x)|2/bracerightbig\n.\nFor finite neural networks with weights ( aL,...,a0)∈RmL×···×Rm1×dwe denote\n/hatwideRN(aL,...,a0) =RN(faL,...,a0)\nfaL,...,a0(x) =mL/summationdisplay\niL=1aL\niLσ\nmL−1/summationdisplay\niL−1=1aL−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftiggm1/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg\n\n.\nTheorem 3.15 (Generalization error) .Assume that the target function satisfies f∗∈ WL.\nLetFmbe the class of neural networks with architecture like in The orem 3.6. The minimizer\nfm∈ Fmof the regularized risk functional\n/hatwideRn(aL,...,a0)+9L2\nm/bracketleftiggmL/summationdisplay\niL=1···d+1/summationdisplay\ni0=1/vextendsingle/vextendsingleaL\niLaL−1\niLiL−1...a0\ni1i0/vextendsingle/vextendsingle/bracketrightigg2\nsatisfies the risk bound\n(3.12) R(fm)≤18L2/ba∇dblf∗/ba∇dbl2\nWL\nm+2L+3/2/ba∇dblf∗/ba∇dblWL/radicalbigg\n2 log(2d+2)\nN+¯c/radicalbigg\n2 log(2/δ)\nN.\nThe first term comes from the direct approximation theorem. The e xplicit scaling in Llooks\nunproblematic, but recall that the network re quires ∼m2L−1parameters. The second term\nstems from the Rademacher bound and is subject to the ‘curse of d epth’. An improvement in\n16 WEINAN E AND STEPHAN WOJTOWYTSCH\neither term would lead to better a priori estimates. The third term is purely probabilistic and\nunproblematic.\nProof of Theorem 3.15. Denoteλ=λm= 9L2m−1and letˆfm=fˆaL,...,ˆa0be like in Theorem\n3.6, i.e.\n/ba∇dblˆfm−f∗/ba∇dblL2(Pn)≤3L/ba∇dblf∗/ba∇dblWL√m,/summationdisplay\niL,...,i0/vextendsingle/vextendsingleaL\niL...a0\ni1i0/vextendsingle/vextendsingle≤ /ba∇dblf∗/ba∇dblWL.\nThen by definition\n/hatwideRn(aL,...,a0)+λ/bracketleftiggmL/summationdisplay\niL=1···d+1/summationdisplay\ni0=1/vextendsingle/vextendsingleaL\niLaL−1\niLiL−1...a0\ni1i0/vextendsingle/vextendsingle/bracketrightigg2\n≤/hatwideRn(ˆaL,...,ˆa0)+λ/bracketleftiggmL/summationdisplay\niL=1···d+1/summationdisplay\ni0=1/vextendsingle/vextendsingleˆaL\niLˆaL−1\niLiL−1...ˆa0\ni1i0/vextendsingle/vextendsingle/bracketrightigg2\n≤9L2/ba∇dblf∗/ba∇dbl2\nWL\nm+λ/ba∇dblf∗/ba∇dbl2\nWL.\nIn particular\n/ba∇dblfaL,...,a0/ba∇dbl2\nWL≤/bracketleftiggd+1/summationdisplay\ni0=1/vextendsingle/vextendsingleaL\niLaL−1\niLiL−1...a0\ni1i0/vextendsingle/vextendsingle/bracketrightigg2\n≤2λ/ba∇dblf∗/ba∇dbl2\nWL\nλ= 2/ba∇dblf∗/ba∇dbl2\nWL.\nThe Rademacher complexity is the supremum of linear random variable s, so Rad( BL\nR;S) =\nR·Rad(BL\n1;S) whereBL\nRdenotes the ball of radius Rcentered at the origin in WL. 
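For illustration, the regularized objective of Theorem 3.15 — truncated square loss plus λ times the squared path-norm proxy with λ = 9L²/m — is easy to write down for a finite architecture. The sketch below (NumPy, L = 2, widths m and m² as in Theorem 3.6, random weights and labels standing in for f*(x_i), truncation level 4‖f*‖² following Remark 3.16) is only meant to make the functional concrete; it is not the authors' implementation.

import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def path_norm(a2, A1, A0):
    # sum over all paths of |a^2_{i2} a^1_{i2 i1} a^0_{i1 i0}|
    return np.abs(a2) @ np.abs(A1) @ np.abs(A0) @ np.ones(A0.shape[1])

def regularized_risk(a2, A1, A0, X, Y, f_star_bound, lam):
    pred = relu(relu(X @ A0.T) @ A1.T) @ a2
    loss = np.minimum((pred - Y) ** 2, 4 * f_star_bound ** 2)     # truncated square loss
    return np.mean(loss) + lam * path_norm(a2, A1, A0) ** 2

rng = np.random.default_rng(5)
d, m, N = 10, 20, 200
X = rng.uniform(-1.0, 1.0, size=(N, d + 1)); X[:, -1] = 1.0       # constant coordinate appended
Y = rng.standard_normal(N)
A0 = 0.1 * rng.standard_normal((m ** 2, d + 1))
A1 = 0.1 * rng.standard_normal((m, m ** 2))
a2 = 0.1 * rng.standard_normal(m)
lam = 9 * 2 ** 2 / m                                              # lambda = 9 L^2 / m with L = 2
print(regularized_risk(a2, A1, A0, X, Y, f_star_bound=1.0, lam=lam))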
We conclude\nthat, with probability at least 1 −δover the draw of the training sample, we have\nR(faL,...,a0)≤ Rn(faL,...,a0)+2L+3/2/ba∇dblf∗/ba∇dblWL/radicalbigg\n2 log(2d+2)\nN+¯c/radicalbigg\n2 log(2/δ)\nN\n=18L2/ba∇dblf∗/ba∇dbl2\nWL\nm+2L+3/2/ba∇dblf∗/ba∇dblWL/radicalbigg\n2 log(2d+2)\nN+¯c/radicalbigg\n2 log(2/δ)\nN.\n/square\nRemark 3.16.Since/ba∇dblf/ba∇dblL∞≤/parenleftbig\n1 + supx∈K|x|/parenrightbig\n/ba∇dblf/ba∇dblWLfor allf∈ WL(K), we can repeat the\nargument for the loss function ℓ(x,y) =|y−f∗(x)|2, which is a priori unbounded, but can be\nmodified outside of the interval which fm,f∗take values in due to the a priori norm bound. The\nconstant ¯ cin (3.12) in this case is\n¯c= 4/ba∇dblf∗/ba∇dbl2\nWL([−1,1]d).\nRemark 3.17.For large L, these bounds degenerate rapidly. In [BK18], the authors show th at\nunder the stronger condition that a balanced version of the path- norm (which measures the\naverage weights of incoming and outcoming paths at all nodes in all lay ers), a better bound\non the Rademacher complexity is available. The balanced path norm ac hieves control over\ncancellations and the balancing of weights at different layers.\nHeuristically, the proof proceeds as follows: Let S={x1,...,x N}be a sample set in [ −1,1]d\nand the hypothesis space Hbe given by the unit ball in WL. By the direct approximation\ntheorem, there exists a network with O(m2L) weights which approximates fwith/ba∇dblf/ba∇dblWL≤1 to\naccuracy ∼L√minL2(PN) wherePNis the uniform measure on S. Thus the covering number\nNε,L2(PN)(H) ofHin theL2(PN)-distance shouldscale like Lε−4L.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 17\nSincef(x1),...f(xN)⊂B√m(0)⊆RN(withrespecttotheEuclideandistance),theRademacher\ncomplexity can be bounded by\nRad(H;S)≤2−K√m√m+6√m\nmK/summationdisplay\ni=12−i/radicalig\nlog/parenleftbig\nN2−i√m,L2(PN)/parenrightbig\n≤2−K+C√mK/summationdisplay\ni=12−i/radicalig\nlog/parenleftbig\nLm−2L24iL)/parenrightbig\n≈2−K\n√m+C√\nL√mK/summationdisplay\ni=12−i√\ni\nfor anyK∈Nusing [SSBD14, Lemma 27.1]. Taking K→ ∞, only√\nLenters in the estimate.\nThe point in the proof that needs to be made rigorous is the connect ion between covering the\nparameter space with an ε-fine net and covering the function class with an ε-fine net. For a\nneural network\nf(x) =ε2σ/parenleftbigg1\nεx/parenrightbigg\nthe path-norm is bounded, but an ε-small change in the outer layer would lead to a large change\nin the function space. Thus a balanced version of the path-norm is n eeded. In some cases, this\nmay be possible through rescaling layers, but see Remark 4.12 for a p ossible obstruction. Similar\nideas are explored below in Section 4.2, although we do not estimate th e Rademacher complexity\nexplicitly.\nThe ability to obtain Rademacher estimates from covering also sugge sts that improvements\nin the direct approximation theorem may not be possible, since the co mplexity of the function\nclasses should increase with increasing depth.\nUnfortunately, the convenience in learning functions comes at a pr ice when considering the\napproximation power of neural tree spaces as described in [EW20a, Corollary 3.4] for general\nfunction classes of low complexity.\nCorollary 3.18. 
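The obstruction mentioned at the end of Remark 3.17 can be made concrete: the network f(x) = ε²σ(x/ε) has path norm ε²·ε^{-1} = ε, yet an ε-perturbation of the outer coefficient changes the realized function by ε·σ(x/ε)·ε = σ(x) (by homogeneity), which is of order one on [-1,1]. A small numerical confirmation (the value of ε is arbitrary):

import numpy as np

relu = lambda z: np.maximum(z, 0.0)
eps = 1e-3
x = np.linspace(-1.0, 1.0, 5)

f      = lambda x: eps ** 2 * relu(x / eps)            # path norm eps^2 * (1/eps) = eps
f_pert = lambda x: (eps ** 2 + eps) * relu(x / eps)    # outer coefficient shifted by eps

print("path norm of f:", eps ** 2 * (1 / eps))
print("uniform change of the realization:", np.max(np.abs(f_pert(x) - f(x))))   # ~ 1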
For any d≥3exists a1-Lipschitz function φon[0,1]dsuch that\n(3.13) limsup\nt→∞/parenleftbigg\ntγinf\n/⌊ard⌊lf/⌊ard⌊lX≤t/ba∇dblφ−f/ba∇dblL2(Q)/parenrightbigg\n=∞.\nfor allγ >2\nd−2.\nThus to approximate even relatively regularfunctions in a fairly weak topology up to accuracy\nε, the path-norm of a network with Lhidden layers may have to grow (almost) as quickly as\nε−d−2\n2independently of L. In particular, increasing the depth of an infinitely wide network doe s\nnot increase the approximation power sufficiently to approximate ge neral Lipschitz functions\n(while the path norm remains bounded by the same constant).\n3.8.Countably wide neural networks. Let us briefly comment on another natural concept\nof infinitely wide neural networks. The space of countably wide netw orks\nf(x) =∞/summationdisplay\niL=1aL\niLσ\n∞/summationdisplay\niL−1=1aL−1\niLiL−1σ\n/summationdisplay\niL−2...σ/parenleftigg∞/summationdisplay\ni1=1a1\ni2i1σ/parenleftiggd+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg\n\n\nequipped with the path-norm\n/ba∇dblf/ba∇dbl= inf\na/summationdisplay\niL···/summationdisplay\ni0/vextendsingle/vextendsingleaL\niL...a0\ni1i0/vextendsingle/vextendsingle\n18 WEINAN E AND STEPHAN WOJTOWYTSCH\nis a subspace of WLby the same reasoning as Theorem 3.2 and the fact that the cross- product\nof a finite number of countable sets is countable. Like in the introduc tion, we can show that the\nspaces of countably wide neural networks and neural trees coinc ide.\nUnlike finite neural networks, countable networks form a vector s pace. The space of countably\nwide networks is a proper subspace of WLwhich contains all finite neural networks. The direct\napproximation theorem implies that the unit ball in the space of count ably wide neural networks\nis not closed in weaker topologies like C0,αorLp. Thus the space of countably wide neural\nnetworks is not suitable from the perspective of variational analys is.\nIntuitively, any convergent infinite sum contains a finite number of m acroscopic terms and\nan infinite tail of rapidly decaying terms. Thus at initialization and thro ughout training, a\nscale difference would exist in a countable neural network between le ading order neurons and\ntail neurons. This is not a useful way to think of neural networks w here parameters in a fixed\nlayer are typically chosen randomly from the same distribution and th en optimized by gradient\nflow-type algorithms. It should be noted however that common sch emes like Xavier initialization\n[GB10] choose the weights in a fashion which makes the path-norm gr ows beyond all bounds as\nthe number of neurons goes to infinity.\n4.Indexed representation of arbitrarily wide neural network s\n4.1.Neural networks with general index sets. The spaces considered above are a bit ab-\nstract. In this section, we discuss a more concrete representat ion for a subspace of WL. As\nwe show below, this subspace is invariant under the gradient flow dyn amics. 
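Corollary 3.18 can be read quantitatively: to reach accuracy ε in L² for the Lipschitz function φ, the path norm must grow roughly like ε^{-(d-2)/2}, independently of the depth. A back-of-the-envelope tabulation (constants ignored, values of d and ε hypothetical) shows how severe this is in moderately high dimension:

for d in [10, 50, 200]:
    for eps in [1e-1, 1e-2]:
        t = eps ** (-(d - 2) / 2.0)      # required norm level, up to constants
        print(f"d={d}, eps={eps}: path norm must exceed roughly {t:.3e}")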
For all practical\npurposes, it might just be the right set of functions that we need t o consider.\nIn [EW20b, Section 2.8], we showed that f:Rd→Ris a Barron function if and only if there\nexist measurable maps a,b: (0,1)→Randw: (0,1)→Rdsuch that\nf(x) =fa,w,b(x) =/integraldisplay1\n0aθσ/parenleftbig\nwT\nθx+bθ/parenrightbig\ndθ.\nFurthermore\n/ba∇dblf/ba∇dblB(K)= inf/braceleftbigg/integraldisplay1\n0|a|/bracketleftbig\n|w|+|b|/bracketrightbig\n(θ)dθ/vextendsingle/vextendsingle/vextendsingle/vextendsinglef=fa,w,bonK/bracerightbigg\n.\nThus we can think of Barron space as replacing the finite sum over ne urons by an integral and\nreplacing the index set {1,...,m}by the (continuous) unit interval. We extend the approach to\nmulti-layer networks in some generality.\nDefinition 4.1. For 0≤i≤L, let (Ω i,Ai,πi) be probability spaces where Ω 0={0,...,d}\nandπ0is the normalized counting measure. Consider measurable functions aL: ΩL→Rand\nai: Ωi+1×Ωi→Rfor 0≤i≤L−1. Then define\n(4.1)\nfaL,...,a0(x) =/integraldisplay\nΩLa(L)\nθLσ/parenleftBigg/integraldisplay\nΩL−1...σ/parenleftbigg/integraldisplay\nΩ1a1\nθ2,θ1σ/parenleftbigg/integraldisplay\nΩ0a0\nθ1,θ0xθ0π0(dθ0)/parenrightbigg\nπ1(dθ1)/parenrightbigg\n... π(L−1)(dθL−1)/parenrightBigg\nπL(dθL).\nConsider the norm\n(4.2)\n/ba∇dblf/ba∇dblΩL,...,Ω0;K= inf/braceleftigg/integraldisplay\n/producttextL\ni=0Ωi/vextendsingle/vextendsinglea(L)\nθL...a(0)\nθ1θ0/vextendsingle/vextendsingle/parenleftbig\nπL⊗···⊗π0/parenrightbig\n(dθL⊗···⊗dθ0)/vextendsingle/vextendsingle/vextendsingle/vextendsinglef=faL,...,a0onK/bracerightigg\nAs usual, we set\n(4.3) XΩL,...,Ω0;K={f∈C0,1(K) :/ba∇dblf/ba∇dblΩL,...,Ω0;K<∞}.\nWe call XΩL,...,Ω0;Ktheclass of neural networks over Kmodeled on the index spaces Ωi=\n(Ωi,Ai,πi).\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 19\nThe representation in (4.1) can also be written as:\n(4.4) f(x) =EθL∼πLa(L)\nθLσ(EθL−1∼πL−1...σ(Eθ1∼π1a1\nθ2,θ1σ(a0\nθ1·x))...)\nRepresenting functions as some form of expectations is the start ing point for the continuous\nformulation of machine learning.\nAs we mentioned above, W1(K) =X(0,1),{0,...,d}where the unit interval is equipped with\nLebesgue measure. The collection of finiteneural networks is realized when all sigma-algebras\nAicontain only finitely many sets (in particular, if all probability spaces a re finite). In this\nsituation, XΩL,...,Ω0is not even a vector space.\nLemma 4.2. (1) ForL≥2and any selection of probability spaces ΩL,...,Ω1, the space of\nneuronal embeddings XΩL,...,Ω0;Kis a subset of the neural tree space WL(K).\n(2) IfΩi= (0,1)andπiis Lebesgue measure for all i≥1, thenXΩL,...,Ω0;Kis a vector-space\nand/ba∇dbl·/ba∇dblΩL,...,Ω0;Kis a norm on it.\n(3) IfΩi= (0,1)andπiis Lebesgue measure for all i≥1, thenXΩL,...,Ω0;Kcontains all\nfinite neural networks with Lhidden layers. In particular, XΩL,...,Ω0;Kis a subspace of\nWL(K)which is dense in WL(K)with respect to the C0,α-topology for all α <1and\nconsequently in Lp(P)for any probability measure on K,p∈[1,∞].\n(4)X(0,1),(0,1),{0,...,d};Kcontains Barron space W1(K)and\n/ba∇dblf/ba∇dbl(0,1),(0,1),{0,...,d};K≤2/ba∇dblf/ba∇dblW1(K)∀f∈ W1(K).\n(5) Letf∈/parenleftbig\nW1(K))kbe a vector-valued Barron function and g∈ W1(Rk)a scalar-valued\nBarron function. 
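Definition 4.1 replaces sums over neurons by integrals over probability spaces. For Ω_1 = (0,1) with Lebesgue measure and a single hidden layer, the realization can be evaluated by Monte Carlo over the index θ. A minimal sketch with smooth, arbitrarily chosen coefficient functions a¹, a⁰ (the input is extended by a constant coordinate, and the inner sum carries the 1/(d+1) normalization of the counting measure):

import numpy as np

relu = lambda z: np.maximum(z, 0.0)
d = 2
a1 = lambda th: np.sin(2 * np.pi * th)                         # a^1 : (0,1) -> R
a0 = lambda th: np.stack([np.cos(2 * np.pi * th),              # a^0 : (0,1) -> R^{d+1}
                          np.sin(4 * np.pi * th),
                          np.ones_like(th)], axis=-1)

def f(x, n_theta):                                             # Monte Carlo estimate of the theta-integral
    th = np.random.default_rng(6).uniform(0.0, 1.0, size=n_theta)
    inner = a0(th) @ np.append(x, 1.0) / (d + 1)
    return np.mean(a1(th) * relu(inner))

x = np.array([0.3, -0.7])
for n in [10 ** 2, 10 ** 4, 10 ** 6]:
    print(n, f(x, n))                                          # values stabilize as n grows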
Then the composition g◦flies inX(0,1),(0,1),{0,...,d};Kand\n(4.5) /ba∇dblg◦f/ba∇dbl(0,1),(0,1),{0,...,d};K≤ /ba∇dblg/ba∇dblW1(Rk)k/summationdisplay\ni=1/ba∇dblfi/ba∇dblW1(K).\nIn particular, this includes\n•the absolute value/positive part/negative of a Barron func tion\n•the pointwise maximum/minimum of two Barron functions,\n•the product of two Barron functions.\nProof.First claim. This can be proved exactly like Theorem 3.2.\nSecond claim. For any choice of parameter spaces Ω i, the set of functions XΩL,...,Ω0;Kis a\nbalanced cone, i.e. if f∈XΩL,...,Ω0;Kthenλf∈XΩL,...,Ω0;Kfor allλ∈R. It remains to show\nthatXΩL,...,Ω0;Kis closed under function addition. Let\nf(x) =/integraldisplay1\n0aL\nθLσ/parenleftigg/integraldisplay1\n0...σ/parenleftigg/integraldisplay1\n0a1\nθ2θ1σ/parenleftigg\n1\nd+1d+1/summationdisplay\nθ0=1a0\nθ1θ0xθ0/parenrightigg/parenrightigg/parenrightigg\ng(x) =/integraldisplay1\n0bL\nθLσ/parenleftigg/integraldisplay1\n0...σ/parenleftigg/integraldisplay1\n0b1\nθ2θ1σ/parenleftigg\n1\nd+1d+1/summationdisplay\nθ0=1b0\nθ1θ0xθ0/parenrightigg/parenrightigg/parenrightigg\n.\nThen\n(f+g)(x) =/integraldisplay1\n0cL\nθLσ/parenleftigg/integraldisplay1\n0...σ/parenleftigg/integraldisplay1\n0c1\nθ2θ1σ/parenleftigg\n1\nd+1d+1/summationdisplay\nθ0=1c0\nθ1θ0xθ0/parenrightigg/parenrightigg/parenrightigg\n20 WEINAN E AND STEPHAN WOJTOWYTSCH\nwhere\ncL\nθ=/braceleftigg\n2aL\n2θθ∈(0,1/2)\n2bL\n2θ−1θ∈(1/2,1), cℓ\nθξ=\n\n4aℓ\n2θ,2ξθ,ξ∈(0,1/2)\n4bℓ\n2θ−1,2ξ−1θ,ξ∈(1/2,1)\n0 else.\nEssentially, we construct two parallel networks that are added in t he final layer and otherwise\ndo not interact. The pre-factors stem from the fact that we re- arrange a mean-field index set\nand could be eliminated if we chose more general measure spaces (e.g .ZorR) as index sets.\nThird claim. Any finite neural network can be written as a mean field neural netw ork\nf(x) =1\nmLmL/summationdisplay\niL=1aL\niLσ\n1\nmL−1mL−1/summationdisplay\niL−1=1aL−1\niLiL−1σ/parenleftigg\n...σ/parenleftigg\n1\nm1m1/summationdisplay\ni1=1a1\ni2i1σ/parenleftigg\n1\nd+1d+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg/parenrightigg\n\nDefine the functions\naL:(0,1)→R, aL(s) =aL\nifori−1\nmL≤s <i\nmL\naℓ:(0,1)2→R, aℓ(r,s) =aℓ\nijfori−1\nmℓ+1≤r <i\nmL,j−1\nmℓ≤s <i\nmℓ.\nfor 0≤ℓ < L. Thenf=faL,...,a0.\nFourth claim. Letfbe a Barron function. Then, according to [EW20b, Section 2.8], fcan\nbe written as\nf(x) =/integraldisplay1\n0¯a1\nθσ/parenleftigg\n1\nd+1d+1/summationdisplay\ni=1a0\nθ,ixi/parenrightigg\ndθ\nFor ¯a1,a0∈L2(0,1). In particular,\nf(x) =/integraldisplay1\n0a2\nθ2σ/parenleftigg/integraldisplay1\n0a1\nθ2θ1σ/parenleftigg\n1\nd+1d+1/summationdisplay\ni=1a0\nθ1,ixi/parenrightigg\ndθ1/parenrightigg\ndθ2\nwhere\na2\nθ2=/braceleftigg\n2θ2<1/2\n−2θ2>1/2, a1\nθ2θ1=/braceleftigg\n¯aθ1θ2<1/2\n−¯aθ1θ2>1/2.\nFifth claim. Letfk+1≡1. 
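For step-function coefficients, the parallel-network construction in the proof of the second claim reduces to stacking two finite mean-field networks and compensating for the larger averaging denominator; the factors 2 and 4 in the displayed formulas play the same role in the continuum. A sketch of the one-hidden-layer case with arbitrary weights (deeper layers additionally require the block-diagonal middle coefficients c^ℓ):

import numpy as np

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(7)
d, m = 3, 50
a, W = rng.standard_normal(m), rng.standard_normal((m, d))      # mean-field network f
b, V = rng.standard_normal(m), rng.standard_normal((m, d))      # mean-field network g
mf = lambda coef, wts, X: relu(X @ wts.T) @ coef / wts.shape[0]

c = np.concatenate([2 * a, 2 * b])                              # outer coefficients doubled, as c^1_theta = 2 a^1_{2 theta}
U = np.vstack([W, V])                                           # inner weights placed on disjoint halves of the index set

X = rng.uniform(-1.0, 1.0, size=(4, d))
print(np.max(np.abs(mf(c, U, X) - (mf(a, W, X) + mf(b, V, X)))))   # ~ 0: the stacked network realizes f + g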
For 1 ≤i≤k, let\nfi(x) =/integraldisplay1\n0ai\nsσ\n1\nd+1d+1/summationdisplay\nj=1bi\ns,jxj\nds\ng(y) =/integraldisplay1\n0ctσ/parenleftigg\n1\nk+1k+1/summationdisplay\nl=1dt,lyl/parenrightigg\ndt.\nThen\n(g◦f)(x) =/integraldisplay1\n0ctσ\n1\nk+1k+1/summationdisplay\ni=1dt,i/integraldisplay1\n0ai\nsσ\n1\nd+1d+1/summationdisplay\nj=1bi\ns,jxj\nds\ndt\n=/integraldisplay1\n0ctσ\n/integraldisplay1\n0¯atsσ\n1\nd+1d+1/summationdisplay\nj=1¯bs,jxj\nds\ndt\nwhere\n¯ats=dt,iai\n(k+1)(s−i−1\nk+1)fori−1\nk+1≤s≤i\nk+1,¯bs,j=bi\n(k+1)(s−i−1\nk+1),jfori−1\nk+1.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 21\nFor the special cases observe that\ng(z) =σ(z), g(z1,z2) = max{z1,z2}=z1+σ(z2−z1)\nare Barron functions, thus the first two claims are immediate. Furt hermore\n˜g(z) = max{0,z}2=/integraldisplay\nR10,∞2σ(z−ξ)dξ\nis a Barron function on bounded intervals, and so is z/ma√sto→z2. The Barron functions f1,f2are\ncontinuous on a compact set Kand hence bounded. It follows that\nf1f2=1\n4/bracketleftig/parenleftbig\nf1+f2/parenrightbig2−/parenleftbig\nf1−f2/parenrightbig2/bracketrightig\n∈X(0,1),(0,1),{0,...,d};K\n/square\nIn particular, X(0,1),(0,1),{0,...,d};Kcontains many functions which are not in Barron space\n(compare [EW20b, Remark 5.12]).\nRemark 4.3.The unit interval with Lebesgue measure is a probability space with tw o convenient\nproperties for our purposes:\n(1) For any finite collection of numbers 0 ≤α1,...,α N≤1 such that/summationtextN\ni=1αi= 1, there\nexist disjoint measurable subsets Ii⊆(0,1) such that L(Ii) =αifor all 1≤i≤N. This\nallows us to embed finite networks of arbitrary width (and can be ext ended to countable\nsums).\n(2) There exist measurable bijections between the unit interval an d many index sets which\nappear larger at first sight. By rearranging decimal representat ions, we may for exam-\nple construct a measurable bijection between (0 ,1) and (0 ,1)dfor anyd≥1. Using\na hyperbolic tangent or similar for rescaling, we can further show th at a measurable\nbijection between (0 ,1) andRdexists. Furthermore, using the characteristic function of\na probability measure πon (0,1), we can find a measurable map φ: (0,1)→(0,1) such\nthatφ♯πis Lebesgue measure. For details, see e.g. [EW20b, Section 2.8].\nThe entire analysis remains valid for any index set with these two prop erties. We describe a\nmore natural (but also more complicated) approach in Section 4.4.\nRemark 4.4.Let (Ω ℓ,Aℓ,πℓ), (/tildewideΩℓ,/tildewideAℓ,/tildewideπℓ) be families of probability spaces for 0 ≤ℓ≤ΩLand\nφℓ: Ωℓ→/tildewideΩℓmeasurable maps such that /tildewideπℓ=φℓ\n♯πℓ. Then the spaces\nXΩL,...,Ω0;K=WπL,...,π0(K)\ncoincide and the norms induced by network representations with th e different index spaces agree.\nRemark 4.5.We never used that the measures πiare probability measures (or even finite). More\ngeneral measures could be used on the index set. In particular, th e analysis of this section also\napplies to the space of countably wide neural networks (which corr esponds to the integers with\nthe counting measure).\nThere currently seems little gain in pursuing that generality, and we w ill remain in the natural\nmean field setting of networks indexed by probability spaces.\n4.2.Networks with Hilbert weights. We can bound the path-norm by a more convenient\nexpression. At first glance, it looks as though the weight functions aiare required to satisfy a\nrestrictiveintegrabilityconditionlike ai∈LL+1(πL⊗···⊗π0). 
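The special cases in the fifth claim rest on two elementary identities: σ(z)² = ∫_0^∞ 2σ(z−ξ)dξ, which exhibits z ↦ z² on bounded intervals as an integral of ReLU ridge functions, and the polarization identity f₁f₂ = ¼[(f₁+f₂)² − (f₁−f₂)²]. Both are easy to confirm numerically (quadrature grid and test values are arbitrary):

import numpy as np

relu = lambda z: np.maximum(z, 0.0)
n, L = 200000, 10.0
xi = np.linspace(0.0, L, n, endpoint=False) + 0.5 * L / n       # midpoint grid for the xi-integral
dxi = L / n

for z in [-1.5, 0.3, 2.0]:
    integral = np.sum(2 * relu(z - xi)) * dxi
    print(z, integral, relu(z) ** 2)                            # the integral equals max{0, z}^2

f1, f2 = 0.7, -1.3                                              # values of two functions at a point
print(f1 * f2, 0.25 * ((f1 + f2) ** 2 - (f1 - f2) ** 2))        # polarization identity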
Thiscanbeweakenedsignificantly\nby using the neural-network structure in which indices for layers se parated by one intermediate\nlayer are independent.\n22 WEINAN E AND STEPHAN WOJTOWYTSCH\nLemma 4.6. For any f, the path-norm is bounded by\n/ba∇dblf/ba∇dblΩL,...,Ω0;K≤inf/braceleftigg\n/ba∇dblaL/ba∇dblL2(πL)L−1/productdisplay\ni=0/ba∇dblai/ba∇dblL2(πi+1⊗πi)/vextendsingle/vextendsingle/vextendsingle/vextendsingleais.t.f=faL,...,a0onK/bracerightigg\n(4.6)\nProof.To simplify notation, we denote πi(dθi) = dθias in the case of the unit interval. The\nproof goes through in the general case. We quickly observe that f or a network with two hidden\nlayers, we can easily bound/integraldisplay\nΩ2×Ω1×Ω0/vextendsingle/vextendsinglea2\nθ2a1\nθ2θ1a0\nθ1θ0/vextendsingle/vextendsingledθ2dθ1dθ0=/integraldisplay\nΩ2×Ω1×Ω0/vextendsingle/vextendsinglea2\nθ2a0\nθ1θ0/vextendsingle/vextendsingle/vextendsingle/vextendsinglea1\nθ2θ1/vextendsingle/vextendsingledθ2dθ1dθ0\n≤/parenleftbigg/integraldisplay\nΩ2×Ω1×Ω0/vextendsingle/vextendsinglea2\nθ2a0\nθ1θ0/vextendsingle/vextendsingle2dθ2dθ1dθ0/parenrightbigg1\n2/parenleftbigg/integraldisplay\nΩ2×Ω1×Ω0/vextendsingle/vextendsinglea1\nθ2θ1/vextendsingle/vextendsingle2dθ2dθ1dθ0/parenrightbigg1\n2\n=/parenleftbigg/integraldisplay\nΩ2/vextendsingle/vextendsinglea2\nθ2/vextendsingle/vextendsingle2dθ2/parenrightbigg1\n2/parenleftbigg/integraldisplay\nΩ2×Ω1/vextendsingle/vextendsinglea1\nθ2θ1/vextendsingle/vextendsingle2dθ2dθ1/parenrightbigg1\n2/parenleftbigg/integraldisplay\nΩ1×Ω0/vextendsingle/vextendsinglea0\nθ1θ0/vextendsingle/vextendsingle2dθ1dθ0/parenrightbigg1\n2\nIn the general case, we set Ω L+1={0}to simplify notation. the argument follows as above by\n/ba∇dblfaL,...,a0/ba∇dblΩL,...,Ω0;K≤/integraldisplay\n/producttextL+1\ni=0Ωi/vextendsingle/vextendsinglea(L)\nθL+1θL...a(0)\nθ1θ0/vextendsingle/vextendsingledθL...dθ0\n=/integraldisplay\n/producttextL+1\ni=0Ωi/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle⌊L/2⌋/productdisplay\ni=0a2i\nθ2i+1θ2i/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle⌊(L−1)/2⌋/productdisplay\ni=0a2i+1\nθ2i+2θ2i+1/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingledθL...dθ0\n≤\n/integraldisplay\n/producttextL+1\ni=0Ωi/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle⌊L/2⌋/productdisplay\ni=0a2i\nθ2i+1θ2i/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle2\ndθL...dθ0\n1\n2\n/integraldisplay\n/producttextL+1\ni=0Ωi/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle⌊L−1/2⌋/productdisplay\ni=0a2i+1\nθ2i+2θ2i+1/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle2\n|dθL...dθ0\n1\n2\n=/ba∇dblaL/ba∇dblL2(πL)L−1/productdisplay\ni=0/ba∇dblai/ba∇dblL2(πi+1×πi).\nWe may now take the infimum over all coefficient functions. /square\nThe lemma allows us to analyze networks in a convenient fashion using o nlyL2-norms. In\nnumerical simulations, explicit regularization by penalizing L2-norms provides a smoother alter-\nnative to penalizing the path-norm directly. Note that the proof is b uilt on the network index\nstructure and does not extend to neural trees.\nLemma 4.7. 
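For a finite index set with the uniform probability measure, the estimate of Lemma 4.6 can be verified directly: the mean-field path norm E|a²a¹a⁰| is bounded by the product of the three L² norms because the even-indexed and odd-indexed layers depend on disjoint index variables. A randomized check (sizes and weights arbitrary):

import numpy as np

rng = np.random.default_rng(8)
m2, m1, m0 = 7, 5, 4                        # sizes of the index sets Omega_2, Omega_1, Omega_0
a2 = rng.standard_normal(m2)
a1 = rng.standard_normal((m2, m1))
a0 = rng.standard_normal((m1, m0))

# path norm with respect to the uniform probability measures on the index sets
path = np.mean(np.abs(a2)[:, None, None] * np.abs(a1)[:, :, None] * np.abs(a0)[None, :, :])
l2 = lambda t: np.sqrt(np.mean(t ** 2))
print(path, l2(a2) * l2(a1) * l2(a0))       # path norm <= product of the L2 norms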
The realization map\n(4.7)F:L2(πL)×L2(πL⊗πL−1)···×L2(π1⊗π0)→C0(K), F(aL,...,a 0) =faL,...,a0\nis locally Lipschitz-continuous.\nProof.ForL= 1 and x∈K, note that\n/vextendsingle/vextendsinglefa1,a0(x)−f¯a1,¯a0(x)/vextendsingle/vextendsingle=/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/integraldisplay\nΩ1a1\nθ1σ/parenleftigg\n1\nd+1d+1/summationdisplay\nθ0=1a0\nθ1θ0xθ0/parenrightigg\n−¯a1\nθ1σ/parenleftigg\n1\nd+1d+1/summationdisplay\nθ0=1¯a0\nθ1θ0xθ0/parenrightigg\nπ1(dθ1)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle\n≤/integraldisplay\nΩ1/vextendsingle/vextendsinglea1\nθ1−¯a1\nθ1/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleσ/parenleftigg\n1\nd+1d+1/summationdisplay\nθ0=1a0\nθ1θ0xθ0/parenrightigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle\n+/vextendsingle/vextendsingle¯a1\nθ1/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleσ/parenleftigg\n1\nd+1d+1/summationdisplay\nθ0=1a0\nθ1θ0xθ0/parenrightigg\n−σ/parenleftigg\n1\nd+1d+1/summationdisplay\nθ0=1¯a0\nθ1θ0xθ0/parenrightigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleπ1(dθ1)\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 23\n≤ /ba∇dbla1−¯a1/ba∇dblL2(Ω1)/ba∇dbla0/ba∇dblL2(Ω1×Ω0)sup\nx∈K|x|+/ba∇dbl¯a1/ba∇dblL2(Ω1)/ba∇dbla0−¯a0/ba∇dblL2(Ω1×Ω0)sup\nx∈K|x|.\nThe general case follows analogously by induction. /square\nWe define a third class of spaces for the L2-approach.\nDefinition 4.8. For 0≤i≤L, let (Ω i,Ai,πi) be a probability space where Ω 0={0,...,d}and\nπ0is the normalizedcountingmeasure. Let aL∈L2(πL) andai∈L2(πi+1⊗πi) for0≤i≤L−1.\nThen define like in (4.1)\nfaL,...,a0(x) =/integraldisplay\nΩLa(L)\nθLσ/parenleftBigg/integraldisplay\nΩL−1...σ/parenleftbigg/integraldisplay\nΩ1a1\nθ2,θ1σ/parenleftbigg/integraldisplay\nΩ0a0\nθ1,θ0xθ0π0(dθ0)/parenrightbigg\nπ1(dθ1)/parenrightbigg\n... π(L−1)(dθL−1)/parenrightBigg\nπL(dθL).\nWe define the class of neural networks over Kwith Hilbert weights over the index spaces Ωi=\n(Ωi,Ai,πi) as the image of L2(πL)×···×L2(π1⊗π0) under the realization map (4.7) and denote\nit by\nWπL,...,π0(K) =/braceleftbigg\nf:K→R/vextendsingle/vextendsingle/vextendsingle/vextendsingle∃aL∈L2(πL), aℓ∈L2(πℓ⊗πℓ) s.t.f≡faL,...,a0onK/bracerightbigg\n.\nThe function class is equipped with the measure of complexity\n(4.8) QπL,...,π0;K(f) = inf/braceleftigg\n/ba∇dblaL/ba∇dblL2(πL)L−1/productdisplay\ni=0/ba∇dblai/ba∇dblL2(πi+1⊗πi)/vextendsingle/vextendsingle/vextendsingle/vextendsingleais.t.f=faL,...,a0onK/bracerightigg\n.\nWedeclareanotionofconvergenceon WπL,...,π0(K)bythe convergenceoftheweightfunctions\nin theL2-strong topology. To avoid pathological cases, we normalize the we ights across layers.\nUsing the homogeneity of σ, note that faL,...,a0=fλℓaℓ,...,λ0a0∈ WπL,...,π0(K) forλi>0 such\nthat/producttextL\ni=0λi= 1. In particular, we may assume without loss of generality that\n/ba∇dblaℓ/ba∇dblL2=/parenleftiggL/productdisplay\ni=0/ba∇dblai/ba∇dblL2/parenrightigg1\nL+1\nfor allℓ≥1.\nDefinition 4.9. 
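For one hidden layer, the estimate in the proof of Lemma 4.7 reads |f_{a¹,a⁰}(x) − f_{ā¹,ā⁰}(x)| ≤ (‖a¹−ā¹‖_{L²}‖a⁰‖_{L²} + ‖ā¹‖_{L²}‖a⁰−ā⁰‖_{L²}) sup_K|x|. For step-function coefficients this is a deterministic inequality between finite mean-field networks and can be checked directly (weights, perturbation and data are arbitrary; |x| is the ℓ∞-norm of the extended input):

import numpy as np

rng = np.random.default_rng(9)
relu = lambda z: np.maximum(z, 0.0)
d, m = 4, 30
x_ext = np.append(rng.uniform(-1.0, 1.0, size=d), 1.0)

a1, a0 = rng.standard_normal(m), rng.standard_normal((m, d + 1))
b1, b0 = a1 + 0.1 * rng.standard_normal(m), a0 + 0.1 * rng.standard_normal((m, d + 1))

f = lambda c1, c0: np.mean(c1 * relu(c0 @ x_ext / (d + 1)))
l2 = lambda t: np.sqrt(np.mean(t ** 2))

lhs = abs(f(a1, a0) - f(b1, b0))
rhs = (l2(a1 - b1) * l2(a0) + l2(b1) * l2(a0 - b0)) * np.max(np.abs(x_ext))
print(lhs, rhs, lhs <= rhs)                 # the local Lipschitz estimate holds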
We say that a sequence of functions fn∈ WπL,...,π0(K) converges weakly to a\nlimitf∈ WπL,...,π0(K) if there exist coefficient functions aL,n,...,a0,nforn∈NandaL,...,a0\nsuch that\n(1)fn=faL,n,...,a0,nfor alln∈Nandf=faL,...,a0.\n(2)/ba∇dblaℓ,n/ba∇dbl=/parenleftig/producttextL\ni=0/ba∇dblai,n/ba∇dblL2/parenrightig1\nL+1for alln∈Nand 0≤ℓ≤L.\n(3) limsupn→∞/bracketleftig/producttextL\ni=0/ba∇dblai,n/ba∇dblL2−Q(fn)/bracketrightig\n= 0.\n(4)aℓ,n→aℓin theL2-strong topology for all 0 ≤ℓ≤n.\nTo evaluate the notion of convergence, consider the case L= 1 and write ( a,w) for (a1,a0).\nWe interpret a0as anRd+1-valued function on (0 ,1) rather than a scalar function on (0 ,1)×\n{0,...,d}. Then it is easy to see that\n(an,wn)→(a,w) strongly in L2(0,1)⇒(an,wn)♯L →(a,w)♯Lin Wasserstein .\nThe inverse statement holds up to a rearrangement of the index se t. The Wasserstein distance\nis associated with the weak convergence of measures, while the top ology of Barron space is\nassociated with the the norm topology for the total variation norm (strong convergence). This\njustifies the terminology of ‘weak convergence’ of arbitrarily wide n eural networks.\n24 WEINAN E AND STEPHAN WOJTOWYTSCH\nWeak convergence is locally metrizable, but not induced by a norm. A r elaxed version of\nconvergence described above is metrizable by the distance functio n\ndHW(f,g) = inf/braceleftiggL/summationdisplay\nℓ=0/ba∇dblaℓ,f−aℓ,g/ba∇dblL2(πℓ)/vextendsingle/vextendsingle/vextendsingle/vextendsingleaL,f,...,a0,gs.t.f=faL,f,...,a0,f, g=faL,g,...,a0,gand\n/ba∇dblaℓ,h/ba∇dbl ≡/parenleftiggL/productdisplay\ni=0/ba∇dblai,h/ba∇dblL2/parenrightigg1\nL+1\n≤2Q(h)1\nL+1forh∈ {f,g}/bracerightigg\n. (4.9)\nThethirdconditionhasbeenweakenedfrom/producttextL\ni=0/ba∇dblai,n/ba∇dblL2−Q(fn)→0toQ(fn)≤/producttextL\ni=0/ba∇dblai,n/ba∇dblL2≤\n2Q(fn). The normalization is required to ensure that functions in which one layer can be cho-\nsen identical do not have zero distance by shifting all weight to the o ne layer. Which mode of\nconvergence is superior to another remains to be seen. Equipped w ith the Hilbert weight metric\ndHW, the spaces WπL,...,π0(K) are complete.\nTo avoid the unwieldy terminology of arbitrarily wide neural networks with Hilbert weights,\nwe introduce the following simpler terminology.\nDefinition 4.10. The metric spaces WπL,...,π0(K),d) equipped with the metric dHWfrom (4.9)\nare called multi-layer spaces for short.\nRemark 4.11.As seen in Lemma 4.6, the inclusions\n(4.10) WπL,...,π0(K)⊆XΩL,...,Ω0;K⊆ WL(K)\nhold. The last three points of Lemma 4.2 hold with WπL,...,π0(K) in place of XΩL,...,Ω0;K. We\nnote however that the functions\ncL\nθ=/braceleftigg\n2aL\n2θθ∈(0,1/2)\n2bL\n2θ−1θ∈(1/2,1), cℓ\nθξ=\n\n4aℓ\n2θ,2ξθ,ξ∈(0,1/2)\n4bℓ\n2θ−1,2ξ−1θ,ξ∈(1/2,1)\n0 else\nsatisfy\n/ba∇dblcL/ba∇dbl2\nL2(0,1)= 2/bracketleftig\n/ba∇dblaL/ba∇dbl2\nL2(0,1)+/ba∇dblbL/ba∇dbl2\nL2(0,1)/bracketrightig\n,/ba∇dblcℓ/ba∇dblL2/parenleftbig\n(0,1)2/parenrightbig= 4/bracketleftbigg\n/ba∇dblaℓ/ba∇dblL2/parenleftbig\n(0,1)2/parenrightbig+/ba∇dblbℓ/ba∇dblL2/parenleftbig\n(0,1)2/parenrightbig/bracketrightbigg\n.\nIn particular, if Ωℓ= (0,1) andπℓis Lebesgue measure for all 1 ≤ℓ≤L, thenWπL,...,π0(K) is\na linear space, but both QπL,...,π0;KanddHWgenerally fail to be a norm.\nRemark 4.12.It is not clear whether the inclusions in (4.10) are necessarily strict. 
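The normalization used in Definitions 4.9 and 4.10 exploits the positive homogeneity of the ReLU: rescaling the layers by λ_ℓ > 0 with Πλ_ℓ = 1 leaves the realization unchanged, so the L² norms of all layers can always be balanced to their geometric mean. A short check for a finite two-hidden-layer mean-field network with arbitrary weights:

import numpy as np

rng = np.random.default_rng(10)
relu = lambda z: np.maximum(z, 0.0)
d, m1, m2 = 3, 6, 5
a2, a1, a0 = rng.standard_normal(m2), rng.standard_normal((m2, m1)), rng.standard_normal((m1, d))

def realize(a2, a1, a0, X):
    return relu(relu(X @ a0.T / d) @ a1.T / m1) @ a2 / m2

l2 = lambda t: np.sqrt(np.mean(t ** 2))
gm = (l2(a2) * l2(a1) * l2(a0)) ** (1.0 / 3.0)                   # geometric mean of the layer norms
b2, b1, b0 = a2 * gm / l2(a2), a1 * gm / l2(a1), a0 * gm / l2(a0)

X = rng.uniform(-1.0, 1.0, size=(4, d))
print(np.max(np.abs(realize(a2, a1, a0, X) - realize(b2, b1, b0, X))))   # ~ 0 by homogeneity
print([l2(t) for t in (b2, b1, b0)])                                     # all equal to the geometric mean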
In the case\nof Barron space, it is easily possible to normalize by replacing\na1\nθ1/ma√sto→a1\nθ1\nρθ1, a0\nθ1θ0/ma√sto→ρθ1a0\nθ1θ0\nsuch that both layers have the same magnitude in L2(0,1), even if they are only assumed to be\nmeasurable with finite path-norm a priori. For multiple layers, this may not be possible. Let\n(4.11) as≡1, b st=f(s−t), c t≡1\nwherefis a one-periodic function on Rwhich is in L1(0,1), but not L2(0,1). Then any normal-\nization\nas/ma√sto→as\nρs, b st/ma√sto→ρs˜ρtbst, c t/ma√sto→ct\n˜ρt\nfails to make b L2-integrable. Whether or not this can be compensated by choosing o ther weights\nwith the same realization remains an open question.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 25\n4.3.Networks with two hidden layers. We investigate the space X(0,1),(0,1),{0,...,d};Kand\nWL1,L1,π0(K) more closely where π0denotes counting measure and L1is the Lebesgue measure\non (0,1). In general, any network modelled on probability spaces Ω 2,Ω1,Ω0can be written as\nf(x) =/integraldisplay\nΩ2a2\nθ2σ/parenleftigg/integraldisplay\nΩ1a1\nθ2,θ1σ/parenleftiggd+1/summationdisplay\nθ0=1a0\nθ1,θ0xθ0/parenrightigg\nπ1(dθ1)/parenrightigg\nπ2(dθ2)\n=/integraldisplay\nΩ2a2\nθ2ρθ2σ/parenleftigg/integraldisplay\nΩ1a1\nθ2,θ1|wθ1|\nρθ2σ/parenleftigg\nwT\nθ1\n|wθ1|(x,1)/parenrightigg\nπ1(dθ1)/parenrightigg\nπ2(dθ2)\n=/integraldisplay\nΩ2a2\nθ2ρθ2σ/parenleftbigg/integraldisplay\nR×Sd˜aσ(˜wTx)(Ψ(θ2,·)♯π1)(d˜a⊗d˜w)/parenrightbigg\nπ2(dθ2)\nwherewθ= (a0\nθ1,1,...,a0\nθ1,d+1) and\nΨ : Ω2×Ω1→R×Sd,Ψ(θ2,θ1) =/parenleftigg\na1\nθ2θ1|wθ1|\nρθ2,wθ1\n|wθ1|/parenrightigg\n.\nSince the second component of Ψ does not depend on θ2, the marginal πof Ψ(θ2,·)♯π1on the\nsphere is independent of θ2. We can therefore write/integraldisplay\nR×Sd˜aσ(˜wTx)(Ψ(θ2,·)♯π1)(d˜a⊗d˜w) =/integraldisplay\nSd¯aθ2(w)σ(wTx)¯π(dw)\nby integrating in the a-direction and making ¯ aa function of w(see [EW20b, Section 2.3] for the\ntechnical details). Thus\nf(x) =/integraldisplay\nΩ2a2\nθ2ρθ2σ/parenleftbigg/integraldisplay\nSd¯aθ2(w)σ(wTx)¯π(dw)/parenrightbigg\nπ2(dθ2).\nWe can in particular choose ρ≥0 such that\n/integraldisplay\nSd/vextendsingle/vextendsingle¯aθ2(w)/vextendsingle/vextendsingle¯π(dw)≤/integraldisplay\nΩ1|a1\nθ2θ1||wθ1|\nρθ2π1(dθ1) =1\nρθ2/integraldisplay\nΩ1|a1\nθ2θ1||wθ1|π1(dθ1)≤1\nfor allθ2∈Ω2. Then the map\nF: Ω2→BX, θ 2/ma√sto→fθ2=/integraldisplay\nSd¯aθ2(w)σ(wTx)¯π(dw)\nis well-defined and Bochner integrable. In particular\nf(x) =/integraldisplay\nΩ2a(2)\nθ2ρθ2σ/parenleftbig\nfθ2(x)/parenrightbig\nπ2(dθ2)\n=/integraldisplay\nBXσ(g(x))µ(dg)\nwhereµ=F♯/parenleftbig\n(a(2)ρ)·π2/parenrightbig\n. By construction, µis concentrated on the subspace Y¯πof Barron\nfunctions which can be represented with an L1-density with respect to the measure ¯ π, by which\nwe mean that |µ|(BX\\Y¯π) = 0. This equation can be sensibly interpreted since any measure\ncan be extended to a potentially larger σ-algebra containing all null sets.\nThus general functions in W2(K) andX(0,1),(0,1),{0,...,d};Kboth take the form\nf(x) =/integraldisplay\nBXσ(g(x))µ(dg)\nwhereBXisthe unit ball in Barronspace, but in the secondcase, µis concentratedona subspace\nY¯π. This space is a quotient of L1(¯π) by a closed subspace and thus closed in Barron space, but\n26 WEINAN E AND STEPHAN WOJTOWYTSCH\nmay be dense in C0(K). 
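A concrete instance of the obstruction (4.11): the 1-periodic extension of f(t) = |t − 1/2|^{-1/2} on (0,1) is integrable but not square integrable, so the middle-layer weights b_{st} = f(s−t) cannot be pushed into L² by any row/column rescaling. The divergence of the second moment is visible numerically (cut-off values arbitrary):

import numpy as np

f = lambda t: np.abs(t - 0.5) ** -0.5
for eps in [1e-2, 1e-4, 1e-6]:
    n = 400000
    t = np.linspace(0.5 + eps, 1.0, n)                  # right half of (0,1); the left half is symmetric
    dt = (0.5 - eps) / (n - 1)
    int_f  = 2 * np.sum(f(t)) * dt                      # converges (to 2*sqrt(2)) as eps -> 0
    int_f2 = 2 * np.sum(f(t) ** 2) * dt                 # grows like 2 log(1/(2 eps)): f is not in L^2
    print(eps, int_f, int_f2)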
If ¯πis the uniform distribution on Sd, thenY¯πis dense in C0since\nL1(¯π) is dense in the space of Radon measures on Sdwith respect to the weak topology.\nClaim:There is no distribution ¯ πonSdsuch that every Barron function can be expressed\nwith anL1-density with respect to ¯ πifKis the closure of an open set.\nProof of claim: Barron space is not separable since\n/ba∇dblσ(wT\n1·)−σ(wT\n2·)/ba∇dblB1(K)≥[σ(wT\n1·)−σ(wT\n2·)]C0,1(K)≥1\nif one of the hyperplanes {x:wT\n1/2x= 0}intersects the interior of K. This is the case for\nuncountably many w∈Sd. On the other hand, L1(¯π) (and also its quotient by the kernel of the\nrealization map) is separable for any Radon measure. Thus the two s paces cannot coincide. /square\nThe claim can be phrased and proved in greater generality if Kis a manifold or similar. We\nnote that for fixed ¯ π, the space Y¯πembeds continuously into C0,1(K), but its unit ball is not\nclosed in C0(K). Nevertheless, we may consider the space\nBY¯π,K=/braceleftigg\nfµ(x) =/integraldisplay\nBX∩Y¯π,Kσ(g(x))µ(dg)/vextendsingle/vextendsingle/vextendsingle/vextendsingleµadmissible/bracerightigg\nwhere admissible measures are finite (signed) Radon measures for w hichY¯πis measurable. Every\ndistribution ¯ πonSdcanbeobtainedasthepush-forwardofLebesguemeasureonthe unitinterval\nalong a measurable map φ: (0,1)→Sd, see e.g. [EW20b, Section 2.8]. Thus the associatedspace\nof neural networks with two hidden layers is\nX(0,1),(0,1),{0,...,d};K=/uniondisplay\n¯πBY¯π,K=:/tildewiderW2(K)\nwhere the union is over all probability distributions ¯ πonSd. Thus the first layer of f∈ W2\nis wide enough to contain the entire unit ball of W1, while the first layer of f∈/tildewiderW2can only\nexpress a separable subset of the unit ball in W1. The question whether this reduces expressivity\nor whether in fact W2=/tildewiderW2remains open.\nFinally, consider the space WL1,L1,π0(K) where the weights of a function satisfy\na2∈L2(0,1), a1∈L2/parenleftbig\n(0,1)2/parenrightbig\n, a0∈L2/parenleftbig\n(0,1)×{0,...,d}/parenrightbig\n=L2/parenleftbig\n(0,1);Rd).\nWe proceed as before, but normalize with respect to L2rather than L1/L∞. Again, we can\nconsider the maps\nΨ : (0,1)×(0,1)→Rd+2,(θ2,θ1)/ma√sto→/parenleftbig\na1\nθ2θ1,a0\nθ1/parenrightbig\nand note as before that\n/integraldisplay1\n0a1\nθ2,θ1σ/parenleftiggd+1/summationdisplay\nθ0=1a0\nθ1,θ0xθ0/parenrightigg\ndθ1=/integraldisplay\nRd+1¯aθ2(w)σ(wTx)¯π(dw)\nwhere this time ¯ a∈L2(¯π) for almost all θ2∈(0,1). Thus the first layer of f∈ WL1,L1,π0takes\nvalues in a single reproducing kernel Hilbert space H¯πassociated to the kernel\nk¯π(x,x′) =/integraldisplay\nRd+1σ(wTx)σ(wTx′)¯π(dw)\nwhile the first layer of f∈ W2may be wide enough to contain every function in the unit ball of\nBarron space. Again, the relationship between the function space s remains open.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 27\n4.4.Natural index sets. In this section, we focus on the natural index set for WπL,...,π0(K).\nAbove, we allowed the index spaces Ω ito be generic or focused on the case Ω i= (0,1). While\n(0,1) is simple and mathematically convenient, it is not a natural choice. F irst consider the\nsimpler case of neural networks with a single hidden layer. The classic al representation in this\ncase is\nf(x) =/integraldisplay\nR×Rd+1aσ(wTx)π(da⊗dw)\nfor some distribution πonRd+2, see [EW20b] and the sources cited therein. 
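The kernel k_π̄(x,x′) = ∫σ(w^Tx)σ(w^Tx′) π̄(dw) appearing at the end of the section can be estimated by Monte Carlo for any sampling distribution π̄, and the resulting Gram matrix is positive semi-definite, consistent with k_π̄ being a reproducing kernel. A sketch with π̄ taken (as one possible, hypothetical choice) to be the uniform distribution on the sphere:

import numpy as np

rng = np.random.default_rng(11)
relu = lambda z: np.maximum(z, 0.0)
d, n_w, n_x = 5, 200000, 8
W = rng.standard_normal((n_w, d + 1))
W /= np.linalg.norm(W, axis=1, keepdims=True)           # samples from the uniform distribution on S^d
X = rng.uniform(-1.0, 1.0, size=(n_x, d + 1))

A = relu(W @ X.T)                                       # sigma(w^T x) for all sampled w and all points x
K = A.T @ A / n_w                                       # Monte Carlo estimate of k_pibar(x_i, x_j)
print(np.min(np.linalg.eigvalsh(K)))                    # nonnegative: the estimate is itself a Gram matrix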
Using the scaling\ninvariance σ(·) =λ−1σ(λ·) if necessary, we may assume that\n/integraldisplay\nRd+2|a|2+|w|2π(da⊗dw)<∞.\nThen we set Ω 1=Rd+2,Ω0={0,...,d}and\na1\nθ1= (θ1)1, a0\nθ1,θ0= (θ1)1+θ0,\ni.e. we index Rd+2by itself. In this equation, ( θ1)idenotes the i-the component of the vector\nθ1∈Rd+2.\nFor networks with more than one hidden layer, the output of the fir st layer is vector-valued.\nThe preceding analysis determined that the first hidden layer takes values in the reproducing\nkernel Hilbert space H¯π. It thus seems reasonable at first glance to choose H¯πas an index space\nfor the second hidden layer. This intuition is flawed since the output o f the first hidden layer is\nan RKHS function of x, a variable which is fixed when calculating the output of the network a nd\ninaccessible to the second hidden layer. The previous observation h as no bearing on the inner\nworkings of neural networks, but only on the approximation power of functions described by a\ngiven neural network architecture.\nPursuing a different route, we note that πis a Radon measure on R×Rd+1whereRis the\noutput and Rd+1the input layer (interpreting xas (x,1)). For networks with two hidden layers,\nwe note that\n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/integraldisplay\nΩ1a1\nθ2θ1σ/parenleftbigg/integraldisplay\nΩ0a0\nθ1θ0xθ0π0(dθ0)/parenrightbigg\nπ1(dθ1)/vextenddouble/vextenddouble/vextenddouble/vextenddouble2\nL2(π2)\n≤/integraldisplay\nΩ2/parenleftbigg/integraldisplay\nΩ1/vextendsingle/vextendsinglea1\nθ2θ1/vextendsingle/vextendsingle2π1(dθ1)/parenrightbigg2\n2/parenleftbigg/integraldisplay\nΩ1/integraldisplay\nΩ0/vextendsingle/vextendsinglea0\nθ1θ0/vextendsingle/vextendsingle2π0(dθ0)π1(dθ1)/parenrightbigg2\n2\nπ2(dθ2) sup\nx∈K|x|2\n=/ba∇dbla1/ba∇dblL2(π2⊗π1)/ba∇dbla0/ba∇dblL2(π1⊗π0)sup\nx∈K|x|2\nfor allx∈K. We can thus viewa neuralnetworkwith twohidden layersandparam eterfunctions\na2,a1,a0as a composition of linear and non-linear maps in the following way:\n(1) Letπ1be the distribution of vectors w:= (a0\nθ1θ0)d+1\nθ0=1onRd+1andA1:Rd→L2(π1) is\nthe affine map described by\n(A1x)θ1=/integraldisplay\nΩ0a0\nθ1θ0xθ0=1\nd+1d+1/summationdisplay\nθ0=1a0\nθ1θ0xθ0.\nWe may use Rd+1as its own index set, i.e. a0\nθ1:=θ1. To emphasize the fact that index\nset and distribution are natural, we denote w=1\nd+1θ1,¯π=π1.\n(2) The non-linearity σacts onL2(π1) by pointwise application.\n28 WEINAN E AND STEPHAN WOJTOWYTSCH\n(3) Let (Ω 2,A2,π2) be a general probability space used as an index set. The linear map\nA2:L2(π1)→L2(π2) is given by\n(A2f)θ2=/integraldisplay\nΩ1a1\nθ2θ1fθ1π1(dθ1) =/a\\}b∇acketle{ta1\nθ2:,z/a\\}b∇acket∇i}htL2(π1)\nwherea1\nθ2:(θ1) =a1\nθ2θ1.\n(4) The non-linearity σacts onL2(π2) by pointwise application.\n(5) The map A3:L2(π2)→Ris given by\nA3f=/integraldisplay\nΩ2a2\nθ2fθ2π2(dθ2).\nThen\nf(x) =/parenleftbig\nA3◦σ◦A2◦σ◦A1)(x)\n=/integraldisplay\nΩ2a2\nθ2σ/parenleftigg/angbracketleftbigg\na1\nθ2:,σ/parenleftbigg1\nd+1/a\\}b∇acketle{ta0\nθ1:,x/a\\}b∇acket∇i}htRd+1/parenrightbigg/angbracketrightbigg\nL2(π1)/parenrightigg\nπ(dθ2)\n=/integraldisplay\nR×L2(¯π)˜aσ/parenleftig\n/a\\}b∇acketle{t˜h,σ(wTx)/a\\}b∇acket∇i}htL2(¯π)/parenrightig\n(H♯π2)(d˜a⊗d˜h)\nwhere\nH: Ω2→R×L2(¯π), θ 2/ma√sto→(a2\nθ2,a1\nθ2:).\nThus we may in a natural way interpret\n•Ω0={0,...,d}with the normalized counting measure.\n•Ω1=Rd+1=L2(Ω0). 
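The decomposition f = A₃∘σ∘A₂∘σ∘A₁ can be mirrored in code by discretizing the index spaces: each Aᵢ becomes a matrix acting on vectors representing functions in L²(πᵢ), σ acts pointwise, and the composition reproduces the mean-field forward pass. A hedged sketch with arbitrary discretization sizes and weights:

import numpy as np

rng = np.random.default_rng(12)
relu = lambda z: np.maximum(z, 0.0)
d, n1, n2 = 3, 40, 30                          # sizes of the discretized index sets Omega_1, Omega_2
a0 = rng.standard_normal((n1, d + 1))          # rows are sampled indices theta_1 in Omega_1 = R^{d+1}
a1 = rng.standard_normal((n2, n1))
a2 = rng.standard_normal(n2)

A1 = lambda x: a0 @ x / (d + 1)                # affine map into (discretized) L^2(pi_1)
A2 = lambda v: a1 @ v / n1                     # L^2(pi_1) -> L^2(pi_2), inner products against a^1_{theta_2, .}
A3 = lambda v: a2 @ v / n2                     # L^2(pi_2) -> R

x = np.append(rng.uniform(-1.0, 1.0, size=d), 1.0)
f_composed = A3(relu(A2(relu(A1(x)))))
f_direct = np.mean(a2 * relu(a1 @ relu(a0 @ x / (d + 1)) / n1))
print(f_composed, f_direct)                    # identical: the network is A3 o sigma o A2 o sigma o A1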
¯π=π1can be any probability distribution on Ω 1with finite\nsecond moments.\n•Ω2=R×L2(¯π) andπ2is a probability distribution with finite second moments.\nMore generally, we set\n•Ω0={0,...,d}with the normalized counting measure ¯ π0.\n•Ωℓ=L2(¯πℓ−1) and a measure ¯ πℓwith finite second moments on Ω ℓfor 1≤ℓ≤L−1.\n•ΩL=R×L2(¯πL−1) and a measure ¯ πLwith finite second moments on Ω L.\nThe outermost index space Ω Lhas the additional factor Rcompared to Ω ℓbecause both the first\nand the last operations in a neural network are linear. Note that Ω ℓis a Polish space for every\nℓby induction.\nAll considerations above were for fixed x. Asxvaries, a neural network with Lhidden layers\ntakes the form f(x) = (zL◦···◦z1)(x) where\n(1)z1∈C0,1(sptP,Ω1),z0(w,x) =wTx=Ewi∼π0wixiwhereweinterpret w∈Ω1=L2(π0).\n(2)xℓ∈C0,1(sptP,Ωℓ+1) is defined by zℓ(y,f) =/a\\}b∇acketle{tf, σ(y)/a\\}b∇acket∇i}htπℓ−1wherey∈πℓ−1is the output\nof the previous layer and f∈πℓ−1is the natural index of zℓ. Thuszℓ(·,y)∈L2(πℓ−1) =\nΩℓ.\n(3)zL(y) =/integraltext\nΩL˜aσ(/a\\}b∇acketle{tf,y/a\\}b∇acket∇i}htπL−1)πL(d˜a⊗df).\nAll natural index spaces above are separable Hilbert spaces and th erefore isomorphic to each\nother (for all ℓfor which Ω ℓif infinite-dimensional) and to both L2(0,1) andℓ2. However, the\napplication of the non-linearity σinL2andℓ2is not invariant under Hilbert-space isomorphisms.\nIt makes a big difference whether we take the positive part of a func tionf∈L2(0,1) set all\nnegative Fourier-coefficients of a function to zero. Luckily, natur al isomorphisms preserve the\nstructure of continuous neural network models as in Remark 4.4.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 29\n5.Optimization of the continuous network model\nWe now study gradient flows for the risk functionals in the continuou s setting. We will\nrestrict ourselves to the indexed representation with L2-weights. The most natural optimization\nalgorithm for weight-functions aℓ∈L2((0,1)2) is theL2-gradient flow. We show that the usual\ngradient descent dynamics of neural network training can be reco vered as discretizations of\nthe continuous optimization algorithm. In this sense, we follow the ph ilosophy of designing\noptimization algorithms for continuous models and discretizing them la ter which was put forth\nin [EMW19b]. We present our findings in the simplest possible setting.\n5.1.Discretizations of the continuous gradient flow. We now show that a natural dis-\ncretization of the continuous gradient flow recovers the gradient descent dynamics for the usual\nmulti-layer neural networks with the “mean-field” scaling. This is a ge neral feature of Vlasov\ntype dynamics.\nThe following computations are purely formal, assuming that solution s to all ODEs proposed\nbelow exist – the issue of existence and uniqueness of solutions is brie fly discussed in Appendix\nB. The arguments however are based on an identity and energy diss ipation property which are\nexpected to be stable when considering generalized solutions. For s mooth activation functions\nσ, all computations can be made rigorous and solutions exist.\nLemma 5.1. 
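The discretized representation at the start of Lemma 5.1 is an ordinary finite network in the mean-field scaling, in which every layer averages rather than sums. For reference, a minimal forward pass in this convention (two hidden layers, hypothetical widths and weights):

import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def mean_field_forward(a, b, c, x):
    # f(x) = (1/M) sum_i a_i sigma( (1/m) sum_j b_ij sigma( (1/(d+1)) sum_k c_jk x_k ) )
    h = relu(c @ x / c.shape[1])          # inner layer, averaged over the d+1 input coordinates
    g = relu(b @ h / b.shape[1])          # middle layer, averaged over m
    return a @ g / a.shape[0]             # outer layer, averaged over M

rng = np.random.default_rng(13)
d, m, M = 4, 10, 6
c = rng.standard_normal((m, d + 1))
b = rng.standard_normal((M, m))
a = rng.standard_normal(M)
x = np.append(rng.uniform(-1.0, 1.0, size=d), 1.0)
print(mean_field_forward(a, b, c, x))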
Consider a discretized version of the continuous indexed re presentation:\nf(x) =1\nmLmL/summationdisplay\niL=1aL\niLσ\n1\nmL−1mL−1/summationdisplay\niL−1=1aL−1\niLiL−1σ/parenleftigg\n...σ/parenleftigg\n1\nm1m1/summationdisplay\ni1=1a1\ni2i1σ/parenleftigg\n1\nd+1d+1/summationdisplay\ni0=1a0\ni1i0xi0/parenrightigg/parenrightigg/parenrightigg\n.\nDefine functions\naL:(0,1)→R, aL(s) =aL\nifori−1\nmL≤s <i\nmL\naℓ:(0,1)2→R, aℓ(r,s) =aℓ\nijfori−1\nmℓ+1≤r <i\nmL,j−1\nmℓ≤s <i\nmℓ.\nfor0≤ℓ < L. Thenf=faL,...,a0and the coefficient functions aL,...,a0evolve by the L2-\ngradient flow of\nR(aL,...,a0) =/integraldisplay\nRdℓ/parenleftbig\nfaL,...,a0(x),y/parenrightbig\nP(dx⊗dy)\nif and only if the parameters aL\ni,aℓ\nijevolve by the time-rescaled gradient flows\n˙aL\ni=−mL∂aL\niR/parenleftbig\naL\niL,...,a0\ni1i0/parenrightbig\n˙aℓ\nij=−mℓ+1mℓ∂aℓ\nijR/parenleftbig\naL\niL,...,a0\ni1i0/parenrightbig\n0≤i≤L−1 (5.1)\nwhere the risk of finitely many weights is defined accordingly .\nPassing to a single index set (0 ,1) for all layers, we lose the information about the scaling\nof the width and compensate by prescribing layer-wise learning rate s which lead to balanced\ntraining velocities.\nProof.The proof for networks with one hidden layer can be found in [EW20a , Lemma 2.8]. To\nsimplify the presentation, we focus on the case of two hidden layers . The general case follows\nthe same way. Consider the network\nf(x) =1\nMM/summationdisplay\ni=1aiσ\n1\nmm/summationdisplay\nj=1bijσ/parenleftigg\n1\nd+1d+1/summationdisplay\nk=1cjkxk/parenrightigg\n\n30 WEINAN E AND STEPHAN WOJTOWYTSCH\nand compute the gradient\n∇ai,bij,cjkR(a,b,c) =∇ai,bij,cjk/integraldisplay\nRdℓ/parenleftbig\nfa,b,c(x),y/parenrightbig\nP(dx⊗dy)\n=/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig\n∇ai,bij,cjkfa,b,c,(x)P(dx⊗dy)\n=/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig\n1\nMσ/parenleftig\n1\nm/summationtextm\nj=1bijσ/parenleftig\n1\nd+1/summationtextd+1\nk=1cjkxk/parenrightig/parenrightig\n1\nMaiσ′/parenleftbig1\nm/summationtext\nlbilσ(...)/parenrightbig1\nmσ/parenleftig\n1\nd+1/summationtextd+1\nk=1cikxk/parenrightig\n1\nM/summationtextM\ni=1aiσ′(...)1\nmσ′/parenleftig\n1\nd+1/summationtextd+1\nl=1cjlxl/parenrightig\n1\nd+1xk\nP(dx⊗dy)\n=/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig\n1\nMσ(fbi:,c(x))\n1\nMmaiσ′(fbi:,c(x))σ/parenleftbig\nfcj:(x)/parenrightbig\n1\nm(d+1)1\nM/summationtextM\ni=1aiσ′(fbi:,c(x))σ′/parenleftbig\nfcj:(x)/parenrightbig\nP(dx⊗dy)\nwhere\nfbi:,c(x) =1\nmm/summationdisplay\nj=1bijσ/parenleftigg\n1\nd+1d+1/summationdisplay\nk=1cjkxk/parenrightigg\nandfcj:(x) =1\nd+1d+1/summationdisplay\nl=1cjlxl.\nEqually, we can compute the L2-gradient by taking variations\nδa;φR(a,b,c) = lim\nh→0R(a+hφ,b,c)−R(a,b,c)\nh\n=/integraldisplay\nRdlim\nh→0ℓ/parenleftbig\nfa+hφ,b,c(x),y/parenrightbig\n−ℓ/parenleftbig\nfa,b,c(x),y/parenrightbig\nhP(dx⊗dy) (5.2)\n=/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig\nlim\nh→0fa+hφ,b,c(x)−fa,b,c(x)\nhP(dx⊗dy)\n=/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig/integraldisplay1\n0φ(s)σ/parenleftbig\nfbs:,c(x)/parenrightbig\ndsP(dx⊗dy)\n=/integraldisplay1\n0/parenleftbigg/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig\nσ/parenleftbig\nfbs:,c(x)/parenrightbig\nP(dx⊗dy)/parenrightbigg\nφ(s)ds\nsincefa,b,cis linear in a. 
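The gradients computed in this proof assemble into the time-rescaled updates (5.1): a_i moves with rate M, b_ij with rate Mm, and c_jk with rate m(d+1), exactly cancelling the mean-field prefactors. Below is a hedged sketch of one discretized gradient step for the squared loss ½(f−y)², with hand-coded gradients; weights, data and the learning rate are arbitrary, and this is not the authors' code.

import numpy as np

relu = lambda z: np.maximum(z, 0.0)
drelu = lambda z: (z > 0).astype(float)

def grads(a, b, c, x, y):
    M, m, dp1 = a.shape[0], b.shape[1], c.shape[1]
    h_pre = c @ x / dp1;  h = relu(h_pre)                 # inner layer
    g_pre = b @ h / m;    g = relu(g_pre)                 # middle layer
    f = a @ g / M
    e = f - y                                             # derivative of the loss (f - y)^2 / 2 in f
    ga = e * g / M                                        # dR/da_i
    gb = e * np.outer(a * drelu(g_pre), h) / (M * m)      # dR/db_ij
    gc = e * np.outer((a * drelu(g_pre)) @ b * drelu(h_pre), x) / (M * m * dp1)   # dR/dc_jk
    return ga, gb, gc

rng = np.random.default_rng(14)
d, m, M, eta = 4, 10, 6, 1e-2
c = rng.standard_normal((m, d + 1)); b = rng.standard_normal((M, m)); a = rng.standard_normal(M)
x = np.append(rng.uniform(-1.0, 1.0, size=d), 1.0); y = 0.7

ga, gb, gc = grads(a, b, c, x, y)
a -= eta * M * ga                       # rate m_L = M for the outer layer
b -= eta * M * m * gb                   # rate m_{l+1} m_l = M m for the middle layer
c -= eta * m * (d + 1) * gc             # rate m_1 m_0 = m (d+1) for the inner layer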
Thus the L2-gradient is of Rwith respect to ais represented by the\nL2-function\nδaR(a,b,c;s) =/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig\nσ/parenleftbig\nfbs:,c(x)/parenrightbig\nP(dx⊗dy)\nwhere again\nfbs:,c(x) =/integraldisplay1\n0bst/parenleftigg\n1\nd+1d+1/summationdisplay\ni=1ctixi/parenrightigg\ndt.\nUsing the chain rule instead of linearity, we compute\nδb;φR(a,b,c) =/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig\nlim\nh→0fa+hφ,b,c(x)−fa,b,c(x)\nhP(dx⊗dy)\n=/integraldisplay\nRd(∂1ℓ)(...)/integraldisplay1\n0aslim\nh→0σ/parenleftig/integraltext1\n0/parenleftbig\nbs,t+hφs,t/parenrightbig\nσ(fct:(x))dt/parenrightig\n−σ/parenleftig/integraltext1\n0bs,tσ(fct:(x))dt/parenrightig\nhdsP(dx⊗dy)\n=/integraldisplay\nRd(∂1ℓ)(...)/integraldisplay1\n0asσ′(fbs:c(x))/integraldisplay1\n0φs,tσ(fct:(x))dsdtP(dx⊗dy)\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 31\n=/integraldisplay\n(0,1)2φs,t/parenleftbigg/integraldisplay\nRd(∂1ℓ)(...)/integraldisplay1\n0asσ′(fbs:c(x))σ(fct:(x))/parenrightbigg\ndsdt\nand obtain\nδbR(a,b,c;s,t) =/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig\nasσ′/parenleftbig\nfbs:c(x)/parenrightbig\nσ(fct:(x))P(dx⊗dy)\nδcR(a,b,c;t) =/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfa,b,c(x),y/parenrightbig/integraldisplay1\n0a(s)σ′(fbs:c(x))b(s,t)σ′(fct:(x))xi\nd+1dsP(dx⊗dy).\nWe can now see by comparing the terms that the gradient flow of a fin ite number of weights,\ninterpreted as a step function, is a solution to the L2-gradient flow under the appropriate time-\nscaling.\nThe general case for deep neural networks follows the same way, in which case\nδaℓR(aL,...,a0;θℓ+1,θℓ) =/integraldisplay\nRd(∂1ℓ)/parenleftbig\nfaL,...,a0(x),y/parenrightbig/integraldisplay\n(0,1)L−ℓ−1aL\nθLσ′(faL−1\nθL:...a0(x))...aℓ+1\nθℓ+1θℓ\nσ′(faℓ\nθℓ:...a0(x))σ(faℓ−1\nθℓ:...a0(x))dθL...dθℓ+2P(dx⊗dy).\n/square\nIf the learning rates are not adapted to the layer width, the weight s of different layers may\nmove at different rates. In the natural time scaling, some layers wo uld evolve at positive speed\nwhile others would remain frozen at their initial position in the limit. In pa rticular, if the\nwidth of the two outermost layers goes to infinity, the index set of t he second layer has size\nmLmL−1≫mL, meaning that the outermost layer would move much faster. In [AOY 19], the\nauthors consider the opposite extreme where the coefficients of t he first and last layers are frozen\nand only intermediate layers evolve (with mℓ≡mfor allℓ).\nRemark5.2.Alternativeproposalsformulti-layernetworktrainingin mean fieldsc aling[AOY19,\nNgu19, NP20, SS19]. In this article, we opted for a particularly simple description of wide multi-\nlayer networks and the natural extension of gradient descent dy namics. All results proved here\nhold fornetworkswith finite layersofany width and therefore shou ld remainvalid moregenerally\nfor another description of the parameter distribution associated to infinitely wide multi-layer\nnetworks.\n5.2.Growth of the path norm. Assuming existence of the gradient-flow evolution for the\nmoment, we prove that the path-norm of an arbitrarily wide neural network increases at most\npolynomially in time under natural training dynamics. First, we conside r the second moments.\nLemma 5.3. Consider the risk functional\nR(aL,...a0) =/integraldisplay\nRdℓ/parenleftbig\nfaL,...,a0(x),y/parenrightbig\nP(dx⊗dy)\nwhereℓ:R×R→[0,∞)is a sufficiently smooth loss function and Pis a compactly supported\ndata distribution. 
Then
\[
(5.3)\qquad \big\|a^i(t)\big\|_{L^2(\pi_{i+1}\otimes\pi_i)} \le \big\|a^i(0)\big\|_{L^2(\pi_{i+1}\otimes\pi_i)} + \sqrt{R\big(a^L(0),\dots,a^0(0)\big)}\;t^{1/2}.
\]
Proof. We calculate
\[
\frac{d}{dt}\int_{\Omega_{i+1}\times\Omega_i}\big(a^i_{\theta_{i+1}\theta_i}(t)\big)^2\,(\pi_{i+1}\otimes\pi_i)(d\theta_{i+1}\otimes d\theta_i)
= 2\int_{\Omega_{i+1}\times\Omega_i} a^i_{\theta_{i+1}\theta_i}(t)\,\frac{d a^i_{\theta_{i+1}\theta_i}(t)}{dt}\,d\theta_{i+1}\,d\theta_i
\le 2\left(\int_{\Omega_{i+1}\times\Omega_i}\big(a^i_{\theta_{i+1}\theta_i}(t)\big)^2\,d\theta_{i+1}d\theta_i\right)^{\frac12}\left(\int_{\Omega_{i+1}\times\Omega_i}\Big(\frac{d}{dt}a^i_{\theta_{i+1}\theta_i}(t)\Big)^2\,d\theta_{i+1}d\theta_i\right)^{\frac12},
\]
so
\[
\frac{d}{dt}\|a^i\|_{L^2(\pi_{i+1}\otimes\pi_i)}
= \frac{\frac{d}{dt}\|a^i\|^2_{L^2(\pi_{i+1}\otimes\pi_i)}}{2\,\|a^i\|_{L^2(\pi_{i+1}\otimes\pi_i)}}
\le \Big\|\frac{d}{dt}a^i\Big\|_{L^2(\pi_{i+1}\otimes\pi_i)}
\le \Big|\frac{d}{dt}R(a^L,\dots,a^0)\Big|^{\frac12},
\]
since the $L^2$-gradient flow naturally satisfies the energy dissipation identity
\[
\frac{d}{dt}R(a^L,\dots,a^0) = -\sum_{i=0}^{L}\Big\|\frac{d}{dt}a^i\Big\|^2_{L^2(\pi_{i+1}\otimes\pi_i)}.
\]
Thus
\[
\big\|a^i(t)\big\|_{L^2(\pi_{i+1}\otimes\pi_i)}
\le \big\|a^i(0)\big\|_{L^2(\pi_{i+1}\otimes\pi_i)} + \int_0^t \frac{d}{ds}\|a^i(s)\|_{L^2(\pi_{i+1}\otimes\pi_i)}\,ds
\le \big\|a^i(0)\big\|_{L^2(\pi_{i+1}\otimes\pi_i)} + \left(\int_0^t 1\,ds\right)^{\frac12}\left(\int_0^t\Big|\frac{d}{ds}R\big(a^L(s),\dots,a^0(s)\big)\Big|\,ds\right)^{\frac12}
\le \big\|a^i(0)\big\|_{L^2(\pi_{i+1}\otimes\pi_i)} + \sqrt{R\big(a^L(0),\dots,a^0(0)\big)}\;t^{1/2},
\]
since the risk is monotone decreasing and bounded from below by zero. $\square$

Remark 5.4. Like in [Woj20, Lemma 3.3], a more careful analysis shows that the increase in the $L^2$-norm actually satisfies the stronger estimate
\[
\lim_{t\to\infty}\frac{\|a^i(t)\|_{L^2}}{t^{1/2}} = 0.
\]
The proof of this result is based on the energy dissipation identity which characterizes weak solutions to gradient flows.

Corollary 5.5. Assume that $\|a^i(0)\|_{L^2(\pi_{i+1}\otimes\pi_i)}\le C_0$ for all $i=0,\dots,L$ and some constant $C_0>0$. Then
\[
(5.4)\qquad \big\|f_{a^L(t),\dots,a^0(t)}\big\|_{\Omega_L,\dots,\Omega_0;K} \le \Big(C_0 + \sqrt{R\big(a^L(0),\dots,a^0(0)\big)}\;t^{1/2}\Big)^{L+1}
\]
for all $t>0$.

Proof. Follows from Lemmas 4.6 and 5.3. $\square$

As such, neural tree spaces are also the relevant class of function spaces for suitably initialized neural networks which are trained by a gradient descent algorithm. Like in [WE20, Theorem 2], the slow increase of the norm together with the poor approximation property from Corollary 3.18 implies that the training of multi-layer networks may be subject to the curse of dimensionality when trying to approximate general Lipschitz functions in $L^2(P)$ for a truly high-dimensional data distribution $P$.

Corollary 5.6. Consider population and empirical risk functionals
\[
R(a^L,\dots,a^0) = \frac12\int_{[0,1]^d}\big(f_{a^L,\dots,a^0}-f^*\big)^2(x)\,dx,\qquad
R_n(a^L,\dots,a^0) = \frac1{2n}\sum_{i=1}^n\big(f_\pi-f^*\big)^2(x_i),
\]
where $f^*$ is a Lipschitz-continuous target function and the points $x_i$ are iid samples from the uniform distribution on $[0,1]^d$. 
There exists f∗satisfying\nsup\nx∈[0,1]d/vextendsingle/vextendsinglef∗(x)/vextendsingle/vextendsingle+ sup\nx/\\e}atio\\slash=y|f∗(x)−f∗(y)|\n|x−y|≤1\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 33\nsuch that the weight functions of aL,...,a0evolving by L2-gradient flow of either RnorRsatisfy\nlimsup\nt→∞/bracketleftbig\ntγR(aL(t),...,a0(t))/bracketrightbig\n=∞\nfor allγ >2L\nd−2.\n6.Conclusion\nThe classical function spaces which have been proved very succes sful in low-dimensional anal-\nysis (Sobolev, BV, BD, ...) seem ill-equipped to tackle problems in mac hine learning. The\nsituation has been partially remedied in some cases by introducing the function spaces asso-\nciated to different models, like reproducing kernel Hilbert spaces fo r random feature models,\nBarron space for two-layer neural networks or the flow-induced function space for infinitely deep\nResNets [EMW19a].\nIn this article, we introduced several function classes for fully con nected multi-layer feed-\nforward networks:\n(1) The neural tree spaces WL(K) for questions related to approximation theory and varia-\ntional analysis,\n(2) theclassesofarbitrarilywideneuralnetworksmodelledongene ralindexspacesΩ L,...,Ω0,\nwhich we denoted by XΩL,...,Ω0;K, and\n(3) the classes of arbitrarily wide neural networks modelled on gene ral index spaces with\nHilbert weights (or multi-layer spaces), which we denoted by WπL,...,π0(K).\nThe key to the definition of these spaces is the representation of f unctions.\nNeural tree spaces are built using a tree-like index structure, and network weights have no\nnatural meaning. This point of view thus cannot encompass training algorithms which operate\non networkweights. Byanalogywith classicalapproximationtheory , we canthink offinite neural\nnetworks as polynomials (finitely parametrized functions) and of ne ural tree spaces as Sobolev\nor Besov classes obtained as the closure under a weak norm, but to o general for classical Taylor\nseries. We denoted these by Wfor ‘wide’ structures.\nThe classes of arbitrarily wide neural networks are introduced as v ery general function classes\nwhich exhibit the natural neural network structure via generalize d index spaces. In the general\nclass of arbitrarily wide networks, weight functions are assumed to be merely measurable with\nintegrable products, which is a too large space to study training dyn amics. The restriction of\nthe multi-layer norm to this space is a natural norm, and the closure of the unit ball in the space\nof arbitrarily wide neural networks and neural tree space coincide s.\nTo study training dynamics, we consider the space of arbitrarily wide neural networks with\nHilbert weights, where the L2-inner product induces a gradient flow in the natural way. The\nrestriction of the path norm does not control the L2-magnitude of the weight functions, so we\nstudied a different measure of complexity on this function space (wh ich is not usually a norm).\nThe complexity measure was seen to bound the path-norm from abo ve and to grow at most like\ntL+1\n2in time under gradient flow training.\nIt is immediate that WπL,...,π0(K)⊆XΩL,...,Ω0;K⊆ WL(K) with inclusions that are strict if\nthe index spaces are finite. Whether the inclusions are strict in the g eneral case, is not clear.\nIn the case of three-layer networks, they can be interpreted as the spaces in which the first\nhidden layer is wide enough to output Barron space, a separable sub space of Barron space and a\nreproducing kernel Hilbert space respectively. 
All three spaces c ontain all Barron functions and\ntheir compositions.\nOne naturally asks which one of these spaces is most suited for desc ribing multi-layer neural\nnetworks. An ideal space should (1) be complete, (2) have a nice ap proximation theory, (3) have\na low Rademacher complexity, and (4) most importantly, be concret e enough so that one can\n34 WEINAN E AND STEPHAN WOJTOWYTSCH\nmake use of the function representation for practical purposes . At this point, we cannot prove\nany of the spaces introduced here satisfies all these requirement s. Our feeling is that the space\nWπL,...,π0(K) for sufficiently large index spaces (Ω ℓ,Aℓ,πℓ) might be the most promising one,\neven though at this point it is only a metric vector space, not a norme d space (see Definitions\n4.9 and 4.10 and the surrounding paragraphs.). However, it seems t o be the most relevant space\nfor practical purposes.\nA number of questions remain open.\n(1) Beyond first observations, the relationship between the neur al tree spaces WL(K) and\nits subspace WπL,...,π0(K) for sufficiently expressive index sets remains unexplored. The\nfirst space is suited for variational and approximation problems, wh ile the second is a\nnatural object for mean-field training. It is an important question how much of the\nhypothesis space we can explore using natural training dynamics.\nEven for networks with two hidden layers, only heuristic observatio ns about W2(K)\nand its subspaces WL,L,π0;Kand/tildewiderW2=X(0,1),(0,1),{0,...,d};Kof network-like functions are\navailable. Whether the two can be treated in a unified perspective re mains to be seen.\n(2) The direct approximation theorem holds for neural tree space s, but not with the Monte-\nCarlo rate(in terms of free parameters). Whether a better rate can be achieved for\nfunctions in neural tree space for L≫1 (or at least a space of arbitrarily wide neural\nnetworks) remains an important open problem.\n(3) The properties of the complete metric vector spaces WπL,...π0(K) have not been studied\nyet.\n(4) We defined a monotonically increasing sequence of spaces WLforL∈N. Examples 2.4\nand 2.5 show that BX,Kmay much larger than Xor exactly the same, depending on X.\nConcerning neural networks, it is clear that W1is much larger than W0. In [EW20b],\nwe give give an easy to check criterion which implies that a function is no t inW1and\nprovide examples of functions which are in WL,L,π0(K)⊆ W2, but not W1. Beyond this,\nthe relationship between the spaces WℓandWLforℓ < Lis largely unexplored.\n(5) In this paper, we considered the minimization of an integral risk f unctional. A more\nclassical problem in numerical analysis concerns the discretization o f variational prob-\nlems and partial differential equations. In both applications, a key c omponent is the\napproximation of a solution f∗of the problem by functions fmin a finitely parameter-\nized hypothesis class (Galerkin spaces or neural networks). Ofte n, the approximation\nrate/ba∇dblfm−f∗/ba∇dbl ≤m−αof solutions fmof the discretized problem to the true solution\ndepends on the properties of f∗(as well as the choice of norm).\nFor many variational problems and partial differential equations, a priori estimates on\nthe solutions in Sobolev or H¨ older spaces are available. The regularit y off∗is therefore\nunderstood, as well as the expected rate of convergence fm→f.\nIn machine learning, a regularity theory of this type is generally missin g. 
It is often\nunclear in which function space the minimizer of a well-posed risk funct ional should lie,\nand thus equally unclear what type of machine learning model to use ( random feature\nmodel, shallow neural network, deep neural network, ResNet, .. .). A regularity theory\nwhich bounds the necessary number of layers in a neural network f rom above or below\neven for specific learning applications is not yet available.\nAs shown in Corollary 5.6, gradient descent may converge very slowly if the target\nfunction does not lie in the correct target space andL\nd≪1.\n(6) Even assuming that the solution to a variational problem is known explicitly, it remains\ndifficult to decide whether it lies in WLfor a given L. Only for L= 1 a positive criterion\nis given in [Bar93] and a negative criterion following [EW20b, Theorem 5 .4]. In general,\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 35\nit remains hard to check whether a function can be expressed as a n eural network of\ndepthL.\n(7) In this article, we focused on fully connected networks with infin itely wide layers. The\ntheory for other types of neural networks (convolutional, recu rrent, residual) will be the\nsubject of future work.\nStarting with the articles [HR17, E17, LCTW17, EHL18], deep ResNet s have been\nmodeled as discretizations of an ODE flow (sometimes referred to as ‘neural ODEs’).\nA function space for infinitely deep residual networks with skip-con nections after every\nlayer has been proposed in [EMW19a]. In this model, the width of increm ented layers\nis constant, but the width of the residual block may go to infinity. Th e case of ResNets\nwhich are both very wide and very deep and have skip-connections e veryℓ≥1 layers is\ncurrently unexplored.\nAs demonstrated in Example 2.12, Rademacher complexity cannot giv e a significantly\nbetter generalization bound for the space of convolutional netwo rks than for the space of\nfully connected networks. Despite many heuristic explanations, th e factors contributing\nto the success of convolutional networks in image processing have not been understood\nrigorously (for non-linear activation functions).\n(8) Even for finite neural networks with ReLU activation and more t han one hidden layer,\nwe are not aware of rigorous results for the existence of solutions to the gradient flow\nequations in any strong or weak formulation.\n(9) In many applications, neural networks are initialized with parame ters that scale in such\na way that the path-norm grows beyond all bounds as the number o f neurons increases.\nLearning rates may not be adapted to the width of the layers in applic ations, and the\nscaling invariance σ(z)≡λσ(λ−1z) forλ >0 may lead to coefficients which are of very\ndifferent magnitude on different layers. In this situation, our analys is does not apply,\nand it can be shown rigorously in some cases that very wide networks of fixed depth may\nbehave like linear models [ADH+19, DZPS18, DLL+18, EMWW19, EMW19c, JGH18].\nThese analyses typically make use of over-parametrization by assu ming that the net-\nwork has many more neurons than the data set has training samples . In this scaling\nregime, the correct function spaces and training dynamics for wide networks under pop-\nulation risk are generally unexplored.\nReferences\n[ADH+19] S. Arora, S. S. Du, W. Hu, Z. Li, R. R. Salakhutdinov, and R. Wang. On exact computation with an\ninfinitely wide neural net. In Advances in Neural Information Processing Systems , pages 8139–8148,\n2019.\n[AFP00] L. Ambrosio, N. 
Fusco, and D. Pallara. Functions of bounded variation and free discontinuity prob -\nlems, volume 254. Clarendon Press Oxford, 2000.\n[AOY19] D. Ara` ujo, R. I. Oliveira, and D. Yukimura. A mean-fi eld limit for certain deep neural networks.\narXiv:1906.00193 [math.ST] , 2019.\n[Bac17] F. Bach. Breaking the curse of dimensionality with c onvex neural networks. The Journal of Machine\nLearning Research , 18(1):629–681, 2017.\n[Bar93] A. R. Barron. Universal approximation bounds for su perpositions of a sigmoidal function. IEEE\nTransactions on Information theory , 39(3):930–945, 1993.\n[BK18] A. R. Barron and J. M. Klusowski. Approximation and es timation for high-dimensional deep learning\nnetworks. arXiv preprint arXiv:1809.03090 , 2018.\n[Bre11] H. Brezis. Functional analysis, Sobolev spaces and partial differenti al equations . Universitext.\nSpringer, New York, 2011.\n[CB20] L. Chizat and F. Bach. Implicit bias of gradient desce nt for wide two-layer neural networks trained\nwith the logistic loss. arxiv:2002.04486 [math.OC] , 2020.\n[Cyb89] G. Cybenko. Approximation by superpositions of a si gmoidal function. Mathematics of control,\nsignals and systems , 2(4):303–314, 1989.\n36 WEINAN E AND STEPHAN WOJTOWYTSCH\n[DLL+18] S. S. Du, J. D. Lee, H. Li, L. Wang, and X. Zhai. Gradient des cent finds global minima of deep\nneural networks. arXiv:1811.03804 [cs.LG] , 2018.\n[Dob10] M. Dobrowolski. Angewandte Funktionalanalysis: Funktionalanalysis, Sob olev-R¨ aume und elliptis-\nche Differentialgleichungen . Springer-Verlag, 2010.\n[DZPS18] S. S. Du, X. Zhai, B. Poczos, and A. Singh. Gradient d escent provably optimizes over-parameterized\nneural networks. arXiv:1810.02054 [cs.LG] , 2018.\n[E17] W. E. A proposal on machine learning via dynamical syst ems.Communications in Mathematics and\nStatistics , 5(1):1–11, 2017.\n[EG15] L. C. Evans and R. F. Gariepy. Measure theory and fine properties of functions . CRC press, 2015.\n[EHL18] W. E, J. Han, and Q. Li. A mean-field optimal control fo rmulation of deep learning. Research in the\nMathematical Sciences , 6:, arXiv:1807.01083 [math.OC], 07 2018.\n[Els96] J. Elstrodt. Maß-und Integrationstheorie , volume 7. Springer, 1996.\n[EMW18] W. E, C. Ma, and L. Wu. A priori estimates of the popula tion risk for two-layer neural networks.\nComm. Math. Sci. , 17(5):1407 – 1425 (2019), arxiv:1810.06397 [cs.LG] (2018 ).\n[EMW19a] W. E, C. Ma, and L. Wu. Barron spaces and the composit ional function spaces for neural network\nmodels. arXiv:1906.08039 [cs.LG] , 2019.\n[EMW19b] W. E, C. Ma, and L. Wu. Machine learning from a contin uous viewpoint. arxiv:1912.12777\n[math.NA] , 2019.\n[EMW19c] W. E, C. Ma, and L. Wu. A comparative analysis of opti mization and generalization properties of\ntwo-layer neural network and random feature models under gr adient descent dynamics. Sci. China\nMath., https://doi.org/10.1007/s11425-019-1628-5, arXiv:19 04.04326 [cs.LG] (2019).\n[EMWW19] W. E, C. Ma, Q. Wang, and L. Wu. Analysis of the gradie nt descent algorithm for a deep neural\nnetwork model with skip-connections. arXiv:1904.05263 [cs.LG] , 2019.\n[EW20a] W. E and S. Wojtowytsch. Kolmogorov width decay and p oor approximators in machine learn-\ning: Shallow neural networks, random feature models and neu ral tangent kernels. arXiv:2005.10807\n[math.FA] , 2020.\n[EW20b] W. E and S. Wojtowytsch. Representation formulas an d pointwise properties for barron functions.\nIn preparation , 2020.\n[FG15] N. Fournier and A. Guillin. 
On the rate of convergence in Wasserstein distance of the empirical\nmeasure. Probability Theory and Related Fields , 162(3-4):707–738, 2015.\n[GB10] X. Glorot and Y. Bengio. Understanding the difficulty o f training deep feedforward neural networks.\nInProceedings of the thirteenth international conference on artificial intelligence and statistics ,\npages 249–256, 2010.\n[Hor91] K. Hornik. Approximation capabilities of multilay er feedforward networks. Neural networks ,\n4(2):251–257, 1991.\n[HR17] E. Haber and L. Ruthotto. Stable architectures for de ep neural networks. Inverse Problems ,\n34(1):014004, 2017.\n[JGH18] A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: Convergence and generalization in\nneural networks. In Advances in neural information processing systems , pages 8571–8580, 2018.\n[KB16] J. M. Klusowski and A. R. Barron. Risk bounds for high- dimensional ridge function combinations\nincluding neural networks. arXiv preprint arXiv:1607.01434 , 2016.\n[Kle06] A. Klenke. Wahrscheinlichkeitstheorie , volume 1. Springer, 2006.\n[LCTW17] Q. Li, L. Chen, C. Tai, and E. Weinan. Maximum princi ple based algorithms for deep learning. The\nJournal of Machine Learning Research , 18(1):5998–6026, 2017.\n[LLPS93] M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken. Mul tilayer feedforward networks with a nonpoly-\nnomial activation function can approximate any function. Neural networks , 6(6):861–867, 1993.\n[Lor66] G. Lorentz. Approximation of Functions . Holt, Rinehart and Winston, New York, 1966.\n[Mun74] J. R. Munkres. Topology: a First Course . Prentice-Hall, 1974.\n[Ngu19] P.-M. Nguyen. Mean field limit of the learning dynami cs of multilayer neural networks.\narXiv:1902.02880 [cs.LG] , 2019.\n[NP20] P.-M. Nguyen and H. T. Pham. A rigorous framework for t he mean field limit of multilayer neural\nnetworks. arXiv:2001.11443 [cs.LG] , 2020.\n[RR08] A. Rahimi and B. Recht. Uniform approximation of func tions with random bases. In 2008 46th\nAnnual Allerton Conference on Communication, Control, and Computing , pages 555–561. IEEE,\n2008.\n[R˚ uˇ z06] M. R˚ uˇ ziˇ cka. Nichtlineare Funktionalanalysis: Eine Einf¨ uhrung . Springer-Verlag, 2006.\n[SS19] J. Sirignano and K. Spiliopoulos. Mean field analysis of deep neural networks. arXiv:1903.04440\n[math.PR] , 2019.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 37\n[SSBD14] S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms .\nCambridge university press, 2014.\n[WE20] S. Wojtowytsch and W. E. Can shallow neural networks b eat the curse of dimensionality? A mean\nfield training perspective. arXiv:2005.10815 [cs.LG] , 2020.\n[Woj20] S. Wojtowytsch. On the global convergence of gradie nt descent training for two-layer Relu networks\nin the mean field regime. arXiv:2005.13530 [math.AP] , 2020.\n[Yos12] K. Yosida. Functional analysis . Springer Science & Business Media, 2012.\nAppendix A.A brief review of measure theory\nWe briefly review some notions of measure theory used throughout the article. We assume\nfamiliarity with the basic notions of topology, measure theory, and f unctional analysis (metrics,\ntopologies, σ-algebras, measures, Banach spaces, dual spaces, weak topolo gies, ...). Further\nbackground material can be found e.g. in [Bre11, Els96, Mun74, Yos 12, Kle06].\nA.1.General measure theory. Let (X,A) be a measurable space. 
A signed measure is a map\nµ:A →R∪{−∞,∞}such that for any collection {Ai}i∈Zof measurable disjoint sets we have\nµ/parenleftigg∞/uniondisplay\ni=1Ai/parenrightigg\n=∞/summationdisplay\ni=1µ(Ai)\n(σ-additivity), assuming that the right hand side is defined. A signed me asureµadmits a Hahn\ndecomposition µ=µ+−µ−whereµ+,µ−are mutually singular (non-negative) measures. All\nproofs for this section can be found in [Kle06, Chapter 7.5] for proo fs in this section. Being\nmutually singular means that there exist measurable sets A+,A−such that\nµ+(A+) =µ+(X), µ−(A+) = 0, µ+(A−) = 0, µ−(A−) =µ−(X),\ni.e.µ+,µ−“live” on different subset of X. The (non-negative) measure |µ|=µ++µ−is called\nthe total variation measure of µ. The total variation norm of µis defined as\n/ba∇dblµ/ba∇dbl=|µ|(X) =µ+(A+)+µ−(A−) = sup\nA,A′∈Aµ(A)−µ(A′).\nLetX,Ybe measurable spaces, φ:X→Ya measurable map and µa (signed) measure on X.\nThen we define the push-forward φ♯µofµalongφby (φ♯µ)(A) =µ(φ−1(A)) for all measurable\nA⊆Y. Note that by definition/integraldisplay\nXf(φ(x))µ(dx) =/integraldisplay\nYf(y)(φ♯µ)(dy)∀f:Y→R.\nFurthermore, /ba∇dblφ♯µ/ba∇dbl ≤ /ba∇dblµ/ba∇dbl(since the images φ(A+) andφ(A−) may intersect non-trivially) and\n/ba∇dblφ♯µ/ba∇dbl=/ba∇dblµ/ba∇dblifµis a (non-negative) measure (since no cancellations can occur).\nA.2.Measure theory and topology. All measurable spaces considered in this article have\ncompatible topological and measure theoretic structures. The fo llowing kind of spaces have\nproved to be well suited for many applications.\nDefinition A.1. APolish space is a second countable topological space Xsuch that there exists\na metric donXwhich induces the topology of Xand such that ( X,d) is a complete metric\nspace.\nIn particular, compact metric spaces are Polish. Since Polish spaces are metrizable, being\nsecond countable and separable is equivalent here.\nLemma A.2. [Els96, Appendix A.22] LetX,Ybe Polish spaces. The following are Polish spaces.\n(1) An open subset U⊆Xwith the subspace topology.\n(2) A closed subset U⊆Xwith the subspace topology.\n38 WEINAN E AND STEPHAN WOJTOWYTSCH\n(3)X×Ywith the product topology.\nAll but the first point are trivial. If Uis a non-empty open set, note that the metric\ndU(x,x′) =d(x,x′)+/vextendsingle/vextendsinglefU(x)−fU(x′)/vextendsingle/vextendsingle, f U(x) =1\ndist(x,∂U)\ninduces the same topology as donUand is complete if dis complete on X. There are various\ncompatibility notions between the topological structure and measu re theoretic structure of a\nspaceX.\nDefinition A.3. LetXbe a Hausdorff space (so that compact sets are closed ⇒Borel).\n(1) The Borel σ-algebra is the σ-algebra generated by the collection of open subsets of X.\nWe will always assume that measures are defined on a the Borel σ-algebra.\n(2) A measure µis called locally finite if every set x∈Xhas a neighbourhood Usuch that\nµ(U)<∞. Locally finite measures are also referred to as Borel measures .\n(3) A measure µis called inner regular if\nµ(A) = sup{µ(K)|K⊆A, Kis compact }\nfor all measurable sets A. An inner regular Borel measure is called a Radon measure .\n(4) A measure µis called outer regular if\nµ(U) = inf{µ(U)|A⊆U, Uis open}\nfor all measurable sets A. A measure is called regularif it is both inner and outer regular.\n(5) A measure µis called moderate ifX=/uniontext∞\nk=1Ukwhere the Ukare open sets of finite\nmeasure.\nOn Polish spaces, most measures of importance are Radon measure s. The following result is\ndue to Ulam.\nTheorem A.4. 
[Els96, Kapitel VIII, Satz 1.16] LetXbe a Polish space. Then every Borel\nmeasure µonXis moderate and regular (in particular, a Radon measure).\nFor Radon measures, we can define the analogue of the support of a function to capture the\nset the measure ‘sees’.\nDefinition A.5. Letµbe a Radon measure. We set\nspt(µ) =/intersectiondisplay\nKclosed,µ(X\\K)=0K.\nThe support of a measure is closed. Note that the measure µ=/summationtext∞\ni=1aiδqihas support Rif\naiis a summable sequence of positive numbers and qiis an enumeration of Q. We say that µ\nconcentrates onQsinceµ(R\\ Q) = 0. The support of a measure µcan be significantly larger\nthan a set on which µconcentrates.\nA.3.Continuous functions on metric spaces. In many analysis classes, the space of contin-\nuous functions on [0 ,1] is shown to be separable as a corollary to the Stone-Weierstrass theorem\nwith the dense set of polynomials with rational coefficients. This can b e shown in a simpler way\nand greater generality.\nTheorem A.6. LetXbe a compact metric space and C(X)the space of continuous real-valued\nfunctions on Xwith the supremum norm. Then C(X)is separable.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 39\nProof.SinceXis compact, it has a countable dense subset {xn}n∈N. Consider a family of\ncontinuous functions ηn,m:X→[0,1] such that\nηn,m(x) =/braceleftigg\n1d(x,xn)≤1\nm\n0d(x,xn)≥2\nm.\nDenote\nFn,m=\n\nn/summationdisplay\ni=1m/summationdisplay\nj=1ai,jηi,j(x)/vextendsingle/vextendsingle/vextendsingle/vextendsingleai,j∈Q∀i,j∈N\n\n,F=∞/uniondisplay\nn,m=1Fn,m.\nThenFis a countable subset of C(X). Iff:X→Ris continuous, it is uniformly continuous,\nand it is easy to see by contradiction that fcan be approximated uniformly by functions in\nF. /square\nRemark A.7.The same holds for the space of continuous functions from a compa ct metric space\nXinto a separable metric space Ywith the metric\nd(f,g) = sup\nx∈XdY(f(x),g(x))\nand more generally on locally compact Hausdorff spaces and the comp act-open topology on the\nspace of continuous maps.\nA.4.Measure theory and functional analysis. Radon measures allow a convenient func-\ntional analytic interpretation due to the following Riesz representa tion theorem. We only invoke\nthe theorem in the special case of compact spaces and note that c ompact metric spaces are both\nlocally compact and separable. The same result holds in greater gene rality, which we shall avoid\nto focus on the setting where the space of continuous functions is a Banach space.\nTheorem A.8. [AFP00, Theorem 1.54] LetXbe a compact metric space and C(X;Rm)the\nspace of all continuous Rm-valued functions on X. LetLbe a continuous linear functional on\nC(X;Rm). Then there exist a (non-negative) Radon measure µand aµ-measurable function\nν:X→Sm−1such that\nL(f) =/integraldisplay\nX/a\\}b∇acketle{tf(x),ν(x)/a\\}b∇acket∇i}htµ(dx)∀f∈C(X;Rm).\nFurthermore, /ba∇dblL/ba∇dblC(X;Rm)∗=/ba∇dblµ/ba∇dbl.\nDenote by Athe Borel σ-algebra of X. The function\nν·µ:A →Rm,(ν·µ)(A) =/integraldisplay\nAν(x)µ(dx)\nis called a vector valued Radon measure ifm≥2 (and a signed Radon measure ifm= 1). Vector-\nvalued Radon measures are σ-additive on the Borel σ-algebra. The measure µis called the total\nvariation measure of ν·µ. In the following, we will denote vector-valued Radon measures simp ly\nbyµand the total variation measure by |µ|, like we did before for signed measures. The theorem\nadmits the following interpretation and extension.\nTheorem A.9. 
The dual space of C(X;Rm)is the space of Rm-valuedRadon measures M(X;Rm)\nwith the norm\n/ba∇dblµ/ba∇dblM(X;Rm)=|µ|(X).\nWe denote the space of Rm-valued Radon measures by M(X;Rm)andM(X;R) =:M(X).\n40 WEINAN E AND STEPHAN WOJTOWYTSCH\nDefinition A.10. We say that a sequence of (signed, vector-valued) Radon measur esµncon-\nverges weakly to µand write µn⇀ µif/integraldisplay\nXf(x)µn(dx)→/integraldisplay\nXf(x)µ(dx)∀f∈C(X) =C(X;R).\nIn this terminology, the weak convergence of Radon measures coin cides with weak* conver-\ngenceinthe dualspaceof C(X). BytheBanach-Alaoglutheorem[Bre11, Theorem3.16],the unit\nball ofM(X) is compact in the weak* topology. Since C(X) is separable, the weak* topology of\nM(X) is metrizable [Bre11, Theorem 3.28]. Thus if µnis a bounded sequence in M(X), there\nexists a weakly convergent subsequence. This establishes the compactness theorem for Radon\nmeasures .\nTheorem A.11. Letµnbe a sequence of (signed, vector-valued) Radon measures suc h that\n/ba∇dblµn/ba∇dbl ≤1. Then there exists a (signed, vector-valued) Radon measure µsuch that µn⇀ µ.\nA good exposition in the context of Euclidean spaces can be found in [E G15, Chapter 1] with\narguments which can be applied more generally.\nA.5.Bochner integrals. Bochner integrals are a generalization of Lebesgue integrals to fun c-\ntions with values in Banach spaces. A quick introduction can be found e.g. in [Yos12, Chapter\nV, part 5] or [R˚ uˇ z06, Kapitel 2.1].\nDefinition A.12. Let (X,A,µ) be a measure space and Ya Banach space. A function f:X→\nYis calledBochner-measurable if there exists a sequence of step functions fn=/summationtextn\ni=1yiχAiwith\nyi∈Y,Ai∈ Asuch that fn→fpointwise µ-almost everywhere.\nFor real-valued functions, Bochner-measurability coincides with th e usual notion of measura-\nbility.\nLemma A.13. LetXbe a compact metric space, Aits Borel sigma algebra, µa measure on A\nandYa Banach space. Then every continuous function f:X→Yis uniformly continuous and\nthus Bochner-measurable.\nAfunction fisBochner-integrable iftheintegrals/summationtextn\ni=1µ(Ai)yiofthe approximatingsequence\nfnconverge and do not depend on the choice of fn.\nLemma A.14. LetXbe a compact metric space, Aits Borel sigma algebra, µa finite measure\nonAandYa Banach space. Then every continuous function f:X→Yis additionally bounded\nand thus Bochner-integrable.\nBochner-integrals are linked to Lebesgue-integrals in the following w ay.\nLemma A.15. Letfbe a Bochner-measurable function. Then fis Bochner-integrable if and\nonly if/ba∇dblf/ba∇dbl:X→Ris Lebesgue-integrable. Furthermore,/vextenddouble/vextenddouble/vextenddouble/vextenddouble/integraldisplay\nXf(x)µ(dx)/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nY≤/integraldisplay\nX/ba∇dblf(x)/ba∇dblYµ(dx).\nIfµis a finite signed measure, these notions generalize in the obvious way .\nDefinition A.16. Let (Ω,A,µ) be a measure space, p∈[1,∞] andXa Banach space. Then\nthe Bochner space Lp(Ω;X) is the space of all Bochner-measurable functions f: Ω→Xsuch\nthat/ba∇dblf/ba∇dbl ∈Lp(Ω).\nThe following is proved in the unnumbered example following [R˚ uˇ z06, L emma 1.23]. The\nclaim is formulated in the special case where Ω 1is an interval and Ω 2⊆Rd, but the proof holds\nmore generally.\nBANACH SPACES OF WIDE MULTI-LAYER NEURAL NETWORKS 41\nLemma A.17. Let(Ωi,Ai,µi)be measure spaces for i= 1,2. 
Thenf∈Lp(µ1⊗µ2)if and only\nif the function\nF: Ω1→Lp(Ω2),/bracketleftbig\nF(ω1)/bracketrightbig\n(ω2) =f(ω1,ω2)\nis well-defined and in Lp(Ω1,Lp(Ω2)).\nFurthermore, we recall the following immediate result, which we will ap ply in conjunction\nwith the previous Lemma in the special case that H=L2(0,1).\nLemma A.18. IfHis a Hilbert space, so is L2(Ω;H)with the inner production\n/a\\}b∇acketle{tf,g/a\\}b∇acket∇i}htL2(H)=/integraldisplay\nΩ/a\\}b∇acketle{tf(ω),g(ω)/a\\}b∇acket∇i}htµ(dω).\nAppendix B.On the existence and uniqueness of the gradient flow\nFor networks with smooth activation functions, the preceding ana lysis can be justified rigor-\nously. We briefly discuss some obstacles in the case of ReLU activatio n.\nExample B.1.Generically, solutionsofgradientflowtrainingforReLU-activationa renon-unique,\neven for functions with one hidden layer. We consider a network with one hidden layer, one\nneuron, and a risk functional with one data point:\nfa,b(x) =aσ(b1x−b2),R(a,b) =/vextendsingle/vextendsinglefa,b(1)−1/vextendsingle/vextendsingle2=/vextendsingle/vextendsinglea(b1−b2)+−1/vextendsingle/vextendsingle2.\nIfa,bis initialized as a0= 1,b0= (1,1), then one solution of the gradient flow inclusion is\nconstant in time. This solution is obtained as the limit of gradient flow tr aining for regularized\nactivation functions σεsatisfying ( σε)′(0) = 0. Another solution is the solution ( a,b) of ReLU\ntraining is\n\n˙at\n˙b1\nt˙b2\nt\n=−∇a,b/vextendsingle/vextendsinglea(b1−b2)−1/vextendsingle/vextendsingle2=−2/parenleftbig\na(b1−b2)−1/parenrightbig\nb1−b2\na\n−a\n,\nfor which the risk decays to zero. This is obtained as the limit of appro ximating gradient flows\nassociated to σεwith (σε)′(0) = 1.\nAs the training dynamics are non-unique, the Picard-Lindel¨ off theo rem cannot apply. In\n[Woj20, Lemma 3.1], we showed that the situation can be remedied by c onsidering gradient flows\nof population risk for suitably regular data distributions P. A key ingredient of the proof is that\nfor fixed w,σ′(wTx) is well-defined except on a hyper-plane in Rd, which we assume to be P-null\nsets. An existence proof based on the Peano existence theorem is also presented in a specific\ncontext in [CB20].\nThis argument cannot be extended to networks with multiple hidden la yers since terms of the\nformσ′(f(x)) occur where fcan be a general Barron function (or even more general for deep\nnetworks). The level sets of Barron functions may be highly irregu lar and even for C1-smooth\nBarron functions, Sard’s theorem need not apply [EW20b, Remark 3.2]. In particular, for any\ndata distribution P, we can find a non-constant Barron function fsuch that P({f= 0})>0.\nIt thus appears inevitable to consider a class of weak solutions base d on energy dissipation\nproperties or differential inclusions. We note that the proofs in this article are based on purely\n42 WEINAN E AND STEPHAN WOJTOWYTSCH\nformal identities and the energy dissipation property. We thus exp ect the results to remain valid\nfor suitable generalized solutions.\nWeinan E, Department of Mathematics and Program in Applied an d Computational Mathematics,\nPrinceton University, Princeton, NJ 08544, USA\nE-mail address :[email protected]\nStephanWojtowytsch, Programin Applied and ComputationalM athematics, Princeton University,\nPrinceton, NJ 08544, USA\nE-mail address :[email protected]",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "JNVM3KGmIP",
"year": null,
"venue": "CoRR 2020",
"pdf_link": "http://arxiv.org/pdf/2005.10807v2",
"forum_link": "https://openreview.net/forum?id=JNVM3KGmIP",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Kolmogorov Width Decay and Poor Approximators in Machine Learning: Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels",
"authors": [
"Weinan E",
"Stephan Wojtowytsch"
],
"abstract": "We establish a scale separation of Kolmogorov width type between subspaces of a given Banach space under the condition that a sequence of linear maps converges much faster on one of the subspaces. The general technique is then applied to show that reproducing kernel Hilbert spaces are poor $L^2$-approximators for the class of two-layer neural networks in high dimension, and that multi-layer networks with small path norm are poor approximators for certain Lipschitz functions, also in the $L^2$-topology.",
"keywords": [],
"raw_extracted_content": "arXiv:2005.10807v2 [math.FA] 2 Oct 2020KOLMOGOROV WIDTH DECAY AND POOR APPROXIMATORS IN\nMACHINE LEARNING: SHALLOW NEURAL NETWORKS, RANDOM\nFEATURE MODELS AND NEURAL TANGENT KERNELS\nWEINAN E AND STEPHAN WOJTOWYTSCH\nDedicated to Andrew Majda on the occasion of his 70th birthda y\nAbstract. We establish a scale separation of Kolmogorov width type bet ween subspaces of a\ngiven Banach space under the condition that a sequence of lin ear maps converges much faster\non one of the subspaces. The general technique is then applie d to show that reproducing\nkernel Hilbert spaces are poor L2-approximators for the class of two-layer neural networks i n\nhigh dimension, and that multi-layer networks with small pa th norm are poor approximators\nfor certain Lipschitz functions, also in the L2-topology.\n1.Introduction\nIt has been known since the early 1990s that two-layer neural net works with sigmoidal or\nReLU activation can approximate arbitrary continuous functions o n compact sets in the uniform\ntopology [Cyb89, Hor91]. In fact, when approximating a suitable (infi nite-dimensional) class of\nfunctions in the L2topology of anycompactly supported Radon probability measure, two-layer\nnetworks can evade the curse of dimensionality [Bar93]. In this art icle, we show that\n(1) infinitely wide random feature functions with norm bounds are mu ch worse approxima-\ntors in high dimension compared to two-layer neural networks.\n(2) infinitely wide neural networks are subject to the curse of dime nsionality when approxi-\nmating general Lipschitz functions in high dimension.\nIn both cases, we consider approximation in the L2([0,1]d)-topology. The second statement\napplies more generally to any model in which few data samples are need ed to estimate integrals\nuniformly over the hypothesis class. In the first point, we can cons ider more general kernel meth-\nods instead of random features (including certain neural tangent kernels), and the second claim\nholds true for multi-layer networks as well as deep ResNets of boun ded width. We conjecture\nthat Lipschitz functions in the second statement could be replaced withCkfunctions for fixed\nk. Precise statements of the results are given in Corollary 3.4 and Exa mple 4.3.\nTo prove these results, we show more generally that if X,Yare subspaces of a Banach space\nZand a sequence of linear maps Anconverges quickly to a limit AonX, but not on Y, then\nthere must be a Kolmogorov width-type separation between XandY. The classical notion\nof Kolmogorov width is considered in Lemma 2.1 and later extended to a stronger notion of\nseparation in Lemma 2.3.\nWe apply the abstract result to the pairs X= Barron space (for two-layer networks) or X=\ntree-like function space (for multi-layer networks)/ Y= Lipschitz space, and X= RKHS/Y=\nDate: October 5, 2020.\n2020 Mathematics Subject Classification. 68T07, 41A30, 41A65, 46E15, 46E22 .\nKey words and phrases. Curse of dimensionality, two-layer network, multi-layer n etwork, population risk,\nBarron space, reproducing kernel Hilbert space, random fea ture model, neural tangent kernel, Kolmogorov width,\napproximation theory.\n1\n2 WEINAN E AND STEPHAN WOJTOWYTSCH\nBarron space. In the first case, the sequence of linear maps is give n by a type of Monte-Carlo\nintegration, in the second case by projection onto the eigenspace s of the RKHS kernel.\nThis article is structured as follows. 
In Section 2, we prove the abst ract result which we apply\nto Barron/tree-like function spaces and Lipschitz space in Section 3 and to RKHS and Barron\nspace in Section 4. We conclude by discussing our results and some op en questions in Section\n5. In appendices A and B, we review the natural function spaces fo r shallow neural networks\nand kernel methods respectively. In Appendix B, we specifically foc us on kernels arising from\nrandom feature models and neural tangent kernels for two-layer neural networks.\n1.1.Notation. We denote the closed ball of radius r >0 around the origin in a Banach space\nbyBX\nrand the unit ball by BX\n1=BX. The space of continuous linear maps between Banach\nspacesX,Yis denoted by L(X,Y) and the continuous dual space of XbyX∗=L(X,R). The\nsupport of a Radon measure µis denoted by spt µ.\n2.An Abstract Lemma\n2.1.Kolmogorov Width Version. The Kolmogorov width of a function class Fin another\nfunction class Gwith respect to a metric don the union of both classes is defined as the biggest\ndistance of an element in Gfrom the class F:\nwd(F;G) = sup\ng∈Gdist(g,F) = sup\ng∈Ginf\nf∈Fd(f,g).\nIn this article, we consider the case where Gis the unit ball in a Banach space Y,Fis the ball\nof radiust >0 in a Banach space Xandd=dZis induced by the norm on a Banach space Z\ninto which both XandYembed densely. As tincreases, points in Yare approximated to higher\ndegrees of accuracy by elements of X. The rate of decay\nρ(t) :=wdZ(BX\nt,BY\n1)\nprovides a quantitative measure of density of XinYwith respect to the topology of Z. For\na different point of view on width focusing on approximation by finite-d imensional spaces, see\n[Lor66, Chapter 9].\nIn the following Lemma, we show that if there exists a sequence of line ar operatorson Zwhich\nbehaves sufficiently differently on XandY, thenρmust decay slowly as t→ ∞.\nLemma 2.1. LetX,Y,Z,W be Banach spaces such that X,Y ֒−→Z. Assume that An,A:Z→\nWare continuous linear operators such that\n/ba∇dblAn−A/ba∇dblL(X,W)≤CXn−α,/ba∇dblAn−A/ba∇dblL(Y,W)≥cYn−β,/ba∇dblAn−A/ba∇dblL(Z,W)≤CZ\nforβ <αand constants CX,cY,CZ>0. Then\n(2.1) ρ(t)≥2−β(cY/2)α\nα−β\nCZCβ\nα−β\nXt−β\nα−β∀t≥cY\n2CX\nand\n(2.2) liminf\nt→∞/parenleftBig\ntβ\nα−βρ(t)/parenrightBig\n≥(cY/2)α\nα−β\nCZCβ\nα−β\nX.\nProof.Choose a sequence yn∈BYsuch/ba∇dbl(An−A)yn/ba∇dblW≥cYn−βandxn∈Xsuch that\nxn∈argmin{x:/bardblx/bardblX≤tn}/ba∇dblx−yn/ba∇dblZfortn:=cY\n2CXnα−β\n(see Remark 2.2). Then\ncYn−β≤ /ba∇dbl(An−A)yn/ba∇dblW\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 3\n≤ /ba∇dbl(An−A)(yn−xn)/ba∇dblW+/ba∇dbl(An−A)xn/ba∇dblW\n≤CZ/ba∇dblyn−xn/ba∇dblZ+CXn−α/ba∇dblxn/ba∇dblX\n≤CZ/ba∇dblxn−yn/ba∇dblZ+cY\n2n−β.\nWe therefore have\n/ba∇dblxn−yn/ba∇dblZ≥cY\n2CZn−β=cY\n2CZ/parenleftbigg2CX\ncYtn/parenrightbigg−β\nα−β\n=/parenleftBigcY\n2/parenrightBigα\nα−β1\nCZCβ\nα−β\nXt−β\nα−βn.\nClearlytn→ ∞sinceα>β. For general t>0, taketn= inf{tk:k∈N,tk≥t}. Then\nρ(t)≥ρ(tn)\n≥(cY/2)α\nα−β\nCZCβ\nα−β\nXt−β\nα−βn\n≥(cY/2)α\nα−β\nCZCβ\nα−β\nX/parenleftbiggtn−1\ntn/parenrightbiggβ\nα−β\nt−β\nα−β\n=(cY/2)α\nα−β\nCZCβ\nα−β\nX/parenleftbiggn−1\nn/parenrightbiggβ\nt−β\nα−β.\nAst→ ∞, so doesn, and then-dependent term converges to 1. /square\nRemark 2.2.Generally elements like xn,ynmay not exist if the extremum is not attained.\nOtherwise, we can choose xnsuch that /ba∇dblxn−yn/ba∇dblZis sufficiently close to its infimum and\n/ba∇dbl(An−A)yn/ba∇dblis sufficiently close to its supremum. 
To simplify our presentation, we a ssume that\nthe supremum and infimum are attained.\nThe choice of xnas a minimizer is valid if\n(1)Xembeds into Zcompactly, so the minimum of the continuous function /ba∇dbl · −y/ba∇dblZis\nattained on the compact set {/ba∇dbl·/ba∇dblX≤tk}, or\n(2) theembedding X ֒−→Zmapsclosedboundedsetstoclosedsetsand Zadmitscontinuous\nprojections onto closed convex sets (for example, Zis uniformly convex).\nIn the applications below, the first condition will be met.\n2.2.Improved Estimate. In the previous section, we have shown by elementary means that\nthe estimate\nliminf\nt→∞/parenleftBigg\ntγsup\n/bardbly/bardbl≤1inf\n/bardblx/bardblX≤t/ba∇dblx−y/ba∇dblZ/parenrightBigg\n≥c>0\nholds for suitable γif a sequence of linear maps between Zand another Banach space Wbehaves\nvery differently on subspaces XandYofZ. So intuitively, on each scale t >0 there exists an\nelementyt∈BYsuch thatytis poorly approximable by elements in Xon this scale. In this\nsection, we establish that there exists a single point y∈Ywhich is poorly approximable across\ninfinitely many scales. This statement has applications in Wasserstein gradient flows for machine\nlearning which we discuss in a companion article [WE20].\nLemma 2.3. LetX,Y,Zbe Banach spaces such that X,Y ֒−→Z. Assume that An,A∈L(Z,W)\nare operators such that\n/ba∇dblAn−A/ba∇dblL(X,W)≤CXn−α,/ba∇dblAn−A/ba∇dblL(Y,W)≥cYn−β,/ba∇dblAn−A/ba∇dblL(Z,W)≤CZ\n4 WEINAN E AND STEPHAN WOJTOWYTSCH\nforβ <α\n2and constants CX,cY,CZ. Then there exists y∈BYsuch that for every γ >β\nα−βwe\nhave\nlimsup\nt→∞/parenleftbigg\ntγinf\n/bardblx/bardblX≤t/ba∇dblx−y/ba∇dblZ/parenrightbigg\n=∞.\nThe result is stronger than the previous one in that it fixes a single po intywhich is poorly\napproximable in infinitely many scales tnk. While in each scale tnthere exists a point ynwhich\nis poorly approximable, we only show that yis poorly approximable in infinitely many scales,\nnot in all scales.\nProof of Lemma 2.3. SinceY ֒−→Z,thereexistsaconstant CY>0suchthat /ba∇dblAn−A/ba∇dblY∗≤CY.\nDefinition of y.Choose sequences yn∈BYandw∗\nn∈BW∗such\nw∗\nn◦(An−A)(yn)≥cYn−β.\nConsider two sequences nk,mkof strictly increasing integers such that\n∞/summationdisplay\nk=11\nnk≤1.\nWe will impose further conditions below. Set\ny:=∞/summationdisplay\nk=0εk\nnkymk\nwhere the signs εk∈ {−1,1}are chosen inductively such that\nεK·w∗\nmk◦(AmK−A)/parenleftBiggK−1/summationdisplay\nk=1εk\nnkymk/parenrightBigg\n≥0.\nClearly\n/ba∇dbly/ba∇dblY≤∞/summationdisplay\nk=1|εk|\nnk/ba∇dblymk/ba∇dblY=∞/summationdisplay\nk=11\nnk= 1.\nTo shorten notation, define Lk=w∗\nmk◦(Amk−A)∈Z∗and note that the estimates for Amk−A\ntransfer to Lk. IfεK= 1 we have\nLky=Lk/parenleftBiggK−1/summationdisplay\nk=1εk\nnkymk/parenrightBigg\n+1\nnKLkymK+Lk/parenleftBigg∞/summationdisplay\nk=K+1εk\nnkymk/parenrightBigg\n≥0+1\nnKLkymK−CY∞/summationdisplay\nl=K+11\nnl\n≥1\nnK/parenleftBigg\ncYm−β\nK−CYnK∞/summationdisplay\nl=k+11\nnl/parenrightBigg\nwhere the infinite tail of the series is estimated by /ba∇dblLk/ba∇dblL(Y,W)≤CYand/ba∇dblymk/ba∇dblY≤1. Similarly\nifεK=−1 we obtain\nLky≤ −1\nnK/parenleftBigg\ncYm−β\nK−CYnK∞/summationdisplay\nl=K+11\nnl/parenrightBigg\n.\nSlow approximation rate. 
Choose\ntk:=cYmα−β\nk\n2CXnk, xk∈argmin/bardblx/bardblX≤tk/ba∇dblx−y/ba∇dblZ.\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 5\nThen\n1\nnk/parenleftBigg\ncYm−β\nk−CYnk∞/summationdisplay\nl=k+11\nnl/parenrightBigg\n≤/vextendsingle/vextendsingleLky/vextendsingle/vextendsingle\n≤/vextendsingle/vextendsingleLk(y−xk)/vextendsingle/vextendsingle+/vextendsingle/vextendsingleLkxk/vextendsingle/vextendsingle\n≤CZ/ba∇dbly−xk/ba∇dblZ+/ba∇dblAmk−A/ba∇dblX∗/ba∇dblxk/ba∇dblX\n≤CZ/ba∇dbly−xk/ba∇dblZ+CXm−α\nktk.\nSincetkwas chosen precisely such that\nCXm−α\nktk=cY\n2nkm−β\nk,\nwe obtain that\n(2.3)1\n2CZnk/parenleftBigg\ncYm−β\nk−2CYnk∞/summationdisplay\nl=k+11\nnl/parenrightBigg\n≤ /ba∇dbly−xk/ba∇dblZ= min\n/bardblx/bardblX≤tk/ba∇dblx−y/ba∇dblZ.\nFor this lower bound to be meaningful, the first term in the bracket h as to dominate the second\nterm. We specify the scaling relationship between nkandmkas\nmk=nk\nα−β\nk.\nIn this definition, mkis not typically an integer unless1\nα−βis an integer (or, to hold for a\nsubsequence, rational). In the general case, we choose the inte ger ˜mkclosest tomk. To simplify\nthe presentation, we proceed with the non-integer mkand note that the results are insensitive\nto perturbations of order 1.\nWe obtain\ntk=cY\n2CXmα−β\nk\nnk=cY\n2CXnk−1\nk,m−β\nk\nnk=n−βk\nα−β−1\nk=n−β(k−1)+α\nα−β\nk=/parenleftbigg2CX\ncYtk/parenrightbigg−β\nα−β−α\n(k−1)(α−β)\n.\nIn particular, note that tk→ ∞ask→ ∞. In order for\nnk∞/summationdisplay\nl=k+11\nnl\nto be small, we need nkto grow super-exponentially. Note thatβ\nα−β≤1 sinceβ≤α\n2. We\nspecifynk= 2(kk)and compute\n∞/summationdisplay\nl=k+11\nnl=∞/summationdisplay\nl=12−/parenleftbig\n(k+l)k(k+l)l/parenrightbig\n≤∞/summationdisplay\nl=12−(kk(k+l)l)=∞/summationdisplay\nl=1/parenleftbigg1\nnk/parenrightbigg/parenleftbig\n(k+l)l/parenrightbig\n≤2\nnk+1\nk≪n−βk\nα−β−1\nk=m−β\nk\nnk\nfor large enough k. Thus we can neglect the negative term on the left hand side of (2.3) at the\nprice of a slightly smaller constant. Thus\ncY\n4CZ/parenleftbigg2CX\ncYtk/parenrightbigg−β\nα−β−α\n(k−1)(α−β)\n=cY\n4CZnkm−β\nk≤min\n/bardblx/bardblX≤tk/ba∇dblx−y/ba∇dblZ.\nFinally, we conclude that for all γ >β\nα−βwe have\nlimsup\nt→∞/parenleftbigg\ntγinf\n/bardblx/bardblX≤t}/ba∇dblx−y/ba∇dblZ/parenrightbigg\n≥limsup\nk→∞/parenleftbigg\ntγ\nkinf\n/bardblx/bardblX≤tk}/ba∇dblx−y/ba∇dblZ/parenrightbigg\n6 WEINAN E AND STEPHAN WOJTOWYTSCH\n≥CX,Y,Zlimsup\nk→∞tγ−β\nα−β−α\n(k−1)(α−β)\nk\n=∞.\n/square\n3.Approximating Lipschitz Functions by Functions of Low Comp lexity\nIn this section, we apply Lemma 2.3 to the situation where general Lip schitz functions are\napproximated by functions in a space with much lower complexity. Exa mples include function\nspaces for infinitely wide neural networks with a single hidden layer an d spaces for deep ResNets\nof bounded width. For simplicity, we first consider uniform approxima tion and then modify the\nideas to also cover L2-approximation.\n3.1.Approximation in L∞.Consider the case where\n(1)Zis the space of continuous functions on the unit cube Q= [0,1]d⊂Rdwith the norm\n/ba∇dblφ/ba∇dblZ= sup\nx∈Qφ(x),\n(2)Yis the space of Lipschitz-continuous functions with the norm\n/ba∇dblφ/ba∇dblY= sup\nx∈Qφ(x)+sup\nx/ne}ationslash=y|φ(x)−φ(y)|\n|x−y|,and\n(3)Xis a Banach space of functions such that\n•Xembeds continuously into Z,\n•the Monte-Carlo estimate\nEXi∼Ld|Qiid/braceleftBigg\nsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x) dx/bracketrightBigg/bracerightBigg\n≤CX√n\nholds. 
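A minimal numerical sketch (not part of the original text) of the third condition above: we build a single two-layer ReLU function, rescale it so that the path norm $\sum_i |a_i|(|w_i|_1+|b_i|)$ (one common normalization of the Barron-type norm reviewed in Appendix A) equals one, and check that its Monte-Carlo integration error over iid uniform samples decays at the dimension-independent rate $n^{-1/2}$. The width, sample sizes and random seed are illustrative choices, and the experiment only probes a single fixed function, not the uniform Rademacher-complexity bound over the whole unit ball that the condition requires.

```python
# Sketch only: Monte-Carlo error for one fixed two-layer ReLU function whose
# path norm sum_i |a_i| (|w_i|_1 + |b_i|) is rescaled to 1 (one common
# normalization of the Barron-type norm). This does not verify the uniform
# bound over the unit ball; widths and sample sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, width = 20, 64

w = rng.standard_normal((width, d))
b = rng.standard_normal(width)
a = rng.standard_normal(width)
path_norm = np.sum(np.abs(a) * (np.abs(w).sum(axis=1) + np.abs(b)))
a /= path_norm  # now the path norm equals 1, so |phi| <= 1 on [0,1]^d

def phi(x: np.ndarray) -> np.ndarray:
    """Evaluate the normalized two-layer ReLU function on rows of x."""
    return np.maximum(x @ w.T + b, 0.0) @ a

# large-sample proxy for the exact integral over [0,1]^d
reference = phi(rng.random((200_000, d))).mean()

for n in [100, 1_000, 10_000]:
    errs = [abs(phi(rng.random((n, d))).mean() - reference) for _ in range(100)]
    print(f"n={n:6d}:  mean |MC error| ~ {np.mean(errs):.2e}   n^(-1/2) = {n**-0.5:.2e}")
```

For Lipschitz test functions instead of the unit ball of $X$, the corresponding rate is the much slower $n^{-1/d}$ Wasserstein rate discussed below.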
Here and in the following when no other measure is specified, we assume that\nintegrals are taken with respect to Lebesgue measure.\nExamples of admissible spaces for Xare Barron space for two-layer ReLU networks and the\ncompositional function space for deep ReLU ResNets of finite width , see [EMW19a, EMW18,\nEMW19b]. A brief review of Barron space is provided in Appendix A. The Monte-Carlo es-\ntimate is proved by estimating the Rademacher complexity of the unit ball in the respective\nfunction space. For Barron space, CX= 2/radicalbig\n2 log(2d) and for compositional function space\nCX=e2/radicalbig\nlog(2d), see [EMW19b, Theorems 6 and 12].\nWe observe the following: If Xis a vector of iid random variables sampled from the uniform\ndistribution on Q, then\nsup\nφis 1-Lipschitz/parenleftBigg\n1\nnn/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x) dx/parenrightBigg\n=W1/parenleftBigg\nLd|Q,1\nnn/summationdisplay\ni=1δXi/parenrightBigg\nis the 1-Wasserstein distance between d-dimensional Lebesgue measure on the cube and the\nempirical measure generated by the random points – see [Vil08, Cha pter 5] for further details\non Wasserstein distances and the link between Lipschitz functions a nd optimal transport theory.\nThe distance on Rdfor which the Wasserstein transportation cost is computed is the s ame for\nwhichφis 1-Lipschitz.\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 7\nEmpirical measures converge to the underlying distribution slowly in h igh dimension [FG15],\nby which we mean that\nEX∼/parenleftbig\nLd|Q/parenrightbignW1/parenleftBigg\nLd|Q,1\nnn/summationdisplay\ni=1δXi/parenrightBigg\n≥cdn−1/d\nfor some dimension-dependent constant d. Observe that also\nsup\nφis 1-Lipschitz/parenleftBigg\n1\nnn/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x) dx/parenrightBigg\n= sup\nφis 1-Lipschitz ,φ(0)=0/parenleftBigg\n1\nnn/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x) dx/parenrightBigg\n≤/bracketleftbig\n1+diam(Q)/bracketrightbig\nsup\n/bardblφ/bardblY≤1,φ(0)=0/parenleftBigg\n1\nnn/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x) dx/parenrightBigg\n≤/bracketleftbig\n1+diam(Q)/bracketrightbig\nsup\n/bardblφ/bardblY≤1/parenleftBigg\n1\nnn/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x) dx/parenrightBigg\nwhere diam( Q) is a diameter of the d-dimensional unit cube with respect to the norm for which\nφis 1-Lipschitz. Here we used that replacing φbyφ+cforc∈Rdoes not change the difference\nof the two expectations, and that on the space of functions with φ(0) = 0 the equivalence\n[φ]Y:= sup\nx/ne}ationslash=y|φ(x)−φ(y)|\n|x−y|≤ /ba∇dblφ/ba∇dblY≤/parenleftbig\n1+diam(Q)/parenrightbig\n[φ]Y\nholds. Byωdwe denote the Lebesgue measure of the unit ball in Rdwith respect to the correct\nnorm.\nLemma 3.1. For everyn∈Nwe can choose npointsx1,...,xninQsuch that\nsup\nφ∈BY/bracketleftBigg\n1\nnn/summationdisplay\ni=1φ(xi)−ˆ\nQφ(x) dx/bracketrightBigg\n≥d\nd+11\n/bracketleftbig\n(d+1)ωd/bracketrightbig1\nd1\n1+diam(Q)n−1/d\nand\nsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1φ(xi)−ˆ\nQφ(x) dx/bracketrightBigg\n≤CX\nn1/2.\nProof.First, we prove the following. 
Claim:Letx1,...,xnbe any collection of npoints inQ.\nThen\nW1/parenleftBigg\nLd|Q,1\nnn/summationdisplay\ni=1δxi/parenrightBigg\n≥d\nd+11\n/bracketleftbig\n(d+1)ωd/bracketrightbig1\ndn−1/d.\nProof of claim: Chooseε>0 and consider the set\nU=n/uniondisplay\ni=1Bεn−1/d(xi).\nWe observe that\nLd(U∩Q)≤ Ld(U)≤n/summationdisplay\ni=1Ld/parenleftbig\nBεn−1/d(xi)/parenrightbig\n=nωd/parenleftbig\nεn−1/d/parenrightbigd=ωdεd.\nSo any transport plan between Ld|Qand the empirical measure needs to transport mass ≥\n1−ωdεdby a distance of at least εn−1/d. We conclude that\nW1/parenleftBigg\nLd|Q,1\nnn/summationdisplay\ni=1δxi/parenrightBigg\n≥sup\nε∈(0,1)/parenleftbig\n1−ωdεd/parenrightbig\nεn−1/d.\n8 WEINAN E AND STEPHAN WOJTOWYTSCH\nThe infimum is attained when\n0 = 1−(d+1)ωdεd⇔ε=/bracketleftbig\n(d+1)ωd/bracketrightbig−1\nd⇒1−ωdεd= 1−1\nd+1=d\nd+1.\nThis concludes the proof of the claim.\nProof of the Lemma: Using the claim, any npointsx1,...,xnsuch that\nsup\nφ∈BX/bracketleftBigg\n1\nNN/summationdisplay\ni=1φ(xi)−ˆ\nQφ(x) dx/bracketrightBigg\n≤E/braceleftBigg\nsup\nφ∈BX/bracketleftBigg\n1\nNN/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x) dx/bracketrightBigg/bracerightBigg\n≤CX\nn1/2\nsatisfy the conditions. /square\nFor anyn, we fix such a collection of points xn\n1,...,xn\nnand define\nAn:Z→R, An(φ) =1\nnn/summationdisplay\ni=1φ(xn\ni), A:Z→R, A(φ) =ˆ\nQφ(x)dx.\nClearly\n|Aφ|,|Anφ| ≤ /ba∇dblφ/ba∇dblC0=/ba∇dblφ/ba∇dblZ.\nThus we can apply Lemma 2.3 with\nβ\nα−β=1\nd\n1\n2−1\nd=1\ndd−2\n2d=2\nd−2.\nCorollary 3.2. There exists a 1-Lipschitz function φonQsuch that\nlimsup\nt→∞/parenleftbigg\ntγinf\n/bardblf/bardblX≤t/ba∇dblφ−f/ba∇dblL∞(Q)/parenrightbigg\n=∞.\nfor allγ >2\nd−2.\n3.2.Approximation in L2.Point evaluation functionals are no longerwell defined if we choose\nZ=L2(Q). We therefore need to replace Anby functionals of the type\nAn(φ) =1\nnn/summationdisplay\ni=1 \nBεn(Xn\ni)φdx\nforsamplepoints Xn\niandfindabalancebetweentheradii εnshrinkingtoofast(causingthenorms\n/ba∇dblAn/ba∇dblZ∗to blow up) and εnshrinking too slowly (leading to better approximation properties on\nLipschitz functions).\nWe interpret Qas the unit cube for function spaces, but as a d-dimensional flat torus when\nconsidering balls. Namely the ball Bε(x) inQis to be understood as projection of the ball of\nradiusε>0 around [x] onRd/ZdontoQ. This allows us to avoid boundary effects.\nLemma 3.3. For everyn∈Nwe can choose npointsx1,...,xninQsuch that the estimates\nsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1 \nBεn(xi)φdx−ˆ\nQφ(x) dx/bracketrightBigg\n≤3CX\nn1/2\nsup\nφ∈BY/bracketleftBigg\n1\nnn/summationdisplay\ni=1 \nBεn(xi)φdx−ˆ\nQφ(x) dx/bracketrightBigg\n≥cdn−1/d\nsup\nφ∈BZ/bracketleftBigg\n1\nnn/summationdisplay\ni=1 \nBεn(xi)φdx−ˆ\nQφ(x) dx/bracketrightBigg\n≤Cd\nhold.cd,Cdare dimension dependent constants and\nεn=γdn−1/d\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 9\nfor a dimension-dependent γd>0.\nProof of Lemma 3.3. L2-estimate. In all of the following, we rely on the interpretation of balls\nas periodic to avoid boundary effects. 
For a sample S= (X1,...,Xn) denote\nAS(φ) =1\nnn/summationdisplay\ni=1 \nBεn(Xi)φdx\nObserve that\nsup\n/bardblφ/bardblL2≤1(AS(φ)−A(φ)) = sup\n/bardblφ/bardblL2≤1/bracketleftBigg\n1\nnn/summationdisplay\ni=1 \nBεn(Xi)φdx−ˆ\nφdx/bracketrightBigg\n= sup\n/bardblφ/bardblL2≤1,´\nφ=01\nnn/summationdisplay\ni=1 \nBεn(Xi)φdx\n= sup\n/bardblφ/bardblL2≤1,´\nφ=01\nn|Bεn|ˆ/parenleftBiggn/summationdisplay\ni=11Bεn(Xi)/parenrightBigg\nφdx\n≤1\nnωdεdn/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublen/summationdisplay\ni=11Bεn(Xi)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nL2.\nWe compute\n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublen/summationdisplay\ni=11Bεn(xi)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble2\nL2=n/summationdisplay\ni=1/vextenddouble/vextenddouble1Bεn(xi)/vextenddouble/vextenddouble2\nL2+/summationdisplay\ni/ne}ationslash=jˆ\n1Bεn(xi)1Bεn(xj)dx\n=nωd|εn|d+/summationdisplay\ni/ne}ationslash=j/vextendsingle/vextendsingleBεn(xi)∩Bεn(xj)/vextendsingle/vextendsingle (3.1)\nIt is easy to see that\nE(Xi,Xj)∼UQ×Q/vextendsingle/vextendsingleBεn(Xi)∩Bεn(Xj)/vextendsingle/vextendsingle=EX∼UQ/vextendsingle/vextendsingleBεn(X)∩Bεn(0)/vextendsingle/vextendsingle\n=ˆ\nB2εn/vextendsingle/vextendsingleBεn(x)∩Bεn(0)/vextendsingle/vextendsingledx\n=εd\nnˆ\nB2/vextendsingle/vextendsingleBεn/parenleftbig\nεnx/parenrightbig\n∩Bεn(0)/vextendsingle/vextendsingledx\n=εd\nnˆ\nB2εd\nn/vextendsingle/vextendsingleB1(x)∩B1(0)/vextendsingle/vextendsingledx\n=ε2d\nnωd2d1\nωd2dˆ\nB2/vextendsingle/vextendsingleB1(x)∩B1(0)/vextendsingle/vextendsingledx\n=ε2d\nn¯cd2dωd. (3.2)\nwhere\n¯cd:=1\nωd2dˆ\nB2/vextendsingle/vextendsingleB1(x)∩B1(0)/vextendsingle/vextendsingledx\nis a dimension-dependent constant. Thus combining (3.1) and (3.2) w e find that\nES∼(Ld|Q)n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublen/summationdisplay\ni=11Bεn(Xi)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble2\nL2=ωdεd\nn/bracketleftbig\nn+n(n−1)¯cd(2εn)d/bracketrightbig\n≤ωdnεd\nn/bracketleftbig\n1+¯cd2dnεd\nn/bracketrightbig\n.\n10 WEINAN E AND STEPHAN WOJTOWYTSCH\nThis allows us to estimate\nES/bracketleftbigg\nsup\n/bardblφ/bardblL2≤1/parenleftbig\nASφ−Aφ/parenrightbig/bracketrightbigg\n≤1\nnωdεdnES/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublen/summationdisplay\ni=11Bεn(Xi)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nL2\n≤1\nnωdεdn\nES/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublen/summationdisplay\ni=11Bεn(Xi)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble2\nL2\n1\n2\n≤1\nnωdεdn/radicalBig\nωdnεdn/bracketleftbig\n1+cd2dnεdn/bracketrightbig\n=/radicalBigg\n1+¯cd2dnεdn\nωdnεdn\n=/radicalBigg\n1+¯cd2dγd\nd\nωdγd\nd\nwhen we choose\nεn=γdn−1/d\nfor a dimension-dependent constant γd.\nLipschitz estimate. IfE⊂Rdis open and bounded, denote by UEthe uniform distribution\nonE. 
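Before the Lipschitz estimate is completed below, the identity (3.1)-(3.2) can be checked numerically. The sketch below works in d = 2 with Euclidean balls on the flat torus; the values n = 40, ε = 0.05 and the grid size are arbitrary choices. Note also that by Fubini ∫_{R^d} |B_1(x) ∩ B_1(0)| dx = ω_d^2, so that c̄_d = 2^{-d} ω_d; the quadrature below recovers this value of c̄_2 independently.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps, G, trials = 2, 40, 0.05, 400, 30
omega_d = np.pi                      # volume of the Euclidean unit disc (d = 2)

# c_bar_d = (omega_d 2^d)^{-1} * int_{B_2} |B_1(x) cap B_1(0)| dx, computed by
# quadrature from the lens-area formula in two dimensions.
r = np.linspace(0.0, 2.0, 4001)
lens = 2.0 * np.arccos(np.clip(r / 2.0, -1.0, 1.0)) - (r / 2.0) * np.sqrt(np.maximum(4.0 - r**2, 0.0))
c_bar = 0.5 * float(np.sum(lens * r) * (r[1] - r[0]))
print("c_bar_2 from quadrature:", c_bar, "  (Fubini value omega_2/4 =", omega_d / 4, ")")

# Monte Carlo over samples; the L2 norm of the sum of indicators is evaluated
# by grid quadrature on the flat torus [0,1)^2.
xs = (np.arange(G) + 0.5) / G
gx, gy = np.meshgrid(xs, xs, indexing="ij")
vals = []
for _ in range(trials):
    X = rng.uniform(size=(n, d))
    count = np.zeros((G, G))
    for i in range(n):
        dx = np.abs(gx - X[i, 0]); dx = np.minimum(dx, 1.0 - dx)
        dy = np.abs(gy - X[i, 1]); dy = np.minimum(dy, 1.0 - dy)
        count += (dx**2 + dy**2 < eps**2)
    vals.append(np.mean(count**2))    # = ||sum_i 1_{B_eps(X_i)}||_{L^2}^2 on the torus

print("grid estimate of E ||sum_i 1_B||^2 :", np.mean(vals))
print("closed form from (3.1)-(3.2)       :",
      omega_d * eps**d * (n + n * (n - 1) * c_bar * (2 * eps)**d))
# Agreement is up to grid-quadrature and Monte-Carlo error.
```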
Note that\nW1/parenleftbig\nδ0,UBε/parenrightbig\n= \nBε|x|dx= \nB1ε|x|dx=ε \nB1|x|dx.\nSince the set of all transport plans between two measures given as convex combinations of\nmeasures is larger than the set of all plans which transport one ter m of the combination to\nanother, we find that\nW1/parenleftBigg\n1\nnn/summationdisplay\ni=1δxi,1\nnn/summationdisplay\ni=1UBεn(xi)/parenrightBigg\n= inf/braceleftBiggˆ\n|x−y|dπx,y/vextendsingle/vextendsingle/vextendsingle/vextendsingleπ∈ P/parenleftBigg\n1\nnn/summationdisplay\ni=1δxi,1\nnn/summationdisplay\ni=1UBεn(xi)/parenrightBigg/bracerightBigg\n≤inf/braceleftBiggˆ\n|x−y|dπx,y/vextendsingle/vextendsingle/vextendsingle/vextendsingleπ=1\nnn/summationdisplay\ni=1πi, πi∈ P/parenleftbig\nδxi,UBεn(xi)/parenrightbig/bracerightBigg\n=1\nnn/summationdisplay\ni=1W1(δxi,UBεn(xi))\n=εn \nB1|x|dx\n=γdn−1/d \nB1|x|dx.\nWe find by the triangle inequality that\nW1/parenleftBigg\nLd|Q,1\nnn/summationdisplay\ni=1UBεn(xi)/parenrightBigg\n≥W1/parenleftBigg\nLd|Q,1\nnn/summationdisplay\ni=1δxi/parenrightBigg\n−W1/parenleftBigg\n1\nnn/summationdisplay\ni=1δxi,1\nnn/summationdisplay\ni=1UBεn(xi)/parenrightBigg\n≥\nd\nd+11\n/bracketleftbig\n(d+1)ωd/bracketrightbig1\nd−γd \nB1|x|dx\nn−1/d\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 1 1\nWhen we choose γdsmall enough, we conclude as before that\nsup\nφis 1-Lipschitz/parenleftBigg\n1\nnn/summationdisplay\ni=1 \nBεn(xi)φdx−ˆ\nQφdx/parenrightBigg\n≥cdn−1/d\nfor some positive cd>0. Recall that this holds for allempirical measures, and that the Lipschitz\nconstant is an equivalent norm for our purposes.\nX-estimate. We compute that\nEXi∼Ld|Qiid/braceleftBigg\nsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1ˆ\nBεn(Xi)φdx−ˆ\nQφ(x) dx/bracketrightBigg/bracerightBigg\n=EXi∼Ld|Qiid/braceleftBigg\nsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1ˆ\nBεn(Xi)φdx−ˆ\nQφ(x) dx/bracketrightBigg/bracerightBigg\n=EXi∼Ld|Qiid/braceleftBigg\nsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1ˆ\nBεn(Xi)/parenleftbigg\nφ(y)−ˆ\nQφ(x) dx/parenrightbigg\ndy/bracketrightBigg/bracerightBigg\n=EXi∼Ld|Qiid/braceleftBigg\nsup\nφ∈BX/bracketleftBiggˆ\nBεn1\nnn/summationdisplay\ni=1/parenleftbigg\nφ(Xi+y)−ˆ\nQφ(x) dx/parenrightbigg\ndy/bracketrightBigg/bracerightBigg\n≤EXi∼Ld|Qiid/braceleftBiggˆ\nBεnsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1/parenleftbigg\nφ(Xi+y)−ˆ\nQφ(x) dx/parenrightbigg/bracketrightBigg\ndy/bracerightBigg\n=EXi∼Ld|Qiid/braceleftBigg\nsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1/parenleftbigg\nφ(Xi)−ˆ\nQφ(x) dx/parenrightbigg/bracketrightBigg/bracerightBigg\n≤CX√n\nsinceXiandXi+yhave the same law (where Xi+yis interpreted as a shift on the flat torus).\nConclusion. Since the random variables\nsup\nφ∈BX/bracketleftBigg\n1\nnn/summationdisplay\ni=1/parenleftbigg\nφ(Xi)−ˆ\nQφ(x) dx/parenrightbigg/bracketrightBigg\nare non-negative, we find that by Chebyshev’s inequality that\n(Ld|Q)n/parenleftBigg/braceleftBigg\n(X1,...,Xn)/vextendsingle/vextendsingle/vextendsingle/vextendsinglesup\nφ∈BX[AX(φ)−A(φ)]>3CX√n/bracerightBigg/parenrightBigg\n≤1\n3\nand similarly\n(Ld|Q)n/parenleftBigg/braceleftBigg\n(X1,...,Xn)/vextendsingle/vextendsingle/vextendsingle/vextendsinglesup\nφ∈BZ[AX(φ)−A(φ)]>3γdn−1/d \nB1|x|dx/bracerightBigg/parenrightBigg\n≤1\n3.\nSince theY-estimate is satisfied for any empirical measure, we conclude that t here exists a set\nof pointsx1,...,xninQsuch thatAn:=A(x1,...,xn)satisfies the conditions of the theorem. /square\nCorollary 3.4. 
There exists a 1-Lipschitz function φonQsuch that\nlimsup\nt→∞/parenleftbigg\ntγinf\n/bardblf/bardblX≤t/ba∇dblφ−f/ba∇dbl2\nL2(Q)/parenrightbigg\n=∞.\nfor allγ >4\nd−2.\n12 WEINAN E AND STEPHAN WOJTOWYTSCH\nExample 3.5.In [EW20], ‘tree-like’ function spaces WLfor fully connected ReLU networks with\nLhidden layers and bounded path-norm are introduced. There it is sh own that the unit ball BL\ninWLsatisfies the Rademacher complexity estimate\nRad(BL,S)≤2L+1/radicalbigg\n2 log(d+2)\nn\non any sample set of nelements inside [ −1,1]d. According to [SSBD14, Lemma 26.2], we have\nE(x1,...,xn)∼Pn/bracketleftBigg\nsup\n/bardblh/bardblWL≤1/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleˆ\nQh(x)P(dx)−1\nnn/summationdisplay\ni=1h(xi)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightBigg\n≤2E(x1,...,xn)∼PnRad/parenleftbig\nB,{x1,...,xn}/parenrightbig\n.\nfor any data distribution PonQ, soWLcan be chosen as Xin this section. A more detailed\nconsiderationof Rademachercomplexities in the case of Barronfun ctions (for different activation\nfunctions) canbe foundin Appendix Ain theproofofLemmaA.10. For ReLUactivation, Barron\nspace coincides with the tree-like function space W1with one hidden layer.\nCorollary 3.6. LetX=WL(Q)the tree-like function space, Y=C0,1(Q)the space of Lipschitz-\ncontinuous functions (with respect to the ℓ∞-norm on Rd) andZ=L2(Q). Then\nρ(t) =wdZ(BX\nt,BY\n1)≥2−1\nd/parenleftbig\ncd/2/parenrightbigd\nd−2\nCd/parenleftbigg\n3·2L+2/radicalBig\n2 log(2d+2)\nn/parenrightbigg2\nd−2t−2\nd−2= ¯cd2−L\nd−2t−2\nd−2\nfor a dimension-dependent constant ¯cd>0.\nIn particular, slightly deeper neural networks do not possess dra stically larger the approxima-\ntion power compared in the class of Lipschitz functions.\n4.Approximating Two-Layer Neural Networks by Kernel Methods\nA brief review of reproducing kernel Hilbert spaces and our notatio n is given in Appendix B.\nFor bounded kernels, the RKHS Hkembeds continuously into L2(P). We assume additionally\nthatHkembeds into L2(P) compactly. In Appendix B, we show that this assumption is met\nfor common random feature kernels and two-layer neural tangen t kernels. Compactness allows\nus to apply the Courant-Hilbert Lemma, which is often used in the eige nvalue theory of elliptic\noperators.\nLemma 4.1. [Dob10, Satz 8.39] LetHbe a real, separable infinite-dimensional Hilbert space\nwith two symmetric and continuous bilinear forms B,K:H×H→R. Assume that\n(1)Kis continuous in the weak topology on H,\n(2)K(u,u)>0for allu/ne}ationslash= 0, and\n(3)Bis coercive relative K, i.e.\nB(u,u)≥c/ba∇dblu/ba∇dbl2\nH−˜cK(u,u)\nfor constants c,˜c>0.\nThen the eigenvalue problem\nB(u,ui) =λiK(u,ui)∀u∈H\nhas countably many solutions (λi,ui). 
Every eigenvalue has finite multiplicity, and if sorted in\nascending order then\nlim\ni→∞λi= +∞.\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 1 3\nThe space spanned by the eigenvectors uiis dense in Hand the eigenvectors satisfy the orthog-\nonality relation\nK(ui,uj) =δij, B(ui,uj) =λiK(ui,uj) =λiδij.\nWe can expand the bilinear forms as\nB(u,v) =∞/summationdisplay\ni=1λiK(ui,u)K(ui,v), K(u,v) =∞/summationdisplay\ni=1K(ui,u)K(ui,v).\nThe pairs (λi,ui)are defined as solutions to the sequence of variational probl ems\nλi=B(ui,ui) = inf/braceleftbiggB(u,u)\nK(u,u)/vextendsingle/vextendsingle/vextendsingle/vextendsingleK(u,uj) = 0∀1≤j≤i−1/bracerightbigg\n.\nConsiderH=L2(P) and\nB(u,v) =/an}b∇acketle{tu,v/an}b∇acket∇i}htL2(P), K(u,v) =ˆ\nRd×Rdu(x)v(x′)k(x,x′)P(dx)P(dx′) =/an}b∇acketle{tKu,v/an}b∇acket∇i}htL2(P)\nwhere\nKu(x) =ˆ\nRdk(x,x′)u(x′)P(dx′).\nIt is easy to see that the assumptions of the Courant-Hilbert Lemm a are indeed satisfied. In\nparticular, note that\nK(u,ui) =µiB(u,ui)∀u∈L2(P)⇔Kui=µiui.\nBy definition, all eigenfunctions lie in the reproducing kernel Hilbert s pace.\nLetX=Hkbe a suitable RKHS, Y=B(P) Barron space (see Appendix A for a brief review),\nZ=W=L2(P). ThenX,Y ֒−→Z. Furthermore, consider the sequence of n-dimensional spaces\nXn= span{u1,...,un} ⊆X\nspanned by the first neigenfunctions of Kand the maps\nAn:Z→W, A n=PXn, A= idL2(P)\nwherePVdenotes the Z-orthogonal projection onto the subspace V. Due to the orthogonality\nstatement in the Courant-Hilbert Lemma, PVis alsotheX-orthogonalprojectionfor this specific\nsequence of spaces.\nLemma 4.2. IfP=Ld|[0,1]d, we have the estimates\n/ba∇dblAn−A/ba∇dblL(X,W)≤1\nλ1/2\nn+1,/ba∇dblAn−A/ba∇dblL(Y,W)≥c\ndn−1/d,/ba∇dblAn−A/ba∇dblL(Z,W)≤1\nwherec>0is a universal constant.\nProof of Lemma 4.2. X-estimate. Normalizetheeigenfunctions fiofktobeL2(P)-orthonormal.\nForf∈ Hkπwe have the expansion f=/summationtext∞\ni=1aifiand thus (A−An)f=/summationtext∞\ni=n+1aifisuch\nthat\n/ba∇dbl(An−A)f/ba∇dbl2\nW=∞/summationdisplay\ni=n+1|ai|2≤∞/summationdisplay\ni=n+1λi\nλn+1|ai|2=/ba∇dblAnf/ba∇dbl2\nX\nλn+1≤/ba∇dblf/ba∇dblX\nλn+1≤1\nλn+1\nY-estimate. See [Bar93, Theorem 6]. There it is shown that anysequence of n-dimensional\nspaces suffers from the curse of dimensionality when approximating a subset of Barron space.\nZ-estimate. The orthogonal projection A−An=PX⊥nhas norm one. /square\n14 WEINAN E AND STEPHAN WOJTOWYTSCH\nThus, if an RKHS has rapidly decreasing eigenvalues independently of the dimension of the\nambient space (which is favourable from the perspective of statist ical learning theory), then it\nsuffers from a slow approximation property.\nExample 4.3.In Appendix B, we give examples of random feature kernels and neur al tangent\nkernels for which λk≤cdk−1\n2+3\n2d. In Appendix A, we briefly discuss Barron space. Applying\nLemma 2.1 with\nα=1\n4−3\n4d, β=1\nd⇒β\nα−β=1\nd\nd\n4d−3\n4d−4\n4d=4\nd−7,\nwe see that the L2-width of the RKHS in Barron space is bounded from below by\nρ(t) := sup\n/bardblφ/bardblB(P)≤1inf\n/bardblψ/bardblHk,P≤t/ba∇dblφ−ψ/ba∇dblL2(P)≥cdt−β\nα−β=cdt−4\nd−10.\nDue to Lemma 2.3, there exists a function φin Barron space such that\nlimsup\nt→∞/parenleftBigg\ntγ·inf\n/bardblψ/bardblHk,P≤t/ba∇dblφ−ψ/ba∇dblL2(P)/parenrightBigg\n=∞\nfor allγ >4\nd−7.\n5.Discussion\nFrom the viewpoint of functional analysis and more precisely functio n spaces, a fundamental\ntask in machine learning is balancing the approximation and estimation e rrors of a hypothesis\nclass. 
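To get a sense of the rates obtained above, the following arithmetic sketch (all constants are dropped, so the numbers are only indicative) shows how fast the norm budget t required by a lower bound of the form ρ(t) ≥ c t^{-2/(d-2)}, as in Corollary 3.6, grows with the dimension when a fixed target accuracy is prescribed.

```python
# Illustrative only: ignoring the unknown constant, a lower bound
#   rho(t) >= c * t^(-2/(d-2))
# forces a norm budget of order  t ~ delta^(-(d-2)/2)  to reach error delta.
for delta in (0.5, 0.1):
    row = []
    for d in (4, 10, 20, 50, 100):
        t_needed = delta ** (-(d - 2) / 2)
        row.append(f"d={d}: t ~ {t_needed:.2e}")
    print(f"target error {delta}:  " + ",  ".join(row))
```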
In overly expressive function classes, it may be difficult to ass ess the performance of a\nfunction from a small data sample, whereas too restrictive functio n classes lack the expressivity\nto perform well in many problems. In this article, we made a first step in trying to quantify\nthe competition between estimation and approximation. Our results show that linear function\nclasses in which the estimation error is strongly controlled (including, but not limited to those\ndeveloped for infinitely wide neural networks), the approximation e rror must suffer from the\ncurse of dimensionality within the class of Lipschitz-functions. Addit ionally, we emphasize that\nkernel methods (including some neural tangent kernels) are subj ect to the curse ofdimensionality\nwhere adaptive methods like shallow neural networks are not.\nIn a companion article [WE20], we show that the Barron norm and the R KHS norm increase\nat most linearly in time during gradient flow training in the mean field scalin g regime. This\nmeans that the L2-population risk of a shallow neural network or kernel function can only decay\nliket−αdfor general Lipschitz or Barron target functions respectively, w hereαdis close to zero in\nhigh dimension. It is thereforeofcrucialimportanceto understan dthe function spacesassociated\nwith neural network architectures under the natural path norm s.\nWhile the theory of function spaces for low-dimensional analysis (So bolev, Besov, BV, BD,\netc.) is well studied, the spaces for high-dimensional (but not infinit e-dimensional) analysis is\nat its very beginning. To the best of our knowledge, the currently a vailable models for neural\nnetworks consider infinitely wide two-layer or multi-layer networks o r infinitely deep networks\nwith bounded width [EMW19c, EW20, EMW19b]. A different perspective on the approximation\nspaces of deep networks focussing on the number of parameters (but not their size) is developed\nin [GKNV19].\nEven for existing function spaces, it is hard to check whether a give n function belongs to the\nspace. Barron’s original work [Bar93] shows (in modern terms) tha t every sufficiently smooth\nfunction on an extension domain belongs to Barron space, where th e required degree of smooth-\nness depends on the dimension. More precisely, if fis a function on Rdsuch that its Fourier\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 1 5\ntransform ˆfsatisfies\nCf:=ˆ\nRd/vextendsingle/vextendsingleˆf(ξ)/vextendsingle/vextendsingle|ξ|dξ <∞,\nthen for every compact set K⊂Rdthere exists a Barron function gsuch thatf≡gonRd.\nIn particular, if f∈Hs(Rd) wheres >d\n2+ 1, thenCf<∞. In particular, if f∈Ck(Ω) for\nk >d\n2+1 and Ω has smooth boundary, then by standard extension result s we see that f∈ B.\nThis holds for networkswith anysigmoidal activation function or ReLU activation. The criterion\nis unsatisfying in two ways:\n(1) The function fhas to be defined on the whole space for Fourier-analytic considera tions\nto apply. Given a function defined on a compact set, one has to find a good extension\nto the whole space.\n(2) Theconstant CfmerelygivesanupperboundontheBarron-normofatwolayernetw ork.\nIff /∈C1, thenCf= +∞. Iff(x) =/summationtextm\ni=1aiReLU(wT\nix+bi) is a two-layer neural\nnetworkwith finitely manynodes, then fis onlyLipschitzcontinuousandnot C1-smooth\n(unless it is linear). Thus the criterion misses many functions of prac tical importance.\n5.1.Open Problems. 
Many questions in this field remain open.\n(1) The slow approximation property in L∞is based purely on the slow convergence of\nempirical measures, while the L2-construction also uses the translation-invariance of\nLebesgue measure for convenience. Does a ‘curse of dimensionality ’ type phenomenon\naffectL2-approximationwhen Phas a density with respect to Lebesgue measure, or more\ngenerally is a regular measure concentrated on or close to a high-dim ensional manifold\nin an even higher-dimensional ambient space?\n(2) We used Lipschitz functions for convenience, but we believe tha t a similar phenomenon\nholds forCkfunctions for any fixed kwhich does not scale with dimension. To apply\nthe same approach, we need to answer how quickly the 1-Wasserst ein-type distances\n/tildewiderWCk,λ(µ,ν) = sup/braceleftbiggˆ\nfµ(dx)−ˆ\nfν(dx)/vextendsingle/vextendsingle/vextendsingle/vextendsinglefis 1-Lipschitz, |Dkf|∞≤λ/bracerightbigg\ndecay in expectation when ν=µnis an empirical measure sampled iid from µ. Other\nconcepts of Wasserstein-type distance would lead to similar results .\n(3) To prove the curse of dimensionality phenomenon, we used a mult i-scale construction\nwith modifications on quickly diverging scales. The statement about t he upper limit is\nonly a ‘worst case curse of dimensionality’ and describes slow conver gence on an infinite\nset of vastly different scales. Replacing the upper limit in Theorem 2.3 b y a lower limit\nwould lead to a much stronger statement with more severe implication s for applications.\n(4) Our proof of slow approximation was purely functional analytic a nd abstract. Is it\npossible to give a concrete example of a Lipschitz function which is poo rly approximated\nby Barron functions of low norm in L2(P) for a suitable data measure P?\nMore generally, is it possible to find general criteria to establish how w ell a given\nLipschitz function (or even a given collection of data) can be approx imated by a certain\nnetwork architecture?\n(5) We proved that a Lipschitz function φexists for which\ninf\n/bardblf/bardblBarron≤t/ba∇dblf−φ/ba∇dblL2≥cdt−1/d\non a suitable collection of scales tk. Can we more generally characterize the class\nYα:=/braceleftbigg\ny∈Y/vextendsingle/vextendsingle/vextendsingle/vextendsinglelimsup\nt→∞/bracketleftbig\ntαdistZ(y,t·BX)/bracketrightbig\n<∞/bracerightbigg\n16 WEINAN E AND STEPHAN WOJTOWYTSCH\nforα >0,X= Barron space, Y= Lipschitz space and Z=L2? This question arises\nnaturallywhen consideringtrainingalgorithmswhich increasethe com plexity-controlling\nnorm only slowly. It is related, but not identical to considerations in t he theory of real\ninterpolation spaces.\n(6) The draw-back of the functional analytic approach of this art icle is that it does not\nencompass many common loss functionals such as logistic loss. Is it po ssible to show by\ndifferent means that they are subject to similar problems in high dimen sion?\n(7) We give relevant examples of reproducing kernel Hilbert spaces in which the eigenvalues\nof the kernel decay at a dimension-independent rate (at leading or der). To the best of\nour knowledge, a general perspective on the decay of eigenvalues of kernels in practical\napplications without strong symmetry assumptions is still missing.\nAppendix A.A Brief Review of Barron Space\nFor the convenience of the reader, we recall Barron space for tw o-layer neural networks as\nintroducedbyE,MaandWu[EMW19b,EMW18]. 
Wefocusonthefunct ionalanalyticproperties\nof Barron space, for results with a focus on machine learning we ref er the reader to the original\nsources. The same space is denoted as F1in [Bac17], but described from a different perspective.\nFor functional analytic notions, we refer the reader to [Bre11].\nLetPbe a probability measure on Rdandσa Lipschitz-continuous function such that either\n(1)σ= ReLU or\n(2)σis sigmoidal, i.e. lim z→±∞σ(z) =±1 (or 0 and 1).\nConsider the class Fmof two-layer networks with mneurons\nfΘ(x) =1\nmm/summationdisplay\ni=1aiσ/parenleftbig\nwT\nix+bi/parenrightbig\n,Θ ={(ai,wi,bi)∈Rd+2}m\ni=1.\nIt is well-known that the closure of F=/uniontext∞\nm=1Fmin the uniform topology is the space of\ncontinuous functions, see e.g. [Cyb89]. Barron space is a different c losure of the same function\nclass where the path-norm\n/ba∇dblfΘ/ba∇dblpath=1\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|ℓq+|bi|/bracketrightbig\nremains bounded. Here we assume that data space Rdis equipped with the ℓp-norm and take\nthe dualℓq-norm onw. The concept of path norm corresponds to ReLU activation, a sligh tly\ndifferent path norm for bounded Lipschitz activation is discussed be low.\nThe same class is often discussed without the normalizing factor of1\nm. With the factor, the\nfollowing concept of infinitely wide two-layer networks emerges more naturally.\nDefinition A.1. Letπbe a Radon probability measure on Rd+2with finite second moments,\nwhich we denote by π∈ P2(Rd+2). We denote by\nfπ(x) =ˆ\nRd+2aσ(wTx+b)π(da⊗dw⊗db)\nthe two-layer network associated to π.\nIfσ= ReLU, it is clear that fπis Lipschitz-continuous on Rdwith Lipschitz-constant ≤\n/ba∇dblfπ/ba∇dblB(P), sofπlies in the space of (possibly unbounded) Lipschitz functions Lip( Rd). Ifσis\na bounded Lipschitz function, the integral converges in C0(Rd) without assumptions on the\nmoments of π. If the second moments of πare bounded, fπis a Lipschitz function also in the\ncase of bounded Lipschitz activation σ.\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 1 7\nFor technical reasons, we will extend the definition to distributions πfor which only the mixed\nsecond moments ˆ\nRd+2|a|/bracketleftbig\n|w|+ψ(b)/bracketrightbig\nπ(da⊗dw⊗db)\nare finite, where ψ=|b|in the ReLU case and ψ= 1 otherwise. Now we introduce the associated\nfunction space.\nDefinition A.2. We denote\n/ba∇dbl·/ba∇dblB(P): Lip(Rd)→[0,∞),/ba∇dblf/ba∇dblB(P)= inf\n{π|fπ=fP−a.e.}ˆ\nRd+2|a|/bracketleftbig\n|w|ℓq+|b|/bracketrightbig\nπ(da⊗dw⊗db)\nifσ= ReLU and\n/ba∇dbl·/ba∇dblB(P): Lip(Rd)→[0,∞),/ba∇dblf/ba∇dblB(P)= inf\n{π|fπ=fP−a.e.}ˆ\nRd+2|a|/bracketleftbig\n|w|ℓq+1/bracketrightbig\nπ(da⊗dw⊗db)\notherwise. In either case, we denote\nB(P) =/braceleftbig\nf∈Lip(Rd) :/ba∇dblf/ba∇dblB(P)<∞/bracerightbig\n.\nHere inf∅= +∞.\nRemarkA.3.It dependson the activationfunction σwhetherornot the infimum in thedefinition\nofthe normis attained. If σ= ReLU, this canbe shownto be true byusing homogeneity. Instead\nof a probability measure πon the whole space, one can use a signed measure µon the unit\nsphere to express a two layer network. The compactness theore m for Radon measures provides\nthe existence of a measure minimizer, which can then be lifted to a pro bability measure (see\nbelow for similar arguments). On the other hand, if σis a classical sigmoidal function such that\nlim\nz→−∞σ(z) = 0,lim\nx→∞σ(z) = 1,0<σ(z)<1∀z∈R,\nthen the function f(z)≡1 has Barron norm 1, but the infimum is not attained. 
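As an illustration of Definitions A.1 and A.2 (Remark A.3 continues below), a Barron function with a closed form is f*(x) = |x|_2, represented by a = 1/c_d, b = 0 and w uniform on the unit sphere, where c_d = E[σ(w·e_1)]. The Python sketch below samples finite networks from this representation; it is only a sketch, the weights w are measured in the Euclidean norm for simplicity, and the dimension, widths and sample sizes are arbitrary. The printed path norm 1/c_d is an upper bound for the Barron norm of f*, and the L²(P) error decays at the Monte-Carlo rate m^{-1/2}.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10

def sphere(n):
    W = rng.normal(size=(n, d))
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# Target f*(x) = |x|_2.  By rotational symmetry, E_{w ~ Unif(S^{d-1})}[ReLU(w.x)] = c_d |x|
# with c_d = E[ReLU(w_1)], so f* = f_pi for pi = law of (a, w, b) = (1/c_d, w, 0).
c_d = np.maximum(sphere(500_000)[:, 0], 0.0).mean()

X = rng.uniform(-1.0, 1.0, size=(2_000, d))          # test points, P = Unif([-1,1]^d)
target = np.linalg.norm(X, axis=1)
print(f"d = {d}:  path norm of every sampled network = 1/c_d ~ {1.0 / c_d:.2f}")

for m in (10, 100, 1_000):
    errs = []
    for _ in range(20):                               # independent m-neuron networks
        W = sphere(m)
        f_m = np.maximum(X @ W.T, 0.0).mean(axis=1) / c_d
        errs.append(np.sqrt(np.mean((f_m - target) ** 2)))
    print(f"m = {m:5d}:  L2(P) error ~ {np.mean(errs):.4f}   (expect decay like m^(-1/2))")
```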
This holds true\nforanydata distribution P.\nProof.\n(1) For any x∈spt(P) we have\n1 =f(x) =ˆ\nRd+2aσ(wTx+b)dπ≤ˆ\nRd+2|a|dπ≤ˆ\nRd+2|a|/bracketleftbig\n|w|+1/bracketrightbig\ndπ.\nTaking the infimum over all π, we find that 1 ≤ /ba∇dblf/ba∇dblB(P). For any measure π, the\ninequality above is strict since |σ|<1, so there is no πwhich attains equality.\n(2) We consider a family of measures\nπλ=δa=σ(λ)−1δw=0δb=λ⇒fπλ≡1 andˆ\nRd+2|a|/bracketleftbig\n|w|ℓq+1/bracketrightbig\ndπλ=1\nσ(λ)→1\nasλ→ ∞.\nThus/ba∇dblf/ba∇dblB(P)= 1, but there is no minimizing parameter distribution π.\nRemark A.4.The space B(P) does not depend on the measure P, but only on the system of null\nsets forP.\nWe note that the space B(P) is reasonably well-behaved from the point of view of functional\nanalysis.\nLemma A.5. B(P)is a Banach space with norm /ba∇dbl· /ba∇dblB(P). Ifspt(P)is compact, B(P)embeds\ncontinuously into the space of Lipschitz functions C0,1(sptP).\n18 WEINAN E AND STEPHAN WOJTOWYTSCH\nProof.Scalar multiplication. Letf∈ B(P). Forλ∈Randπ∈ P2(Rd+2), define the push-\nforward\nTλ♯π∈ P2(Rd+2) along Tλ:Rd+2→Rd+2, Tλ(a,w,b) = (λa,w,b).\nThen\nfTλ♯π=λfπ,ˆ\nRd+2|a|/bracketleftbig\n|w|+1/bracketrightbig\nTλ♯π(da⊗dw⊗db) =|λ|ˆ\nRd+2|a|/bracketleftbig\n|w|+1/bracketrightbig\nπ(da⊗dw⊗db)\nand similarly in the ReLU case. Thus scalar multiplication is well-defined in B(P). Taking the\ninfimum over π, we find that /ba∇dblλf/ba∇dblB(P)=|λ|/ba∇dblf/ba∇dblB(P).\nVector addition. Letg,h∈ B(P). Chooseπg,πhsuch thatg=fπgandh=fπh. Consider\nπ=1\n2/bracketleftbig\nT2♯πg+T2♯πh/bracketrightbig\nlike above. Then fπ=g+hand\nˆ\nRd+2|a|/bracketleftbig\n|w|+1/bracketrightbig\nπ(da⊗dw⊗db) =ˆ\nRd+2|a|/bracketleftbig\n|w|+1/bracketrightbig\nπg(da⊗dw⊗db)+ˆ\nRd+2|a|/bracketleftbig\n|w|+1/bracketrightbig\nπh(da⊗dw⊗db).\nTaking infima, we see that /ba∇dblg+h/ba∇dblB(P)≤ /ba∇dblg/ba∇dblB(P)+/ba∇dblh/ba∇dblB(P). The same holds in the ReLU case.\nPositivity and embedding. Recall that the norm on the space of Lipschitz functions on a\ncompact set Kis\n/ba∇dblf/ba∇dblC0,1(K)= sup\nx∈K|f(x)|+ sup\nx,y∈K,x/ne}ationslash=y|f(x)−f(y)|\n|x−y|.\nIt is clear that /ba∇dbl·/ba∇dblB(P)≥0. Ifσis a bounded Lipschitz function, then\n/ba∇dblfπ/ba∇dblL∞(sptP)≤ /ba∇dblfπ/ba∇dblB(P)/ba∇dblσ/ba∇dblL∞(R)\nlike in Remark A.3. If σ= ReLU, then\nsup\nx∈Rd|fπ(x)|\n1+|x|≤ /ba∇dblfπ/ba∇dblB(P).\nIn either case\n|fπ(x)−fπ(y)| ≤[σ]Lip/ba∇dblfπ/ba∇dblB(P)\nfor allx,yin spt(P). In particular, /ba∇dblf/ba∇dblB(P)>0 whenever f/ne}ationslash= 0 inB(P) and if spt( P) is compact,\nB(P) embeds into the space of Lipschitz functions on B(P). Ifσis a bounded Lipschitz function,\nB(P) embeds into the space of bounded Lipschitz functions also on unbo unded sets.\nCompleteness. Completeness is proved most easily by introducing a different repres entation\nfor Barron functions. Consider the space\nV=/braceleftbigg\nµ/vextendsingle/vextendsingle/vextendsingle/vextendsingleµ(signed) Radon measure on Rd+2s.t.ˆ\nRd+2|a|/bracketleftbig\n|w|+|b|/bracketrightbig\n|µ|(da⊗dw⊗db)<∞/bracerightbigg\nwhere|µ|is the total variation measure of µ. Equipped with the norm\n/ba∇dblµ/ba∇dblV=ˆ\nRd+2|a|/bracketleftbig\n|w|+|b|/bracketrightbig\n|µ|(da⊗dw⊗db),\nVis a Banach space when we quotient out measures supported on {|a|= 0}∪{|b|=|w|= 0}or\nrestrict ourselves to the subspace of measures such that |µ|({|a|= 0}) =|µ|({|b|=|w|= 0}) = 0.\nThe only non-trivial question is whether Vis complete. 
By definition µnis a Cauchy sequence in\nVif and only if νn:=|a|[|w|+|b|]·µnis a Cauchy sequence in the space of finite Radon measures.\nSince the space of finite Radon measures is complete, νnconverges (strongly) to a measure ν\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 1 9\nwhich satisfies ν({a= 0}∪ {(w,b) = 0}) = 0. We then obtain µ:=|a|−1[|w|+|b|]−1·ν. For\nµ∈Vwe write\nfµ(x) =ˆ\nRd+2aσ(wTx+b)µ(da⊗dw⊗db)\nand consider the subspace\nV0\nP={µ∈V|fµ= 0P−almost everywhere }.\nSince the map\nV/ma√sto→C0,1(sptP), µ/ma√sto→fµ\nis continuous by the same argument as before, we find that V0\nPis a closed subspace of V. In\nparticular,V/V0\nPis a Banach space. We claim that B(P) is isometric to the quotient space V/V0\nP\nby the map [ µ]/ma√sto→fµwhereµis any representative in the equivalence class [ µ].\nIt is clear that any representative in the equivalence class induces t he same function fµsuch\nthat themap iswell-defined. Considerthe Hahndecomposition µ=µ+−µ−ofµasthe difference\nof non-negative Radon measures. Set\nm±:=/ba∇dblµ±/ba∇dbl, π=1\n2/bracketleftbigg1\nm+T2m+♯µ++1\nm−T−2m−♯µ−/bracketrightbigg\n.\nThenπis a probability Radon measure such that fπ=fµand\nˆ\nRd+2|a|/bracketleftbig\n|w|+|b|/bracketrightbig\nπ(da⊗dw⊗db) =ˆ\nRd+2|a|/bracketleftbig\n|w|+|b|/bracketrightbig\n|µ|(da⊗dw⊗db).\nIn particular, /ba∇dblfµ/ba∇dblB(P)≤ /ba∇dblµ/ba∇dblV. Taking the infimum of the right hand side, we conclude that\n/ba∇dblfµ/ba∇dblB(P)≤ /ba∇dbl[µ]/ba∇dblV/V0\nP. The opposite inequality is trivial since every probability measure is in\nparticular a signed Radon measure.\nThusB(P) is isometric to a Banach space, hence a Banach space itself. We pre sented the\nargument in the context of ReLU activation, but the same proof ho lds for bounded Lipschitz\nactivation. /square\nA few remarks are in order.\nRemark A.6.The requirement that Phave compact support can be relaxed when we consider\nthe norm\n/ba∇dblf/ba∇dblC0,1(P)= sup\nx,y∈sptP,x/ne}ationslash=y|f(x)−f(y)|\n|x−y|+/ba∇dblf/ba∇dblL1(P)\non the space of Lipschitz functions. Since Lipschitz functions grow at most linearly, this is\nwell-defined for all data distributions Pwith finite first moments.\nRemark A.7.For general Lipschitz-activation σwhich is neither bounded nor ReLU, the Barron\nnorm is defined as\n/ba∇dblf/ba∇dblB(P)= inf\n{π|fπ=fP−a.e.}ˆ\nRd+2|a|/bracketleftbig\n|w|ℓq+|b|+1/bracketrightbig\nπ(da⊗dw⊗db).\nSimilar results hold in this case.\nRemark A.8.In general, B(P) for ReLU activation is not separable. Consider P=L1|[0,1]to be\nLebesgue measure on the unit interval in one dimension. For α∈(0,1) setfα(x) =σ(x−α).\nThen forβ >αwe have\n1 =β−α\nβ−α=(fα−fβ)(β)−(fα−fβ)(α)\nβ−α≤[fβ−fα]Lip≤ /ba∇dblfβ−fα/ba∇dblB(P).\nThus there exists an uncountable family of functions with distance ≥1, meaning that B(P)\ncannot be separable.\n20 WEINAN E AND STEPHAN WOJTOWYTSCH\nRemark A.9.In general, B(P) for ReLU activation is not reflexive. We consider Pto be the\nuniform measure on [0 ,1] and demonstrate that B(P) is the space of functions whose first deriv-\native is inBV(i.e. whose second derivative is a Radon measure) on [0 ,1]. The space of Radon\nmeasures on [0 ,1] is denoted by M[0,1] and equipped with the total variation norm.\nAssume that fis a Barron function on [0 ,1]. 
Then\nf(x) =ˆ\nR3aσ(wx+b)π(da⊗dw⊗db)\n=ˆ\n{w/ne}ationslash=0}a|w|σ/parenleftbiggw\n|w|x+b\n|w|/parenrightbigg\nπ(da⊗dw⊗db)+ˆ\n{w=0}aσ(b)π(da⊗dw⊗db)\n=ˆ\nRσ/parenleftBig\n−x+˜b/parenrightBig\nµ1(d˜b)+ˆ\nRσ(x+b)µ2(d˜b)+ˆ\n{w=0}aσ(b)π(da⊗dw⊗db)\nwhere\nµ1=T♯/parenleftbig\na|w|·π/parenrightbig\n, T:{w<0} →R, T(a,w,b) =b\n|w|.\nand similarly for µ2. The Barron norm is expressed as\n/ba∇dblf/ba∇dblB(P)= inf\nµ1,µ2/bracketleftbiggˆ\nR1+|˜b||µ1|(d˜b)+ˆ\nR1+|˜b||µ2|(d˜b)/bracketrightbigg\n+ˆ\n{w=0}|ab|π(da⊗dw⊗db).\nSinceσ′′=δ, we can formally calculate that\nf′′=µ1+µ2\nThis is easily made rigorous in the distributional sense. Since [0 ,1] is bounded by 1, we obtain\nin addition to the bounds on f(0) and the Lipschitz constant of fthat\n/ba∇dblf′′/ba∇dblM[0,1]≤2/ba∇dblf/ba∇dblB(P).\nOn the other hand, if fhas a second derivative, then\nf(x) =f(0)+f′(0)x+ˆx\n0ˆt\n0f′′(ξ)dξdt\n=f(0)+f′(0)x+ˆx\n0(x−t)f′′(t)dt\n=f(0)σ(1)+f′(0)σ(x)+ˆx\n0f′′(t)σ(x−t)dt\nfor allx∈[0,1]. This easily extends to measure valued derivatives and we conclude that\n/ba∇dblf/ba∇dblB(P)≤ |f(0)|+|f′(0)|+/ba∇dblf′′/ba∇dblM[0,1].\nWe can thus express B(P) =BV([0,1])×R×Rwith an equivalent norm\n/ba∇dblf/ba∇dbl′\nB(P)=|f(0)|+|f′(0)|+/ba∇dblf′′/ba∇dblM[0,1].\nThusB(P) is not reflexive since BVis not reflexive.\nFinally, we demonstrate that integration by empirical measures con verges quickly on B(P).\nAssume that p=∞,q= 1.\nLemma A.10. The uniform Monte-Carlo estimate\nEXi∼P/braceleftBigg\nsup\n/bardblφ/bardblB(P)≤1/bracketleftBigg\n1\nnn/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x)P(dx)/bracketrightBigg/bracerightBigg\n≤2L/radicalbigg\n2 log(2d)\nn\nholds for any probability distribution Psuch that spt(P)⊆[−1,1]d. HereLis the Lipschitz-\nconstant of σ.\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 2 1\nProof.A single data point may underestimate or overestimate the average integral, and the\nproof of convergence relies on these cancellations. A convenient t ool to formalize cancellation\nand decouple this randomness from other effects is through Radem acher complexity [SSBD14,\nChapter 26].\nWe denote S={X1,...,Xn}and assume that Sis drawn iid from the distribution P. We\nconsider an auxiliary random vector ξsuch that the entries ξiare iid (and independent of S)\nvariables which take the values ±1 with probability 1 /2. Furthermore, abbreviate by Bthe unit\nball inB(P). Furthermore\nRep(B,S) = sup\nφ∈B/bracketleftBigg\n1\nnn/summationdisplay\ni=1φ(Xi)−ˆ\nQφ(x)P(dx)/bracketrightBigg\nRad(B,S) =Eξsup\nφ∈B/bracketleftBigg\n1\nnn/summationdisplay\ni=1ξiφ(Xi)/bracketrightBigg\n.\nAccording to [SSBD14, Lemma 26.2], the Rademacher complexity Rad b ounds the represen-\ntativeness of the set Sby\nESRep(B,S)≤2ESRad(B,S).\nThe unit ball in Barron space is given by convex combinations of funct ions\nφw,b(x) =±σ(wTx+b)\n|w|+1or±σ(wTx+b)\n|w|+|b|\nrespectively, so for fixed ξ, the linear map φ/ma√sto→1\nn/summationtextn\ni=1ξiφ(Xi) at one of the functions in the\nconvex hull, i.e.\nRad(B,S) =Eξsup\n(w,b)/bracketleftBigg\n1\nnn/summationdisplay\ni=1ξiσ(wTXi+b)\n|w|+1/bracketrightBigg\n.\nAccording to the Contraction Lemma [SSBD14, Lemma 26.9], the Lipsc hitz-nonlinearity σcan\nbe neglected in the computation of the complexity. 
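As an aside to Remark A.9 (the proof of Lemma A.10 continues below), the one-dimensional integral representation is easy to test numerically. The sketch below discretizes it for f(x) = sin(3x) on [0,1] by a midpoint rule with J ReLU neurons; the choice of f and of J is arbitrary, and the printed quantity |f(0)| + |f'(0)| + ∫_0^1 |f''| is the corresponding upper bound for the Barron norm.

```python
import numpy as np

# f(x) = sin(3x) on [0,1]:  f(0) = 0, f'(0) = 3, f''(t) = -9 sin(3t).
J = 400
t = (np.arange(J) + 0.5) / J                 # midpoints for the integral over [0,1]
dt = 1.0 / J
a = -9.0 * np.sin(3.0 * t) * dt              # outer weights f''(t_j) * dt

x = np.linspace(0.0, 1.0, 2001)
relu = lambda z: np.maximum(z, 0.0)
# Discretization of  f(x) = f(0) sigma(1) + f'(0) sigma(x) + int_0^x f''(t) sigma(x - t) dt.
g = 0.0 * relu(np.ones_like(x)) + 3.0 * relu(x) + relu(x[:, None] - t[None, :]) @ a

print("max |f - g| on [0,1]              :", np.max(np.abs(np.sin(3.0 * x) - g)))
print("|f(0)| + |f'(0)| + int_0^1 |f''|  :", 0.0 + 3.0 + np.sum(np.abs(a)))
```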
If Lis the Lipschitz-constant of σ, then\nEξsup\n(w,b)/bracketleftBigg\n1\nnn/summationdisplay\ni=1ξiσ(wTXi+b)\n|w|+1/bracketrightBigg\n≤LEξsup\n(w,b)/bracketleftBigg\n1\nnn/summationdisplay\ni=1ξiwTXi+b\n|w|+1/bracketrightBigg\n=LEξsup\nw∈RdwT\n|w|+11\nnn/summationdisplay\ni=1ξiXi\n=LEξsup\n|w|≤1wT1\nnn/summationdisplay\ni=1ξiXi\n≤Lsup\ni|Xi|∞/radicalbigg\n2 log(2d)\nn\nwhere we used [SSBD14, Lemma 26.11] for the complexity bound of th e linear function class and\n[SSBD14, Lemma 26.6] to eliminate the scalar translation. A similar comp utation can be done\nin the ReLU case. /square\nAppendix B.A Brief Review of Reproducing Kernel Hilbert Spaces, Random\nFeature Models, and the Neural Tangent Kernel\nFor an introduction to kernel methods in machine learning, see e.g. [ SSBD14, Chapter 16] or\n[CS09] in the context of deep learning. Let k:Rd×R→Rbe a symmetric positive definite\n22 WEINAN E AND STEPHAN WOJTOWYTSCH\nkernel. The reproducing kernel Hilbert space (RKHS) Hkassociated with kis the completion of\nthe collection of functions of the form\nh(x) =n/summationdisplay\ni=1aik(x,xi)\nunder the scalar product /an}b∇acketle{tk(x,xi),k(x,xj)/an}b∇acket∇i}htHk=k(xi,xj). Note that, due to the positive defi-\nniteness of the kernel, the representation of a function is unique.\nB.1.Random Feature Models. The random feature models considered in this article are\nfunctions of the same form\nf(x) =1\nmm/summationdisplay\ni=1aiσ/parenleftbig\nwT\nix+bi/parenrightbig\nas a two-layer neural network. Unlike neural networks, ( wi,bi) are not trainable variables and\nremain fixed after initialization. Random feature models are linear (so easier to optimize) but\nless expressive than shallow neural networks. While two-layer neur al networks of infinite width\nare modelled as\nf(x) =ˆ\nRd+2aσ(wTx+b)π(da⊗dw⊗db)\nfor variable probability measures π, an infinitely wide random feature model is given as\nf(x) =ˆ\nRd+1a(w,b)σ(wTx+b)π0(dw⊗db)\nfor a fixed distribution π0onRd+1. One can think of Barron space as the union over all random\nfeature spaces. It is well known that neural networks can repre sent the same function in different\nways. For ReLU activation, there is a degree of degeneracy due to the identity\n0 =x+α−(x+α) =σ(x)−σ(−x)+σ(α)−σ(−α)−σ(x+α)+σ(−x−α).\nThis can be generalized to higher dimension and integrated in αto show that the random\nfeature representation of functions where π0is the uniform distribution on the sphere has a\nsimilar degeneracy. For given π0, denote\nN=/braceleftbigg\na∈L2(π0)/vextendsingle/vextendsingle/vextendsingle/vextendsingleˆ\nRd+1a(w,b)σ(wTx+b)π0(dw⊗db) = 0P−a.e./bracerightbigg\nLemma B.1. [RR08, Proposition4.1] The space of random feature models is dense in the RKHS\nfor the kernel\nk(x,x′) =ˆ\nRd+1σ(wTx+b)σ(wTx′+b)π0(dw⊗db)\nand if\nf(x) =ˆ\nRd+1a(w,b)σ(wTx+b)π0(dw⊗db)\nfora∈L2(π0)/N, then\n/ba∇dblf/ba∇dblHk=/ba∇dbla/ba∇dblL2(π0)/N.\nNote that the proof in the source uses a different normalization. Th e result in this form is\nachieved by setting a=α/p,b=β/pin the notation of [RR08].\nCorollary B.2. Any function in the random feature RKHS is Lipschitz-contin uous and [f]Lip≤\n/ba∇dblf/ba∇dblHk. ThusHkembeds compactly into C0(sptP)by the Arzel` a-Ascoli theorem and a fortiori\nintoL2(P)for any compactly supported measure P.\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 2 3\nLetPbe a compactly supported data distribution on Rd. 
Then the kernel kacts onL2(P) by\nthe map\nK:L2(P)→L2(P),Ku(x) =ˆ\nRdu(x′)k(x,x′)P(dx′).\nComputing the eigenvalues of the kernel kfor a given parameter distribution π0and a given data\ndistribution Pis a non-trivial endeavor. The task simplifies considerably under the assumption\nof symmetry, but remains complicated. The following results are tak en from [Bac17, Appendix\nD], where more general results are proved for α-homogeneous activation for α≥0. We specify\nα= 1.\nLemma B.3. Assume that π0=P=α−1\nd·Hd|SdwhereSdis the Euclidean unit sphere, αdis\nits volume and σ=ReLU. Then the eigenfunctions of the kernel kare the spherical harmonics.\nThek-th eigenvalue λk(counted without repetition) occurs with the same multipli cityN(d,k)as\neigenfunctions to the k-th eigenvalue of the Laplace-Beltrami operator on the sphe re (the spherical\nharmonics). Precisely\nN(d,k) =2k+d−1\nk/parenleftbiggk+d−2\nd−1/parenrightbigg\nandλk=d−1\n2π2−kΓ(d/2)Γ(k−1)\nΓ(k/2)Γ(k+d+2\n2)\nfork≥2.\nWe can extract a decay rate for the eigenvalues counted with repe tition by estimating the\nheight and width of the individual plateaus of eigenvalues. Denote by µithe eigenvalues of k\ncounted as often as they occur, i.e.\nµi=λk⇔k−1/summationdisplay\nj=1N(d,j)<i≤k/summationdisplay\nj=1N(d,j).\nBy Stirling’s formula one can estimate that for fixed d\nλk∼k−d−3\n2\nask→ ∞where we write ak∼bkif and only if\n0<liminf\nk→∞ak\nbk≤limsup\nk→∞ak\nbk<∞.\nOn the other hand\nN(d,k) =d\nk/parenleftbiggk+d−1\nd/parenrightbigg\n=d\nk(k+d−1)...k\nd!∼(k+d)d−1\n(d−1)!.\nIn particular, if µi=λk, then\ni∼C(d)k/summationdisplay\nj=1jd−1∼C(d)ˆk+1\n1td−1dt∼C(d)kd.\nThus\nµi=λk∼k−d−3\n2∼i−d−3\n2d=i−1\n2+3\n2d.\nCorollary B.4. Consider the random feature model for ReLU activation when b oth parameter\nand data measure are given by the uniform distribution on the unit sphere. Then the eigenvalues\nof the kernel decay like i−1\n2+3\n2d.\nRemark B.5.It is easy to see that up to a multiplicative constant, all radially symme tric pa-\nrameter distributions π0lead to the same random feature kernel. This can be used to obtain a n\nexplicit formula in the case when π0is a standard Gaussian in ddimensions. Since konly de-\npends on the angle between xandx′, we may assume that x=e1andx′= cosφe1+sinφe2with\n24 WEINAN E AND STEPHAN WOJTOWYTSCH\nφ∈[0,π]. Now, one can use that the projection of the standard Gaussian o nto thew1w2-plane\nis a lower-dimensional standard Gaussian. Thus the kernel does no t depend on the dimension.\nAn explicit computation in two dimensions shows that\nk(x,x′) =π−φ\nπcosφ+sinφ\nπ,\nsee [CS09, Section 2.1].\nB.2.The Neural Tangent Kernel. The neural tangent kernel [JGH18] is a different model for\ninfinitely wide neural networks. For two layer networks, it is obtaine d as the limiting object in\na scaling regime for parameters which makes the Barron norm infinite . When training networks\non empirical risk, in a certain scaling regime between the number of da ta points, the number\nof neurons, and the initialization of parameters, it can be shown tha t parameters do not move\nfar from their initial position according to a parameter distribution ¯ π, and that the gradient\nflow optimization of neural networks is close to the optimization of a k ernel method for all times\n[DZPS18, EMW19d]. This kernel is called the neural tangent kernel (NTK) and is obtained\nas the sum of derivatives of the feature function with respect to a ll trainable parameters. It\nlinearizes the dynamics at the initial parameter distribution. 
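Before the tangent kernel of a network with one hidden layer is written out below, the closed form of Remark B.5 can be compared with a direct Monte-Carlo estimate. The sketch below uses ReLU features w ↦ σ(wᵀx) with standard Gaussian w and no bias, which is one concrete reading of the remark; since the remark determines the kernel only up to an overall constant (the normalization of π_0), the check is that the ratio of the Monte-Carlo kernel to the angular factor (π − φ)cos φ + sin φ does not depend on the angle φ. The dimension d = 5 and the sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_w = 5, 400_000
W = rng.normal(size=(n_w, d))            # w ~ N(0, I_d); bias set to 0 in this sketch

def angular_factor(phi):
    # Angular dependence of the kernel from Remark B.5 (up to an overall constant).
    return (np.pi - phi) * np.cos(phi) + np.sin(phi)

print(" phi     MC kernel    MC / angular factor (should be ~ constant)")
for phi in (0.25, 0.75, 1.5, 2.0, 2.5):
    x  = np.zeros(d); x[0] = 1.0
    xp = np.zeros(d); xp[0] = np.cos(phi); xp[1] = np.sin(phi)
    k_mc = (np.maximum(W @ x, 0.0) * np.maximum(W @ xp, 0.0)).mean()
    print(f"{phi:4.2f}   {k_mc:10.5f}   {k_mc / angular_factor(phi):10.5f}")
# For standard Gaussian features one expects the constant ratio 1/(2*pi) ~ 0.159.
```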
For net works with one hidden layer\nthis is\nk(x,x′) =ˆ\nR×Rd×R∇(a,w,b)/parenleftbig\naσ(wTx+b)/parenrightbig\n·∇(a,w,b)/parenleftbig\naσ(wTx′+b)/parenrightbig\n¯π(da⊗dw⊗db)\n=ˆ\nR×Rd×Rσ(wTx+b)σ(wTx′+b)\n+a2σ′(wTx+b)σ′(wTx′+b)/bracketleftbig\n/an}b∇acketle{tx,x′/an}b∇acket∇i}ht+1/bracketrightbig\n¯π(da⊗dw⊗db)\n=kRF(x,x′)+ˆ\nR×Rd×Ra2σ′(wTx+b)σ′(wTx′+b)(x,1)·(x,1)¯π(da⊗dw⊗db)\nwherekRFdenotes the random feature kernel with distribution P(w,b),♯¯π. The second term is\nobtained on the right hand side is a positive definite kernel in itself. Th is can be seen most easily\nby recalling that\n/summationdisplay\ni,jcicjˆ\nRd+2∇(w,b)/parenleftbig\naσ(wTxi+b)/parenrightbig\n·∇(w,b)/parenleftbig\naσ(wTxj+b)/parenrightbig\n¯π(da⊗dw⊗db)\n=ˆ\nRd+2∇(w,b)a2/parenleftBigg/summationdisplay\niciσ(wTxi+b)/parenrightBigg\n·∇(w,b)\n/summationdisplay\njcjσ(wTxj+b)\n¯π(da⊗dw⊗db)\n=ˆ\nRd+2/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle∇(w,b)a2/parenleftBigg/summationdisplay\niciσ(wTxi+b)/parenrightBigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle2\n¯π(da⊗dw⊗db)\n≥0.\nOn the other hand, if we assume that |a|=a0and|(w,b)|= 1 almost surely, we find that\n/summationdisplay\ni,jcicjˆ\nR×Rd×Ra2σ′(wTxi+b)σ′(wTxj+b)(xi,1)I(xj,1)T¯π(da⊗dw⊗db)\n=ˆ\nR×Rd×Ra2/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/summationdisplay\niciσ′(wTxi+b)/parenleftbigg\nxi\n1/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle2\n¯π(da⊗dw⊗db)\n≤a2\n0ˆ\nR×Rd×R/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/summationdisplay\niciσ′(wTxi+b)/parenleftbiggxi\n1/parenrightbigg\n·/parenleftbiggw\nb/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle2\n¯π(da⊗dw⊗db)\nKOLMOGOROV WIDTH: SHALLOW NETWORKS, RANDOM FEATURES, NTK 2 5\n=a2\n0ˆ\nR×Rd×R/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/summationdisplay\niciσ′(wTxi+b)(wTxi+b)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle2\n¯π(da⊗dw⊗db)\n=a2\n0ˆ\nR×Rd×R/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/summationdisplay\niciσ(wTxi+b)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle2\n¯π(da⊗dw⊗db)\nsinceσis positively one-homogeneous. Thus the NTK satisfies\nkRF≤k≤(1+|a0|2)kRF\nin the senseofquadraticforms. In particular, the eigenvaluesoft he NTKand the randomfeature\nkernel decay at the same rate. Clearly, in exchange for larger con stants it suffices to assume that\n(a,w,b) are bounded. In practice, the intialization of ( w,b) is Gaussian, which concentrates close\nto the Euclidean sphere of radius√\ndinddimensions.\nThe neural tangent kernel is also defined for deep networks, see for example [ADH+19,\nLXS+19, Yan19, DLL+18, EMWW19].\nReferences\n[ADH+19] S. Arora, S. S. Du, W. Hu, Z. Li, R. R. Salakhutdinov, and R. Wang. On exact computation with an\ninfinitely wide neural net. In Advances in Neural Information Processing Systems , pages 8139–8148,\n2019.\n[Bac17] F. Bach. Breaking the curse of dimensionality with c onvex neural networks. The Journal of Machine\nLearning Research , 18(1):629–681, 2017.\n[Bar93] A. R. Barron. Universal approximation bounds for su perpositions of a sigmoidal function. IEEE\nTransactions on Information theory , 39(3):930–945, 1993.\n[Bre11] H. Brezis. Functional analysis, Sobolev spaces and partial differenti al equations . Universitext.\nSpringer, New York, 2011.\n[CS09] Y. Cho and L. K. Saul. 
Kernel methods for deep learning. In Advances in neural information processing systems, pages 342–350, 2009.
[Cyb89] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
[DLL+18] S. S. Du, J. D. Lee, H. Li, L. Wang, and X. Zhai. Gradient descent finds global minima of deep neural networks. arXiv:1811.03804 [cs.LG], 2018.
[Dob10] M. Dobrowolski. Angewandte Funktionalanalysis: Funktionalanalysis, Sobolev-Räume und elliptische Differentialgleichungen. Springer-Verlag, 2010.
[DZPS18] S. S. Du, X. Zhai, B. Poczos, and A. Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv:1810.02054 [cs.LG], 2018.
[EMW18] W. E, C. Ma, and L. Wu. A priori estimates of the population risk for two-layer neural networks. Comm. Math. Sci., 17(5):1407–1425 (2019), arXiv:1810.06397 [cs.LG] (2018).
[EMW19a] W. E, C. Ma, and Q. Wang. A priori estimates of the population risk for residual networks. arXiv:1903.02154 [cs.LG], 2019.
[EMW19b] W. E, C. Ma, and L. Wu. Barron spaces and the compositional function spaces for neural network models. arXiv:1906.08039 [cs.LG], 2019.
[EMW19c] W. E, C. Ma, and L. Wu. Machine learning from a continuous viewpoint. arXiv:1912.12777 [math.NA], 2019.
[EMW19d] W. E, C. Ma, and L. Wu. A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics. Sci. China Math., https://doi.org/10.1007/s11425-019-1628-5, arXiv:1904.04326 [cs.LG] (2019).
[EMWW19] W. E, C. Ma, Q. Wang, and L. Wu. Analysis of the gradient descent algorithm for a deep neural network model with skip-connections. arXiv:1904.05263 [cs.LG], 2019.
[EW20] W. E and S. Wojtowytsch. On the Banach spaces associated with multi-layer ReLU networks of infinite width. arXiv:2007.15623 [stat.ML], 2020.
[FG15] N. Fournier and A. Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3-4):707–738, 2015.
[GKNV19] R. Gribonval, G. Kutyniok, M. Nielsen, and F. Voigtlaender. Approximation spaces of deep neural networks. arXiv:1905.01208 [math.FA], 2019.
[Hor91] K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991.
[JGH18] A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pages 8571–8580, 2018.
[Lor66] G. Lorentz. Approximation of Functions. Holt, Rinehart and Winston, New York, 1966.
[LXS+19] J. Lee, L. Xiao, S. Schoenholz, Y. Bahri, R. Novak, J. Sohl-Dickstein, and J. Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In Advances in neural information processing systems, pages 8570–8581, 2019.
[RR08] A. Rahimi and B. Recht. Uniform approximation of functions with random bases. In 2008 46th Annual Allerton Conference on Communication, Control, and Computing, pages 555–561. IEEE, 2008.
[SSBD14] S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
[Vil08] C. Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
[WE20] S. Wojtowytsch and W. E. Can shallow neural networks beat the curse of dimensionality?
A mean field training perspective. arXiv:2005.10815 [cs.LG], 2020.
[Yan19] G. Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv:1902.04760 [cs.NE], 2019.
Weinan E, Department of Mathematics and Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, USA
Email address: [email protected]
Stephan Wojtowytsch, Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544
Email address: [email protected]",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "JIrTLYVfb_h",
"year": null,
"venue": "CoRR 2020",
"pdf_link": "http://arxiv.org/pdf/2012.05420v3",
"forum_link": "https://openreview.net/forum?id=JIrTLYVfb_h",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the emergence of tetrahedral symmetry in the final and penultimate layers of neural network classifiers",
"authors": [
"Weinan E",
"Stephan Wojtowytsch"
],
"abstract": "A recent numerical study observed that neural network classifiers enjoy a large degree of symmetry in the penultimate layer. Namely, if $h(x) = Af(x) +b$ where $A$ is a linear map and $f$ is the output of the penultimate layer of the network (after activation), then all data points $x_{i, 1}, \\dots, x_{i, N_i}$ in a class $C_i$ are mapped to a single point $y_i$ by $f$ and the points $y_i$ are located at the vertices of a regular $k-1$-dimensional standard simplex in a high-dimensional Euclidean space. We explain this observation analytically in toy models for highly expressive deep neural networks. In complementary examples, we demonstrate rigorously that even the final output of the classifier $h$ is not uniform over data samples from a class $C_i$ if $h$ is a shallow network (or if the deeper layers do not bring the data samples into a convenient geometric configuration).",
"keywords": [],
"raw_extracted_content": "arXiv:2012.05420v3 [cs.LG] 4 Jun 2021ON THE EMERGENCE OF SIMPLEX SYMMETRY IN THE FINAL AND\nPENULTIMATE LAYERS OF NEURAL NETWORK CLASSIFIERS\nWEINAN E AND STEPHAN WOJTOWYTSCH\nAbstract. A recent numerical study observed that neural network class ifiers enjoy a large degree\nof symmetry in the penultimate layer. Namely, if h(x) =Af(x)+bwhereAis a linear map and\nfis the output of the penultimate layer of the network (after a ctivation), then all data points\nxi,1,...,x i,Niin a class Ciare mapped to a single point yibyfand the points yiare located\nat the vertices of a regular k−1-dimensional standard simplex in a high-dimensional Eucl idean\nspace.\nWe explain this observation analytically in toy models for h ighly expressive deep neural net-\nworks. In complementary examples, we demonstrate rigorous ly that even the final output of the\nclassifier his not uniform over data samples from a class Ciifhis a shallow network (or if the\ndeeper layers do not bring the data samples into a convenient geometric configuration).\n[2020]68T07, 62H30 Classification problem, deep learning, neural co llapse, cross entropy, geom-\netry within layers, simplex symmetry\n1.Introduction\nA recent empirical study [PHD20] took a first step towards investig ating the inner geometry of\nneural networks close to the output layer. In classification proble ms, the authors found that the\ndata in the final and penultimate layers enjoy a high degree of symme try. Namely, a neural network\nfunction hL:Rd→RkwithLlayers can be understood as a composition\n(1.1) hL(x) =AfL(x)+b\nwherefL:Rd→Rmis (the composition of a componentwise nonlinearity with) a neural ne twork\nwithL−1 layers, b∈RkandA:Rm→Rkis linear. In applications where hLwas trained by\nstochastic gradient descent to minimize softmax-crossentropylo ss to distinguish elements in various\nclassesC1,...,C k, the authors observed that the following became approximately tr ue in the long\ntime limit.\n•fLmaps all elements in a class Cito a single point yi.\n•The distance between the centers of mass of different classes in th e penultimate layer /ba∇dblyi−\nyj/ba∇dbldoes not depend on i/ne}ationslash=j.\n•LetM=1\nk/summationtextk\ni=1yibe the center of mass of the data distribution in the penultimate cen ter\n(normalizing the weight of data classes). Then the angle between yi−Mandyj−Mdoes\nnot depend on i/ne}ationslash=j.\n•Thei-th row of Ais parallel to yi−M.\nIn less precise terms, hLmaps the classes Cito the vertices of a regular standard simplex in a\nhigh-dimensional space. This phenomenon is referred to as ‘neural collapse’ in [PHD20]. In this\nnote, we consider the toy model where fLis merely a bounded measurable function and prove that\nDate: June 7, 2021.\n1\n2 WEINAN E AND STEPHAN WOJTOWYTSCH\nunder certain assumptions such simplex geometries are optimal. An in vestigation along the same\nlines has been launched separately in [MPP20].\nConversely,we showthat even the output hL(Ci) ofashallowneuralnetwork hLoveradata class\nCidoes not approach a single value ziwhen the parameters of hLare trained by continuous time\ngradient descent. 
Since a deep neural network is the composition o f a slightly less deep network and\na shallow neural network containing the output layer, these result s suggest that the hLcannot be\nexpected to be uniform over a data class unless a convenient geome tric configuration has already\nbeen reached two layers before the output.\nWe make the following observations.\n(1) Overparametrized networks can fit random labels at data point s [Coo18] and can be effi-\nciently optimized for this purpose in certain scaling regimes, see e.g. [D LL+18, DZPS18,\nEMW20]. The use of the class L∞(P;Rm) := (L∞(P))mas a proxy for very expressive deep\nneural networks thus can be justified heuristically from the static perspective of energy\nminimization (but not necessarily from the dynamical perspective of training algorithms).\nInpractice,thedatadistribution Pisestimatedonafinitesetofsamplepoints {x1,...,x N}\nand an empirical distribution PN=1\nN/summationtextN\ni=1δxi. A function fL∈L∞(P;Rm) determined\nby its values at the points x1,...,x N. A class of sufficiently complex neural networks\nwhich can fit any given set of outputs {y1,...,y N}for inputs {x1,...,x N}coincides with\nLp(P;Rm) for any 1 ≤p≤ ∞. The same is true for many other function models.\nIfPN=1\nN/summationtextN\ni=1δxior more generally, if all classes C1,...,C khave a positive distance\nto each other, a function f∈Lp(P;Rm) which is constant on every class can be extended\nto aC∞-function on Rd. Thus in realistic settings, all functions below can be taken to be\nfairly regular.\n(2) As the softmax cross-entropy functional does not have minim izers in sufficiently expressive\nscaling-invariant function classes, we need to consider norm bound ed classes.\nIn the hypothesis class given by the ball of radius RinL∞(P;Rm), the optimal map h\nsatisfiesh(x) =zifor allxin a data class Ciand the values ziform the vertices of a regular\nsimplex. More precisely, the statement is valid under the constraint /ba∇dblh(x)/ba∇dblℓp≤Rfor all\np∈(1,∞), but the precise location of the vertices depends on p. We refer to this as final\nlayer geometry.\nIfh:Rd→Rkis given by h(x) =Af(x) forf∈L∞(P;Rm) and a linear map A:\nRm→Rk, the following holds: If /ba∇dblA/ba∇dblL(ℓ2,ℓ2)≤1 and/ba∇dblf(x)/ba∇dblℓ2≤Rfor allx∈Rd, then\nany energy minimizer satisfies f(x) =yifor allx∈Ciwhere the outputs yiform the\nvertices of a regular standard simplex in a high-dimensional ambient s pace. We refer to\nthis as penultimate layergeometry. We note that similar results were obtained in a different\nframework in [LS20].\n(3) Considerations on the final layer geometry are generally indepe ndent of the choice of norm\nonRkwithin the classof ℓp-norms, while the penultimate layergeometryappears to depend\nspecifically on the use of the Euclidean norm. While the coordinate-wis e application of a\none-dimensional activation function is not hugely compatible with Euc lidean geometry (or\nat least no more compatible than with ℓp-geometry for any p∈[1,∞]), the transition from\nthe penultimate layer to the final layer is described by a single affine ma py/mapsto→Ay+b.\nIfAandbare initilized from a distribution compatible with Euclidean geometry (e.g . 
a\nrotation-invariant Gaussian) and optimized by an algorithm such as g radient descent which\nis based on the Euclidean inner product, then the use of Euclidean ge ometry for ( A,b) is\nwell justified.\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS 3\nIn deeper layers, the significance of Euclidean geometry becomes m ore questionable.\nEven for the map f:Rd→Rm, it is unclear whether the Euclidean norm captures the\nconstraints on fwell.\n(4) Ifh(x) =/summationtextm\ni=1aiσ(wT\nix+bi) is a shallow neural network classifier and the weights\n(ai,wi,bi) are optimized by gradient descent, then in general hdoesnotconverge to a\nclassifier which is constant on different data classes (although the h ypothesis class contains\nfunctions with arbitrarily low risk which are constant on the different classesCi). This is\nestablished in different geometries:\n(a) Inthefirstcase, σistheReLUactivationfunctionandtheclassesarelinearlyseparable .\nUnder certain conditions, gradient descent approaches a maximum margin classifier,\nwhich can be a linear function and thus generally non-constant over the data classes.\n(b) In the second case, σis constant for large arguments and there are three data points\nx1,x2,x3on a line where x1,x3belong to the same class, but the middle point x2\nbelongs to a different class. Then the values of hatx1,x2,x3cannot be chosen inde-\npendently due to the linear structure of the first layer, and the he uristic behind the\ntoy model does not apply.\nNote that his of the form h=Af, butf(x) =σ(Wx) is not sufficiently expressive for the\nanalysis of the penultimate layer to apply.\nThe theoretical analysis raises further questions. As the expres sivity of the hypothesis class\nand the ability to set values on the training set with little interaction be tween different point\nevaluationsseemscrucialtothe‘neuralcollapse’phenomenon, we mustquestionwhetherthissimple\ngeometric configuration is in fact desirable, or merely the optimal co nfiguration in a hypothesis\nclass which is too large to allow any statistical generalization bounds. Such concerns were already\nraised in [ESA20]. While the latter possibility is suggested by the theore tical analysis, it should\nbe emphasized that in the numerical experiments in [PHD20] solutions with good generalization\nproperties are found. This compatibility could be explained by conside ring a hypothesis class which\nis not as expressive as L∞(P;Rm), but contains a function which attains a desired set of values on\na realistic data set.\nIt should be noted that the final layer results apply to any sufficient ly expressive function class,\nnot just neural networks. The results for the penultimate layer a pply to classes of classifiers which\narecompositionsofalinearfunctionandafunctioninaveryexpress ivefunctionclass. Inbothcases,\nwe consider (norm-constrained) energy minimizers, not training dy namics. If the norm constraints\nare meaningful for a function model and an optimization algorithm ca n find the minimizers, the\nanalysis applies in the long time limit, but the dynamics would certainly dep end on the precise\nfunction model. This coincides with the situation considered by [PHD20 ], in which the cross-\nentropy is close to zero after significant training.\nIfh=Afandfis not sufficiently expressive (as in two-layer neural networks), we observe that\nclassifier collapse does not occur, even in the final layer. 
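The norm-constrained final-layer toy problem described in item (2) above can also be explored numerically. The sketch below assigns one free output vector z_i ∈ R^k to each class (equal class weights are assumed), minimizes the softmax cross-entropy by projected gradient descent under the Euclidean constraint |z_i|_2 ≤ R, and reports the geometry of the result; the values k = 5, R = 5, the step size and the iteration count are arbitrary. This is a sketch of the abstract optimization problem, not of any particular network architecture; since the problem decouples across classes and is convex in each z_i, the centered outputs are expected to approach a regular simplex with pairwise cosine −1/(k − 1).

```python
import numpy as np

rng = np.random.default_rng(0)
k, R, lr, steps = 5, 5.0, 0.5, 5000

# One free output vector z_i in R^k per class (the L^infty(P) toy model, where
# the values on different classes can be chosen independently), |z_i|_2 <= R.
Z = rng.normal(size=(k, k))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

for _ in range(steps):
    grad = softmax(Z) - np.eye(k)          # gradient of the cross-entropy of class i in z_i
    Z -= lr * grad
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    Z *= np.minimum(1.0, R / norms)        # projection onto the ball of radius R

M = Z.mean(axis=0)
C = Z - M
cos = (C @ C.T) / np.outer(np.linalg.norm(C, axis=1), np.linalg.norm(C, axis=1))
print("norms of the optimized outputs (should all be ~ R):", np.round(np.linalg.norm(Z, axis=1), 3))
print("pairwise cosines of the centred outputs, expected -1/(k-1) =", -1 / (k - 1))
print(np.round(cos, 3))
```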
Whether t here are further causes driving\nclassifier collapse in deep neural networks remains to be seen.\nWe believe that further investigation in this direction is needed to und erstand the following:\nIs neural collapse observed on random data sets or real data set s with randomly permuted labels?\nDoes it occuralsoontest dataorjust trainingdata? Isneural colla pseobservedfor ReLUactivation\nfunctions, oronly foractivation functions which tend to a limit at pos itive and negativeinfinity? Do\nthe outputs over different classes yiattain a regular simplex configuration also if the weights of the\ndifferent data classes are vastly different? Is neural collapse obse rved if a parameter optimization\nalgorithmis used which doesnot respectEuclidean geometry(e.g. an algorithmwith coordinatewise\n4 WEINAN E AND STEPHAN WOJTOWYTSCH\nlearning rates such as ADAM)? The question when neural collapse oc curs and whether it helps\ngeneralization in deep learning remains fairly open.\nThe article is structured as follows. In Section 2, we rigorously intro duce the problem we will\nbe studying and obtain some first properties. In Sections 3 and 4, w e study a toy model for the\ngeometry of the output layer and penultimate layer of a neural net work classifier respectively. In\nSection 5, we present analytic examples in simple situations where neu ral network classifiers behave\nmarkedly differently and where the toy model analysis does not apply .\n1.1.Notation. We consider classifiers h:Rd→Rkin a hypothesis class H. Often, hwill be\nassumed to be a general function on a finite set with norm-bounded output, or the composition of\nsuch a function f:Rd→Rmand a linear map A:Rm→Rkfor some m≥1. Variables in Rd,Rm\nandRkare denoted by x,yandzrespectively.\n2.Preliminaries\n2.1.Set-up. A classification problem is made up of the following ingredients:\n(1) Adata distribution , i.e. a probability measure PonRd.\n(2) Alabel function , i.e. aP-measurable function ξ:Rd→ {e1,...,e k} ⊂Rk. We refer to the\nsetsCi=ξ−1({ei}) as the classes.\n(3) Ahypothesis class , i.e. a class Hof functions h:Rd→Rkford≫1 andk≥2.\n(4) Aloss function ℓ:Rk×Rk→[0,∞).\nWe always assume that H ⊆L1(P;Rk) and often even H ⊆L∞(P;Rk). These ingredients are\ncombined in the risk functional\n(2.1) R:H →[0,∞),R(h) =/integraldisplay\nRdℓ/parenleftbig\nh(x),ξx/parenrightbig\nP(dx),\nwhich is approximated by the empirical risk functional\n/hatwideRn(h) =1\nnn/summationdisplay\ni=1ℓ/parenleftbig\nh(xi),ξi/parenrightbig\nwherexiare samples drawn from the distribution Pandξi=ξxi. Since we can write\n/hatwideRn(h) =/integraldisplay\nRdℓ/parenleftbig\nh(x),ξx/parenrightbig\nPn(dx),Pn=1\nnn/summationdisplay\ni=1δxi,\nwe do not differentiate between empirical risk and (population) risk in this article. This allows us\nto organically incorporate that all results are independent of the n umber of data points. We focus\non thesoftmax cross entropy risk functional associated to the loss function\n(2.2) ℓ/parenleftbig\nh,y/parenrightbig\n=−log/parenleftigg\nexp(h·y)\n/summationtextk\ni=1exp(h·ek)/parenrightigg\n.\nThis lossfunction allowsthe followingprobabilisticinterpretation: For givenagivenclassifier h∈ H\nand data point x∈Rd, the vector πwith entries\nπi(x) :=exp(h(x)·ei)\n/summationtextk\nj=1exp(h(x)·ej)\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS 5\nis a counting density on the set of labels {1,...,k}, depending on the input x. 
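As an illustration (ours, not part of the original text), the following minimal NumPy sketch evaluates the counting density just described and the loss (2.2) for a toy output vector; all names and numbers are illustrative.

```python
import numpy as np

def softmax_probabilities(h):
    # counting density pi_i(h) = exp(h_i) / sum_j exp(h_j), computed stably
    h = h - np.max(h)            # shifting h leaves the density unchanged
    e = np.exp(h)
    return e / e.sum()

def cross_entropy_loss(h, label):
    # softmax cross-entropy ell(h, e_label) = -log pi_label(h), cf. (2.2)
    return -np.log(softmax_probabilities(h)[label])

h = np.array([2.0, -1.0, 0.5])       # toy classifier output for k = 3 labels
print(softmax_probabilities(h))      # approx. [0.79, 0.04, 0.18]
print(cross_entropy_loss(h, 0))      # approx. 0.24, small since label 0 dominates
```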
The function
$$\Phi:\mathbb{R}^k\to\mathbb{R}^k,\qquad \Phi(h)=\left(\frac{\exp(h\cdot e_1)}{\sum_{i=1}^k \exp(h\cdot e_i)},\dots,\frac{\exp(h\cdot e_k)}{\sum_{i=1}^k \exp(h\cdot e_i)}\right)$$
which converts a k-dimensional vector into a counting density is referred to as the softmax function since it approximates the maximum coordinate function of $h$ for large inputs. The cross-entropy (Kullback-Leibler divergence) of this distribution with respect to the distribution $\bar\pi(x)$ which gives the correct label with probability 1 is precisely
$$-\sum_{j=1}^k \log\left(\frac{\pi_j(x)}{\bar\pi_j(x)}\right)\bar\pi_j(x) = -\log\left(\frac{\pi_{i(x)}(x)}{1}\right)\cdot 1 = -\log\left(\frac{\exp\big(h(x)\cdot\xi_x\big)}{\sum_{i=1}^k\exp\big(h(x)\cdot e_i\big)}\right)$$
since $\bar\pi_j=\delta_{j,i(x)}$ and $0\cdot\log(\infty)=0$ in this case by approximation. The risk functional thus is the average (integral) of the pointwise cross-entropy of the softmax counting densities with respect to the true underlying distribution.

Note the following: $\ell>0$, but if $h$ is such that $h(x)\cdot\xi_x > \max_{e_i\neq\xi_x} h(x)\cdot e_i$ for P-almost every $x$, then
$$\lim_{\lambda\to\infty}\mathcal{R}(\lambda h) = \lim_{\lambda\to\infty} -\int_{\mathbb{R}^d}\log\left(\frac{\exp\big(\lambda h(x)\cdot\xi_x\big)}{\sum_{i=1}^k\exp\big(\lambda h(x)\cdot e_i\big)}\right)P(dx) = 0.$$
Thus the cross-entropy functional does not have minimizers in suitably expressive function classes which are cones (i.e. $f\in\mathcal{H},\ \lambda>0 \Rightarrow \lambda f\in\mathcal{H}$). So to obtain meaningful results by energy minimization, we must consider
(1) a dynamical argument concerning a specific optimization algorithm, or
(2) a restricted hypothesis class with meaningful norm bounds, or
(3) a higher order expansion of the risk.
We follow the first line of inquiry for shallow neural networks in Section 5 and the second line of inquiry for toy models for deep networks in Sections 3 and 4.

2.2. Convexity of the loss function. For the following, we note that the softmax cross-entropy loss function has the following convexity property.

Lemma 2.1. The function
$$\Phi_j:\mathbb{R}^k\to\mathbb{R},\qquad \Phi_j(z) = -\log\left(\frac{\exp(z_j)}{\sum_{i=1}^k\exp(z_i)}\right) = \log\left(\sum_{i=1}^k\exp(z_i)\right) - z_j$$
is convex for any $1\le j\le k$ and strictly convex on hyperplanes $H_\alpha$ of the form
$$H_\alpha = \Big\{z\in\mathbb{R}^k : \sum_{j=1}^k z_j = \alpha\Big\}.$$
For the sake of completeness, we provide a proof in the Appendix. Since $\Phi_j\big(z+\lambda(1,\dots,1)\big) = \Phi_j(z)$ for all $\lambda\in\mathbb{R}$, we note that $\Phi_j$ is not strictly convex on the whole space $\mathbb{R}^k$.

3. Heuristic geometry: final layer

3.1. Collapse to a point. In this section, we argue that the output $h(C_i)$ of the classifier should be a single point for all classes $C_i$, $i=1,\dots,k$ if the hypothesis class is sufficiently expressive. We will discuss the penultimate layer below.

Lemma 3.1. Let $h\in\mathcal{H}$ and set
$$z_i := \frac{1}{|C_i|}\int_{C_i} h(x')\,P(dx'),\qquad \bar h(x) = z_i \ \text{ for all } x\in C_i.$$
Then $\mathcal{R}(\bar h)\le\mathcal{R}(h)$ and equality holds if and only if there exists a function $\lambda\in L^1(P)$ such that $h-\bar h=\lambda\,(1,\dots,1)$ P-almost everywhere.

The reasoning behind the Lemma is that
$$\int_{C_i}\Phi_i\big(h(x)\big)\,P(dx) \approx \int_{C_i}\Phi_i(z_i)+\nabla\Phi_i(z_i)\cdot\big(h(x)-z_i\big)+\tfrac12\big(h(x)-z_i\big)^T D^2\Phi_i(z_i)\big(h(x)-z_i\big)\,P(dx)$$
$$= \int_{C_i}\Phi_i(z_i)\,P(dx)+\nabla\Phi_i(z_i)\cdot\int_{C_i}\big(h(x)-z_i\big)\,P(dx)+\tfrac12\int_{C_i}\big(h(x)-z_i\big)^T D^2\Phi_i(z_i)\big(h(x)-z_i\big)\,P(dx)$$
$$\ge \int_{C_i}\Phi_i(z_i)\,P(dx)$$
since the first order term vanishes. A summation over $i$ establishes the result.
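A quick numerical sanity check of this Jensen-type argument (our addition; the Gaussian toy outputs and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def Phi(z, j):
    # Phi_j(z) = log(sum_i exp(z_i)) - z_j as in Lemma 2.1, computed stably
    m = np.max(z)
    return m + np.log(np.sum(np.exp(z - m))) - z[j]

k, n = 4, 1000
outputs = rng.normal(size=(n, k))    # toy outputs h(x) on a single class with label j = 0
z_bar = outputs.mean(axis=0)         # the class average z_i of Lemma 3.1

mean_of_losses = np.mean([Phi(z, 0) for z in outputs])
print(mean_of_losses >= Phi(z_bar, 0))    # True: Jensen's inequality for the convex Phi_0
```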
A rigorous proof\nusing Jensen’s inequality can be found in the appendix.\nThus if a class Cjis mapped to a set h(Cj)⊆Rkwith a prescribed center of mass, different\nclasses are mapped to the same centers of mass, it is energetically f avorable to reduce the variance\nto the point that h(Cj) is a single point. Whether or not this is attainable depends primarily on\nthe hypothesis class H, but a very expressive class like deep neural networks is likely to allow this\ncollapse to a single point.\nCorollary 3.2. IfH=L∞(P;V)is the class of bounded P-measurable functions which take values\nin a compact convex set V⊂Rk, then a minimizer horRinHcan be taken to map the class Ci\nto a single point zi∈Vfor alli= 1,...,k, and all other minimizer differ from honly in direction\n(1,...,1).\n3.2.Simplex configuration. In this section, we discuss the emergence of the simplex configura-\ntionunder theassumptionthat theeveryclassgetsmappedtoasin glepoint zi∈Rk, orequivalently\nthat each class consists of a single data point. Again, we consider th elast layer problem : Assume\nthat\n• X={x1,...,x d},\n• His the class of functions from Xto the Euclidean ball BR(0) inRk.\nLetPbe a probability measure on Xandpi:=P({xi}). We wish to solve the minimization problem\nh∗∈argminh∈HR(h) where\nR(h) =/integraldisplay\nX−log/parenleftigg\nexp(h(x)·ξx)\n/summationtextd\ni=1exp(h(x)·ei)/parenrightigg\nP(dx) =−d/summationdisplay\ni=1pilog/parenleftigg\nexp(h(xi)·ei)\n/summationtextd\nj=1exp(h(xi)·ej)/parenrightigg\n.\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS 7\nDue to our choice of hypothesis class, there is no interaction betwe enh(xi) andh(xj), so we can\nminimize the sum term by term:\nzi:=h(xi)∈argmin\nz∈BR(b)/parenleftigg\n−log/parenleftigg\nexp(z·ei)\n/summationtextd\nj=1exp(z·ej)/parenrightigg/parenrightigg\n= min\nz∈BR(b)Φi(z)\nwhere Φ i(z) = log/parenleftig/summationtextk\nj=1exp(zk)/parenrightig\n−ziis as in Lemma 2.1.\nLemma 3.3. For every ithere exists a unique minimizer ziofΦiinBR(0)andzi=αei+β/summationtext\nj/negationslash=iej\nforα,β∈Rwhich do not depend on i.\nSince Φ/parenleftbig\nz+λ(1,...,1)/parenrightbig\n= Φ(z) for allλ∈R, the same result holds for the ball BR/parenleftbig\nλ(1,...,1)/parenrightbig\nwith any λ∈R. We can determine the minimizers by exploiting the relationships\nα2+(k−1)β2=R2, α+(k−1)β= 0\nwhich are obtained from the Lagrange-multiplier equation (A.1) in the proof of Lemma 3.3. The\nequations reduce to\nα= (k−1)/radicalbigg\nR2−α2\nk−1=/radicalbig\n(k−1)(R2−α2)⇒α2= (k−1)(R2−α2)\nand ultimately\n(3.1) α2=k−1\nkR2⇒α=/radicalbigg\nk−1\nkR, β=−1\nk−1α=−R/radicalbig\nk(k−1).\nRemark 3.4.Lemma 3.1 remains true when BR(0) is the ball of radius R >0 with respect to an\nℓp-norm on Rkfor 1< p <∞(with different values for αandβ) – see appendix for further details.\nCorollary 3.5. IfHis the unit ball in L∞(P;Rk)whereRkis equipped with the ℓp-norm for\n1< p <∞, then any minimizer hofRinHsatisfies that h(Ci)is a single point zifor all\ni= 1,...,kand the points ziform the vertices of a regular standard simplex.\nRemark 3.6.A major simplification in our analysis was the restriction to one-point c lasses and\ngeneral functions on the finite collection of points or more generally to bounded P-measurable func-\ntions. In other hypothesis classes, the point values h(xi) andh(xj) cannot be chosen independently.\nIt is therefore no longer possible to minimize all terms in the sum individu ally, and trade-offs are ex-\npected. 
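To make the configuration (3.1) concrete, the following small script (ours; the values k = 5 and R = 1 are arbitrary) constructs the vertices and checks the constraints used above.

```python
import numpy as np

def simplex_vertices(k, R):
    # z_i = alpha e_i + beta sum_{j != i} e_j with alpha, beta from (3.1)
    alpha = np.sqrt((k - 1) / k) * R
    beta = -R / np.sqrt(k * (k - 1))
    Z = np.full((k, k), beta)
    np.fill_diagonal(Z, alpha)
    return Z

k = 5
Z = simplex_vertices(k, R=1.0)
print(np.allclose(np.linalg.norm(Z, axis=1), 1.0))   # each z_i lies on the sphere of radius R
print(np.allclose(Z.sum(axis=1), 0.0))               # each z_i lies in the hyperplane (1,...,1) . z = 0
dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
print(np.allclose(dists[~np.eye(k, dtype=bool)], dists[0, 1]))   # all pairwise distances agree
```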
In particular, while our analysis was independent of the weig htpi=P(Ci) of the individual\nclasses, these are expected to influence trade-offs in real applica tions.\nNevertheless, we record that simplex configurations are favored for hypothesis classes Hwith the\nfollowing two properties:\n(1)His expressive enough to collapse classes to single points and to choos e the values on\ndifferent classes almost independently, and\n(2) functions in Hrespect the geometry of Rkequipped with an ℓp-norm in a suitable manner.\n4.Heuristic geometry: penultimate layer\nAbove, we obtained rigorous results for the final layer geometry u nder heuristic assumptions. In\nthis section, we consider a hypothesis class Hin which functions can be decomposed as\nhf,A,b(x) =Af(x)+bwheref:Rd→Rm, A:Rm→Rk, b∈Rk\nand we are interested in the geometry of fandA. Typically, we imagine the case that m≫k.\n8 WEINAN E AND STEPHAN WOJTOWYTSCH\n4.1.Collapse to a point. We have given a heuristic proof above that it is energetically favorab le\nto contract h(Ci) to a single point zi∈Rkunder certain conditions. Since A:Rm→Rkhas a\nnon-trivial kernel for m > k, this is a weaker statement than claiming that fmapsCito a single\npointyi∈Rm. We note the following: Vi= (A·+b)−1(zi) is anm−k-dimensional affine subspace\nofRm. In particular, a strictly convex norm (e.g. an ℓp-norm for 1 < p <∞) has a unique minimum\nyi∈Vi. Thus if we subscribe to the idea that fis constrained by an ℓp-norm, it is favorable for f\nto collapse Cito a single point yi∈Rm.\nHeuristically, this situation arises either if it is more expensive to incre ase the norm of fthan\nchange its direction, or if ( A,b) evolve during training and it is desirable to bring f(x) towards the\nminimum norm element of ( A·b)−1(zi) to increase the stability of training. The first consideration\napplies when A,bare fixed while the second relies on the variability of ( A,b). Their relative\nimportance could therefore be assessed numerically by initializing the final layer variables in a\nsimplex configuration and making them non-trainable.\nIfσis a bounded activation function, the direction of the final layer out put depends on the\ncoefficients of all layers in a complicated fashion, while its magnitude mo stly depends on the final\nlayer coefficients. We can imagine gradient flows as continuous time ve rsions of the minimizing\nmovements scheme\nθn+1∈argmin\nθ1\n2η/ba∇dblθn−θ/ba∇dbl2+R/parenleftbig\nh(θn,·)/parenrightbig\nwhereh(θ,·) is a parameterized function model. Using the unweighted Euclidean n orm for the\ngradient flow, we allow the same budget to adjust final layer and dee p layer coefficients. It may\ntherefore be easier to adjust the direction of the output than th e norm. For ReLU activation on the\nother hand, the magnitude of the coefficients in all layers combines t o an output in a multiplicative\nfashion. It may well be that neural collapse is more likely to occur for activation functions which\ntend to a finite limit at positive and negative infinity.\nIn section 5.2, we present examples which demonstrates that if all d ata points are not collapsed\nto a single point in the penultimate layer, they may not collapse to a sing le point in the final layer\neither when the weights of a neural network are trained by gradien t descent. This is established in\ntwo different geometries for different activation functions\n4.2.Simplex configuration. We showed above that anyℓp-geometry leads to simplex configura-\ntions in the last layer for certain toy models. 
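The minimizing movements scheme mentioned above can be illustrated as follows (our sketch; the quadratic toy risk and the inner gradient-descent solver are illustrative choices and not part of the original argument).

```python
import numpy as np

def proximal_step(theta, risk_grad, eta, inner_steps=200, lr=1e-2):
    # one minimizing-movements step: approximately minimize
    #   ||theta - theta_new||^2 / (2 eta) + R(theta_new)
    # by gradient descent on the proximal objective
    theta_new = theta.copy()
    for _ in range(inner_steps):
        grad = (theta_new - theta) / eta + risk_grad(theta_new)
        theta_new -= lr * grad
    return theta_new

risk_grad = lambda th: th            # toy risk R(theta) = ||theta||^2 / 2
theta = np.array([1.0, -2.0])
for _ in range(10):                  # ten steps of size eta = 0.1, i.e. time ~ 1
    theta = proximal_step(theta, risk_grad, eta=0.1)
print(theta)                                  # approx. [0.39, -0.77]
print(np.array([1.0, -2.0]) * np.exp(-1.0))   # gradient-flow value, approx. [0.37, -0.74]
```

For small step size eta the iterates track the gradient flow, which is the sense in which the scheme is used heuristically above.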
When considering the ge ometry of the penultimate\nlayer, we specifically consider ℓ2-geometry. This is justified for A,bsince the parameters are typi-\ncally initialized according to a normal distribution (which is invariant und er general rotations) and\noptimized by (stochastic) gradient descent, an algorithm based on the Euclidean inner product. For\ncompatibility purposes, also the output of the preceding layers fshould be governed by Euclidean\ngeometry.\nAgain, as a toy model we consider the case of one-point classes. To simplify the problem, we\nfurthermore suppress the bias vector of the last layer. Let\n(1)X={x1,...,x k} ⊂Rd,\n(2)f:X →BR(0)⊆Rm, and\n(3)A:Rm→Rklinear.\nAs before BR(0) denotes the Euclidean ball of radius R >0 centered at the origin in Rm. We\ndenoteh(x) =Af(x),yi:=f(xi)∈Rmandzi:=h(xi)∈Rk. As we suppressed the bias of the last\nlayer, we could normalize the center of mass in the penultimate layer t o be1\nk/summationtextk\ni=1yi= 0. Instead,\nwe make the (weaker) assumption that yi∈BR(0) for some R >0 and all i= 1,...,k.\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS 9\nWe assume that the outputs h(xi) are in the optimal positions in the last layer and show that if\nAhas minimal norm, also the outputs f(xi) in the penultimate layer are located at the vertices of\na regular standard simplex. Denote by\n/ba∇dblA/ba∇dblL(ℓ2,ℓ2)= max\n/bardblx/bardblℓ2≤1/ba∇dblAx/ba∇dblℓ2\n/ba∇dblx/ba∇dblℓ2\nthe operator norm of the linear map Awith respect to the Euclidean norm on both domain and\nrange.\nLemma 4.1. Letm≥k−1andyi∈BR(0)⊆RmandA:Rm→Rklinear such that Ayi=zi\nwhereziare the vertices of the regular standard simplex described i n Lemma 3.1 and (3.1). Then\n(1) the center of mass of outputs yioffis1\nk/summationtextk\ni=1yi= 0,\n(2)/ba∇dblA/ba∇dblL(ℓ2,ℓ2)≥1, and\n(3)/ba∇dblA/ba∇dblL(ℓ2,ℓ2)= 1if and only if\n(a)Ais an isometric embedding of the k−1-dimensional subspace spanned by {y1,...,y k}\nintoRkand\n(b)yiare vertices of a regular standard simplex with the same side lengths.\nThe proof is given in the appendix. We conclude the following.\nCorollary 4.2. For any m≥k−1, consider the hypothesis class\nH=/braceleftbigg\nh:Rd→Rk/vextendsingle/vextendsingle/vextendsingle/vextendsingleh=Afwheref:Rd→RmisP−measurable ,/ba∇dblf(x)/ba∇dblℓ2≤RP−a.e.\nA:Rm→Rkis linear, /ba∇dblA/ba∇dblL(ℓ2,ℓ2)≤1/bracerightbigg\n.\nThen a minimizer h∈ HofRsatisfies h=Afwhere\n(1) there exist values yi∈Rmsuch that f(x) =yifor almost every x∈Ci,\n(2) the points yiare located at the vertices of a regular k−1-dimensional standard simplex in\nRm,\n(3) the center of mass of the points yi(with respect to the uniform distribution) is at the origin,\nand\n(4)Ais an isometric embedding of the k−1-dimensional space spanned by {y1,...,y k}into\nRk.\nRemark 4.3.The restriction to the Euclidean case is because in Euclidean geometr y, anyk−1-\ndimensional subspace of Rdis equipped with the Euclidean norm in a natural way. 
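A small numerical illustration of Lemma 4.1 and Corollary 4.2 (ours; the dimensions k = 4, m = 7 and the random rotation are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
k, m, R = 4, 7, 1.0

# simplex vertices z_i in R^k as in (3.1)
alpha = np.sqrt((k - 1) / k) * R
beta = -R / np.sqrt(k * (k - 1))
Z = np.full((k, k), beta) + (alpha - beta) * np.eye(k)

# penultimate-layer outputs y_i: an isometric copy of the z_i inside R^m (random rotation)
Q, _ = np.linalg.qr(rng.normal(size=(m, m)))     # random orthogonal matrix
Y = np.hstack([Z, np.zeros((k, m - k))]) @ Q.T   # y_i = Q (z_i, 0, ..., 0)

# the linear map A with A y_i = z_i, given by the corresponding co-isometry
A = np.hstack([np.eye(k), np.zeros((k, m - k))]) @ Q.T

print(np.allclose(Y @ A.T, Z))                   # A y_i = z_i
print(np.isclose(np.linalg.norm(A, 2), 1.0))     # operator norm of A equals 1, cf. Lemma 4.1
print(np.allclose(Y.sum(axis=0), 0.0))           # center of mass of the y_i at the origin
print(np.allclose(np.linalg.norm(Y, axis=1), R)) # y_i on the sphere of radius R
```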
For other\nℓp-spaces, the restriction of the ℓp-norm is not a norm of ℓq-type and we cannot apply Lemma 3.3.\nThus, we conclude that a simplex geometryis desirable alsoin the penu ltimate layerof afunction\nh(x) =Af(x) if\n(1) the function class Fin which fis chosen and the linear matrix class in which Ais chosen\nrespect the Euclidean geometry of Rm,\n(2)Fis sufficiently expressive to collapse all data points in the class Cito a single point yiand\n(3)Fis so expressive that yiandyjcan be chosen mostly independently.\n5.Caveats: Binary classification using two-layer neural net works\nIn this section we consider simple neural network classifier models an d data sets on which we can\nshow that the classes are notcollapsed into single points when the model parameters are trained b y\ngradient descent, despite the fact that the function class is suffic iently expressive. This is intended\n10 WEINAN E AND STEPHAN WOJTOWYTSCH\nas a complementary illustration that the heuristic considerations of Sections 3 and 4 may or may\nnot be valid, depending on factors which are yet to be understood.\nDeep neural networks with many nonlinearities can be a lot more flexib le than shallow neural\nnetworks, and the intuition we built up above does not quite apply her e. However, we emphasize\nthat a deep neural network hcan be decomposed as h=g◦fwheref:Rd→Rkis a deep neural\nnetwork and g:Rk→Ris a shallow neural network. All results should therefore be conside red also\nvalid in deep classification models where only the outermost two layers are trained. This is a more\nrealistic assumption in applications where large pretrained models are used to preprocess data and\nonly the final layers are trained for a specific new task. Similarly, we n ote that this indicates that\nif data is non-collapsed two layers before the output, then it may no t collapse in the output layer\neither.\nThe examples we consider concern binaryclassification, i.e. all functions take values in Rrather\nthan a higher-dimensional space. The label function x/mapsto→ξxtakes values in {−1,1}instead of the\nset of basis vectors. For the sake of convenience, the data below are assumed to be one-dimensional,\nbut similarresultsareexpected toholdwhen datain ahigh-dimensiona lspaceiseitherconcentrated\non a line or classification only depends on the projection to a line.\n5.1.Two-layer ReLU-networks in the mean field scaling. Consider the mean field scaling\nof shallow neural networks, where a network function is described as\nf(x) =1\nmm/summationdisplay\ni=1aiσ(wT\nix+bi) rather than f(x) =m/summationdisplay\ni=1aiσ(wT\nix+bi).\nIn this regime, it is easy to take the infinite width limit\n(5.1) f(x) =/integraldisplay\nRk×Rd×Raσ(wTx+b)π(da⊗dw⊗db)\nwith general weight distributions πonRk+d+1. We denote the functions as represented in (5.1)\nbyhπ. Finite neural networks are a special case in these considerations with distribution π=\n1\nm/summationtextm\ni=1δ(ai,wi,bi). We recall the following results.\nProposition 5.1. 
[CB18]All weights (ai,wi,bi)evolve by the gradient flow of\n(ai,wi,bi)m\ni=1/mapsto→ R/parenleftigg\n1\nmm/summationdisplay\ni=1aiσ(wT\nix+bi)/parenrightigg\nin(Rk+d+1)mif and only if the empirical distribution π=1\nm/summationtextm\ni=1δ(ai,wi,bi)evolves by the Wasser-\nstein gradient flow of\n(5.2) π/mapsto→ R(hπ)\n(up to time rescaling).\nConsider specifically σ(z) = max{z,0}andk= 1 with the risk functional\nR(h) =−/integraldisplay\nRdlog/parenleftbiggexp(−h(x)·ξx)\nexp(h(x))+exp( −h(x))/parenrightbigg\nP(dx).\nThe following result applies specifically to the Wasserstein gradient flo w of certain continuous\ndistributions, which can be approximated by finite sets of weights.\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS 11\nProposition 5.2. [CB20]Assume that π0is such that |a|2≤ |w|2+|b|2almost surely and such\nthat\nπ0({(w,b)∈Θ})>0\nfor every open cone ΘinRd+1. Letπtevolve by the Wasserstein gradient flow of (5.2)with initial\ncondition π0. Then (under additional technical conditions), the follow ing hold:\n(1)ξxhπt(x)→+∞forP-almost every x.\n(2) There exist\n(5.3) π∗∈argmax/braceleftbigg\nmin\nx∈sptP/parenleftbig\nξx·hπ(x)/parenrightbig/vextendsingle/vextendsingle/vextendsingle/vextendsingleπs.t./integraldisplay\nRd+2|a|/bracketleftbig\n|w|+|b|/bracketrightbig\ndπ≤1/bracerightbigg\nand a normalizing function µ: [0,∞)→(0,∞)such that µ(t)hπt→hπ∗locally uniformly\nonRd.\nRemark 5.3.We callh∗themaximum margin classifier in Barron space. Both the normalization\ncondition in (5.3) and the normalizing function µare related to the Barron norm or variation\nnorm of classifier functions. The existence of a minimizer in (5.3) is gua ranteed by compactness.\nExistence of a limit of πtin some weak sense has to be assumed a priori in [CB18].\nRemark5.4.The open cone condition is satisfied for example if π0is a normal distribution on Rd+1,\nwhich is a realistic distribution. This property ensures a diversity in th e initial distribution, which\nis required to guarantee convergence. The smallness condition on ais purely technical and required\nto deal with the non-differentiability of the ReLU activation function , see also [Woj20]. The same\nresult holds without modification for leaky-ReLU activation. With som e additional modifications,\nit is assumed to also extend to smooth and bounded activation funct ions.\nRemark 5.5.The divergence ξxhπt(x)→+∞is expected to be logarithmic in time, which can\nalmost be considered bounded in practice. The convergence hπt→h∗is purely qualitative, without\na rate.\nConsider a binary classification problem in RwhereC−1= [−2,−1] andC1= [1,2].\nLemma 5.6. Consider a binary classification problem in Rwhere one class C−1with label ξ=−1\nis contained in [−2,−1]and the other class C1with label ξ= +1is contained in [1,2]. Assume that\n−1∈C−1,1∈C1and that both classes contain at least one additional point.\nThe classification problem admits a continuum of maximum mar gin classifiers\nfb(x) =1\n2[1+b]\n\nx+b x > b\n2x−b < x < b\nx−b x <−b\nparametrized by b∈[0,1].\nIn particular, we expect that hπtis not constant on either of the classes [1 ,2] or [−2,−1]. The\nproof is postponed until the appendix.\nRemark 5.7.We described the mean field setting in its natural scaling. 
However, t he same re-\nsults are true (with a different time rescaling) if fis represented in the usual fashion as f(x) =/summationtextm\ni=1aiσ(wT\nix+bi) without the normalizing factor1\nm, assuming that the weights are initialized\nsuch that ai,wi,bi∼m−1/2.\n12 WEINAN E AND STEPHAN WOJTOWYTSCH\n5.2.Two-layer networks with non-convex input classes. Assume that\nP=p1δ−1+p2δ0+p3δ1, p 1,p2,p3≥0, p 1+p2+p3= 1\nand that ξ−1=ξ1= 1 and ξ0=−1. We consider the risk functional\nR(h) =/integraldisplay\nRexp/parenleftbig\n−ξxh(x)/parenrightbig\nP(dx) =p1exp/parenleftbig\n−h(−1)/parenrightbig\n+p2exp/parenleftbig\nh(0)/parenrightbig\n+p3exp/parenleftbig\n−h(1)/parenrightbig\n,\nwhich is similar to cross-entropy loss in its tails since\n−log/parenleftbiggexp(ξxh(x))\nexp(ξxh(x))+exp( −ξxh(x))/parenrightbigg\n=−log/parenleftbigg1\n1+exp(−2ξxh(x))/parenrightbigg\n≈1−1\n1+exp(−2ξxh(x))\n=exp(−2ξxh(x))\n1+exp(−2ξxh(x))\n≈exp(−2ξxh(x))\nifξxh(x) is large. Further assume that the classifier is a shallow neural netw ork with three neurons\nh(x) =3/summationdisplay\ni=1aiσ(wix+bi).\nTo make life easier, we consider a simplified sigmoid activation function σ:R→Rwhich satisfies\nσ(z) = 0 for z≤0 andσ(z) = 1 for z≥1, and we assume that the parameters ( ai,wi,bi) are\ninitialized such that\n(5.4) h(x) =a1σ(−x)−a2σ(x+1)+a3σ(x)\nIn particular, σ′(wix+bi) = 0 for P-almost every xat initialization and all i= 1,2,3. This implies\nthat (wi,bi) are constant along gradient descent training, so only a1,a2,a3evolve. We can write\nR/parenleftbig\n−a1σ(x)+a2σ(x+1)−a3σ(x−1)/parenrightbig\n=p1exp(−a1)+p2exp(−a2)+p3exp(a2−a3).\nLemma 5.8. Leth=ha1,a2,a3be as in (5.4)fora1,a2,a3∈R. Assume that a1,a2,a3evolve by\nthe gradient flow of F(a1,a2,a3) =R(ha1,a2,a3). Then\nlim\nt→∞/bracketleftbig\nh(t,1)−h(t,−1)/bracketrightbig\n= 0 ⇔p3= 2p1\nindependently of the initial condition (a1,a2,a3)(0).\nIn general, assume that h=f◦gwherefis a shallow neural network. Assume that there are\ntwo classes Ci,Cjsuch that the convex hull of g(Ci) intersects g(Cj). Then it is questionable that\nclasses can collapse to a single point in the final layer. While this does no t imply that g(Ci) and\ng(Cj) should concentrate around the vertices of a regular standard s implex, it suggests that simple\ngeometries are preferred already beforethe penultimate layer if the his to collapse Cito a single\npoint.\nThe proof of Lemma 5.8 is given in the appendix.\nRemark 5.9.We note that the probabilities of the different data points crucially en ter the analysis,\nwhile considerationsabovein Lemma3.3wereentirely independent oft he weight ofdifferent classes.\nThe toy model does not capture interactions between the functio n values at different data points,\nwhich is precisely what drives the dynamics here.\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS 13\nReferences\n[CB18] L. Chizat and F. Bach. On the global convergence of gra dient descent for over-parameterized models using\noptimal transport. In Advances in neural information processing systems , pages 3036–3046, 2018.\n[CB20] L. Chizat and F. Bach. Implicit bias of gradient desce nt for wide two-layer neural networks trained with\nthe logistic loss. arxiv:2002.04486 [math.OC] , 2020.\n[Coo18] Y. Cooper. The loss landscape of overparameterized neural networks. arXiv:1804.10200 [cs.LG] , 2018.\n[DLL+18] S. S. Du, J. D. Lee, H. Li, L. Wang, and X. Zhai. Gradient des cent finds global minima of deep neural\nnetworks. arXiv:1811.03804 [cs.LG] , 2018.\n[DZPS18] S. S. Du, X. Zhai, B. Poczos, and A. Singh. 
Gradient d escent provably optimizes over-parameterized\nneural networks. arXiv:1810.02054 [cs.LG] , 2018.\n[EMW20] W. E, C. Ma, and L. Wu. A comparative analysis of optim ization and generalization properties of two-\nlayer neural network and random feature models under gradie nt descent dynamics. Sci. China Math. ,\nhttps://doi.org/10.1007/s11425-019-1628-5, 2020.\n[ESA20] M.Elad, D.Simon, and A. Aberdam. Another step towar d demystifying deep neural networks. Proceedings\nof the National Academy of Sciences , 117(44):27070–27072, 2020.\n[LS20] J. Lu and S. Steinerberger. Neural collapse with cros s-entropy loss. arxiv: 2012.08465 [cs.LG] , 2020.\n[MPP20] D. G. Mixon, H. Parshall, and J. Pi. Neural collapse w ith unconstrained features. arXiv:2011.11619\n[cs.LG], 2020.\n[PHD20] V. Papyan, X. Han, and D. L. Donoho. Prevalence of neu ral collapse during the terminal phase of deep\nlearning training. Proceedings of the National Academy of Sciences , 117(40):24652–24663, 2020.\n[Woj20] S. Wojtowytsch. On the global convergence of gradie nt descent training for two-layer Relu networks in\nthe mean field regime. arXiv:2005.13530 [math.AP] , 2020.\n14 WEINAN E AND STEPHAN WOJTOWYTSCH\nAppendix A.Proofs\nA.1.Proof from Section 2. We prove the convexity property of the loss function.\nProof of Lemma 2.1. Without loss of generality j= 1 and we abbreviate Φ = Φ 1. We compute\n∇Φ(z) =−e1+k/summationdisplay\nj=1exp(zj)\n/summationtextk\ni=1exp(zi)ej\n∂j∂lΦ(z) =exp(zj)\n/summationtextk\ni=1exp(zi)δjl−exp(zj)exp(zl)\n/parenleftig/summationtextk\ni=1exp(zi)/parenrightig2\n=pjδjl−pjpl\nwherepj=exp(zj)/summationtextk\ni=1exp(zi). Thus\natD2Φa=k/summationdisplay\ni=1a2\nipi−k/summationdisplay\ni,j=1aiajpipj\n=k/summationdisplay\ni=1a2\nipi−/parenleftiggk/summationdisplay\ni=1aipi/parenrightigg2\n=/ba∇dbla/ba∇dbl2\nℓ2(p)−/ba∇dbla/ba∇dbl2\nℓ1(p)\n≥0\nsincepis a counting density on {1,...,k}. Sincepis a vector with strictly positive entries, equality\nis attained if and only if ais a multiple of (1 ,...,1). Since the Hessian of Φ is positive semi-definite,\nthe function is convex. /square\nA.2.Proofs from Section 3. The rigorous proof that it is advantageous to collapse the output\nof a classifier to the center of mass over a class goes as follows.\nProof of Lemma 3.1. Denote by P♯the orthogonal projection of honto the orthogonal complement\nof the space spanned by the vector (1 ,...,1) and observe that ℓ(P♯h,ξ) =ℓ(h,ξ) for allh,ξ.\nWe compute by the vector-valued Jensen’s inequality that\nR(h) =R(P♯h)\n=−/integraldisplay\nRdlog/parenleftigg\nexp(P♯h(x)·ξx)\n/summationtextk\ni=1exp(P♯h(x)·ei)/parenrightigg\nP(dx)\n=−k/summationdisplay\nj=1/integraldisplay\nCjlog/parenleftigg\nexp(P♯h(x)·ej)\n/summationtextk\ni=1exp(P♯h(x)·ei)/parenrightigg\nP(dx)\n=k/summationdisplay\nj=1|Cj|1\n|Cj|/integraldisplay\nCjΦj(P♯h(x))P(dx)\n≥k/summationdisplay\nj=1|Cj|Φj/parenleftigg\n1\n|Cj|/integraldisplay\nCjP♯h(x)P(dx)/parenrightigg\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS 15\n=−/integraldisplay\nRdlog/parenleftigg\nexp/parenleftbig\nP♯h(x)·ξx/parenrightbig\n/summationtextk\ni=1exp/parenleftbig\nP♯h(x)·ei/parenrightbig/parenrightigg\nP(dx)\nand note that the inequality is strict unless P♯h(x) =P♯h(x) forP-almost every xsinceℓstrictly\nconvex on the orthogonal complement of (1 ,...,1). This is the case if and only if h(x)−¯h(x)∈\nspan{(1,...,1)}for almost all x. /square\nWe proceed to show the optimality of a simplex configuration in the toy problem.\nProof of Lemma 3.3. Step 1. Existence of the minimizer. 
Due to the convexity of Φ iis convex\non the compact convex set BR(0), Φihas a minimizer ziinBR(0).\nStep 2. Uniqueness of the minimizer. By the Lagrange multiplier theorem, there exists\nλi∈Rsuch that\n0 =/parenleftbig\n∇Φi/parenrightbig\n(zi)−λzi\n=\nk/summationdisplay\nj=1exp(zi·ej)\n/summationtextk\nl=1exp(zi·el)ek\n−ei−λizi\n=k/summationdisplay\nj=1/bracketleftigg\nexp(zi·ej)\n/summationtextk\nl=1exp(zi·el)−δij−λi(zi·ej)/bracketrightigg\nej. (A.1)\nAll coefficients in the basis expansion have to vanish separately, so in particular\n0 =k/summationdisplay\nj=1/bracketleftigg\nexp(zi·ej)\n/summationtextd\nl=1exp(zi·el)−δij−λi(zi·ek)/bracketrightigg\n= 1−1−λk/summationdisplay\nj=1(zi·ej) =−λk/summationdisplay\nj=1(zi·ej),\nmeaning that either λi= 0 or/summationtextk\nj=1(zi·ej) = 0. Sinceexp(zi·ej)/summationtextk\nl=1exp(zi·el)−δij/ne}ationslash= 0 for any i,jand\nchoice of zi, we find that λi/ne}ationslash= 0 and thus zi∈∂BR(0) and\n0 =k/summationdisplay\ni=1(zi·ek) = (1,...,1)·zi.\nSince Φ iisstrictlyconvex in the hyperplane H={z∈Rk: (1,...,1)·z= 0}by Lemma 2.1, we\nfind that the minimizer zi∈BR(0)∩His unique.\nStep 3. Symmetry. Since the minimizer ziis unique and Φ i(z1,...,zk) is invariant under the\npermutation of the coordinate entries zjof its argument for j/ne}ationslash=i, we find that also the minimizer\nzimust have this invariance, i.e.\nzi=αiei+βi/summationdisplay\nj/negationslash=iej.\nUsing symmetry, we find that αi≡α,βi≡βindependently of i. /square\nRemark A.1.The first and third step of the proof go through for general ℓp-norms since also these\nnorms are invariant under the rearrangement of coordinates. Th e second step requires slightly\ndifferent reasoning. Still, the Lagrange-multiplier equation\n0 =k/summationdisplay\nj=1/bracketleftigg\nexp(zi·ej)\n/summationtextk\nl=1exp(zi·el)−δij−λi/vextendsingle/vextendsinglezi·ej/vextendsingle/vextendsinglep−2(zi·ej)/bracketrightigg\nej\n16 WEINAN E AND STEPHAN WOJTOWYTSCH\ncan be used to conclude λi/ne}ationslash= 0 and thus that any minimizer zimust lie in the boundary of BR(0).\nNow assume that there are multiple minimizers zi,1andzi,2. Then Φ icannot be uniformly convex\nalong the connecting line between zi,1andzi,2. Therefore zi,2−zi,1/ba∇dbl(1,...,1). Since the ball\nBR(0) is strictly convex and Φ iis constant along the connecting line, this is a contradiction to the\nfact that the minimum is only attained on the boundary.\nThe equations which determine α >0,β <0 become\n|α|p+(k−1)|β|p=Rp,|α|p−2α+(k−1)|β|p−2β= 0\nwhich is solved by\nα=/parenleftigg\n(k−1)1\np−1\n1+(k−1)1\np−1/parenrightigg1\np\nR, β =−\n1−(k−1)1\np−1\n1+(k−1)1\np−1\nk−1\n1\np\nR.\nIfp∈ {1,∞}, the unit spheres in Rkhave straight segments and singularities, and the Lagrange-\nmultiplier theorem no longer applies. However, we note that the face ts of the ℓ∞-unit ball are never\nparallel to (1 ,...,1), and that the same statement is expected to hold. The same is tr ue for the\nℓ1-unit ball close to points of the form αei+β/summationtext\nj/negationslash=iejifk >2.\nA.3.Proofs from Section 4. Now we show that the simplex symmetry is optimal under certain\nconditions.\nProof of Lemma 4.1. We have\n/ba∇dblA/ba∇dblℓ2= sup\n/bardbly/bardbl≤R/ba∇dblAy/ba∇dbl\n/ba∇dbly/ba∇dbl≥max\n1≤i≤k/ba∇dblzi/ba∇dbl\n/ba∇dblyi/ba∇dbl= max\n1≤i≤kR\n/ba∇dblyi/ba∇dbl≥1.\nIn particular /ba∇dblA/ba∇dblℓ2≥1 and if/ba∇dblA/ba∇dblℓ2= 1, then /ba∇dblyi/ba∇dbl=Rfor all 1≤i≤k.\nWe observe that the collection {z1,...,z k−1}spans the k−1-dimensional hyperplane H=\n{z∈Rk: (1,...,1)·z= 0}inRk. 
Consequently, the collection {y1,...,y k−1}must be linearly\nindependent in Rm, i.e. the basis of a k−1-dimensional subspace. The map Ais therefore injective\nand uniquely determined by the prescription zi=Ayifori= 1,...,k−1. Since\n0 =k/summationdisplay\nj=1zj=k/summationdisplay\nj=1(Ayj) =A\nk/summationdisplay\nj=1yj\n,\nwe conclude by injectivity that/summationtextk\nj=1yj= 0. After a rotation, we may assume without loss of\ngenerality that m=k−1. Since rotations are Euclidean isometries, also Rk−1is equipped with the\nℓ2-norm. Assume that /ba∇dblA/ba∇dblℓ2= 1. Then\n(1)/ba∇dblyj/ba∇dbl=Rfor allj= 1,...,kand\n(2)/summationtextk\nj=1yj= 0.\nThis implies that for every i= 1,...,kwe have\nk/summationdisplay\nj=1/ba∇dblyj−yi/ba∇dbl2=k/summationdisplay\nj=1/bracketleftbig\n/ba∇dblyj/ba∇dbl2+/ba∇dblyi/ba∇dbl2+2/an}b∇acketle{tyi,yj/an}b∇acket∇i}ht/bracketrightbig\n= 2kR2+2/angbracketleftigg\nyi,k/summationdisplay\nj=1yj/angbracketrightigg\n= 2kR2.\nThe sum on the left is a sum of only k−1 positive terms since yi−yi= 0, so there exists j/ne}ationslash=i\nsuch that /ba∇dblyi−yj/ba∇dbl2≥2k\nk−1R2. On the other hand, we know that zi,zjcoincide in all but two\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS 17\ncoordinates, so by (3.1) we find that\n/ba∇dblzi−zj/ba∇dbl2= 2(α−β)2= 2/bracketleftigg/radicalbigg\nk−1\nk−1/radicalbig\nk(k−1)/bracketrightigg\nR2= 2k−1\nkR2/bracketleftbigg\n1−1\nk−1/bracketrightbigg2\n= 2k\nk−1R2.\nIn particular, since /ba∇dblA/ba∇dbl= 1 we find that\n(A.2) 2k\nk−1R2≤ /ba∇dblyi−yj/ba∇dbl2≤ /ba∇dblA(yi−yj)/ba∇dbl2=/ba∇dblzi−zj/ba∇dbl2= 2k\nk−1R2.\nSince strict inequality cannot hold, we find that A.2 must hold for all 1 ≤i/ne}ationslash=j≤kand thus\n/ba∇dblyi−yj/ba∇dbl2=/ba∇dblzi−zj/ba∇dbl2. This in particular implies that /an}b∇acketle{tyi,yj/an}b∇acket∇i}ht=/an}b∇acketle{tzi,zj/an}b∇acket∇i}htfor alli,j= 1,...,k. Since\n{y1,...,y k−1}is a basis of Rk−1, we conclude that Ais an isometric embedding. /square\nA.4.Proofs from Section 5. We begin by proving that the maximum margin classifier in the\nproblem under discussion is in fact f(x) =x\n2.\nProof of Lemma 5.6. Note that ¯f(x) =f(x)−f(−x)\n2satisfies\nξx¯f(x) =ξxf(x)+ξ−xf(−x)\n2≥min/braceleftbig\nξxf(x), ξ−xf(−x)/bracerightbig\nforP-almostevery x. Wecanthereforeassumethatthemaximummarginclassifierisaodd function.\nThe function class under consideration therefore is the convex hu ll of the family\nH◦=/braceleftbiggaσ(wx+b)−aσ(b−wx)\n2|a|[|w|+|b|]:a/ne}ationslash= 0,(w,b)/ne}ationslash= 0/bracerightbigg\n.\nConsider the map\nF: conv(H◦)→R, F(h) =h(1)\nwhich bounds the maximum margin functional from above: min x∈sptP/parenleftbig\nξxh(x)/parenrightbig\n≤1·h(1). Since F\nis linear, it attains its maximum at the boundary of the class, i.e. there exist (w,b) such that\nσ(w+b)−σ(b−w)\n2[|w|+|b|]=F/parenleftbiggσ(wx+b)−σ(b−wx)\n2[|w|+|b|]/parenrightbigg\n= max\nh∈conv(H◦)F(h)\nand thus\nmax\nh∈conv(H◦)min\nx∈sptP/parenleftbig\nξxh(x)/parenrightbig\n= max\nw,bσ(w+b)−σ(b−w)\n2[|w|+|b|]≤σ(w+b)\n2[|w|+|b|]≤1\n2.\nThe bound is realized precisely if and only if w > b > 0, i.e. due to the positive homogeneity of\nReLU if and only if\nh(x) =σ(x+b)−σ(b−x)\n2/bracketleftbig\n1+|b|/bracketrightbig=1\n2/bracketleftbig\n1+|b|/bracketrightbig\n\nx+b x > b\n2x−b < x < b\nx−b x <−b\nforb∈[0,1]. /square\nFinally, we prove the non-collapse result in the three neuron model.\n18 WEINAN E AND STEPHAN WOJTOWYTSCH\nProof of Lemma 5.8. 
The gradient flow equation is the ODE
$$\begin{pmatrix}\dot a_1\\ \dot a_2\\ \dot a_3\end{pmatrix} = \begin{pmatrix} p_1\exp(-a_1)\\ p_2\exp(-a_2) - p_3\exp(a_2-a_3)\\ p_3\exp(a_2-a_3)\end{pmatrix}.$$
The first equation is solved easily explicitly since
$$\frac{d}{dt}\exp(a_1) = \exp(a_1)\,\dot a_1 = p_1 \quad\Rightarrow\quad a_1(t) = \log\big(e^{a_1(0)} + p_1 t\big).$$
The second equation can be reformulated as
$$\frac{d}{dt}\exp(a_2) = \exp(a_2)\,\dot a_2 = p_2 - p_3\exp(2a_2 - a_3),$$
which leads us to consider
$$\frac{d}{dt}\exp(2a_2-a_3) = \exp(2a_2-a_3)\big[2\dot a_2 - \dot a_3\big] = \exp(2a_2-a_3)\big[2p_2\exp(-a_2) - 2p_3\exp(a_2-a_3) - p_3\exp(a_2-a_3)\big] = \exp(2a_2-a_3)\big[2p_2 - 3p_3\exp(2a_2-a_3)\big]\exp(-a_2).$$
Denote $f(t) = \exp(2a_2-a_3)$. The differential equation
$$(A.3)\qquad f' = f\,(2p_2 - 3p_3 f)\,\exp(-a_2)$$
implies that $f\equiv \frac{2p_2}{3p_3}$ if $f(0) = \frac{2p_2}{3p_3}$. The same is true for long times and arbitrary initialization (anticipating that the integral of $\exp(-a_2)$ diverges). If the equality is satisfied exactly, we find that
$$\frac{d}{dt}\exp(a_2) = p_2 - p_3\exp(2a_2-a_3) = p_2 - p_3\,\frac{2p_2}{3p_3} = \frac{p_2}{3} \quad\Rightarrow\quad a_2(t) = \log\Big(e^{a_2(0)} + \frac{p_2}{3}t\Big)$$
and thus
$$\exp(2a_2-a_3) = \frac{2p_2}{3p_3} \quad\Rightarrow\quad \exp(a_3) = \frac{3p_3}{2p_2}\exp(2a_2) \quad\Rightarrow\quad a_3 = \log\Big(\frac{3p_3}{2p_2}\exp(2a_2)\Big) = \log\Big(\frac{3p_3}{2p_2}\Big) + 2a_2.$$
The question is whether all data points in the same class are mapped to the same value. This is only a relevant question for the 'outer' class, where
$$f(t,-1) = a_1(t) = \log\big(e^{a_1(0)} + p_1 t\big),\qquad f(t,1) = (a_3-a_2)(t) = \log\Big(\frac{3p_3}{2p_2}\Big) + a_2(t) = \log\Big(\frac{3p_3}{2p_2}\Big) + \log\Big(e^{a_2(0)} + \frac{p_2}{3}t\Big).$$
In particular
$$f(t,1) - f(t,-1) = \log\Big(\frac{3p_3}{2p_2}\Big) + \log\Big(e^{a_2(0)} + \frac{p_2}{3}t\Big) - \log\big(e^{a_1(0)} + p_1 t\big) = \log\Big(\frac{3p_3}{2p_2}\Big) + \log\Bigg(\frac{e^{a_2(0)} + \frac{p_2}{3}t}{e^{a_1(0)} + p_1 t}\Bigg) \;\to\; \log\Big(\frac{3p_3}{2p_2}\Big) + \log\Big(\frac{p_2}{3p_1}\Big) = \log\Big(\frac{3p_3}{2p_2}\,\frac{p_2}{3p_1}\Big) = \log\Big(\frac{p_3}{2p_1}\Big).$$
Thus the difference between $f(t,1)$ and $f(t,-1)$ goes to zero if and only if $p_3 = 2p_1$.

Finally, we remark that if $\exp(2a_2-a_3) = \frac{2p_2}{3p_3}$ is not satisfied exactly at time $t=0$, then by (A.3), we find that it is approximately satisfied at a later time $t_0\gg 1$. Since the influence of the initial condition goes to zero, we find that the conclusion is almost satisfied by considering dynamics starting at $(a_1,a_2,a_3)(t_0)$. This argument can easily be made quantitative. $\square$

Weinan E, Department of Mathematics and Program for Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544
Email address: [email protected]

Stephan Wojtowytsch, Program for Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544
Email address: [email protected]",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "p1gjv898IkR",
"year": null,
"venue": "CoRR 2020",
"pdf_link": "http://arxiv.org/pdf/2006.05982v2",
"forum_link": "https://openreview.net/forum?id=p1gjv898IkR",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Representation formulas and pointwise properties for Barron functions",
"authors": [
"Weinan E",
"Stephan Wojtowytsch"
],
"abstract": "We study the natural function space for infinitely wide two-layer neural networks with ReLU activation (Barron space) and establish different representation formulae. In two cases, we describe the space explicitly up to isomorphism. Using a convenient representation, we study the pointwise properties of two-layer networks and show that functions whose singular set is fractal or curved (for example distance functions from smooth submanifolds) cannot be represented by infinitely wide two-layer networks with finite path-norm. We use this structure theorem to show that the only $C^1$-diffeomorphisms which Barron space are affine. Furthermore, we show that every Barron function can be decomposed as the sum of a bounded and a positively one-homogeneous function and that there exist Barron functions which decay rapidly at infinity and are globally Lebesgue-integrable. This result suggests that two-layer neural networks may be able to approximate a greater variety of functions than commonly believed.",
"keywords": [],
"raw_extracted_content": "arXiv:2006.05982v2 [stat.ML] 4 Jun 2021REPRESENTATION FORMULAS AND POINTWISE PROPERTIES FOR\nBARRON FUNCTIONS\nWEINAN E AND STEPHAN WOJTOWYTSCH\nAbstract. We study the natural function space for infinitely wide two-l ayer neural networks\nwith ReLU activation (Barron space) and establish different representation formulae. In two\ncases, we describe the space explicitly up to isomorphism.\nUsing a convenient representation, we study the pointwise p roperties of two-layer networks\nand show that functions whose singular set is fractal or curv ed (for example distance functions\nfrom smooth submanifolds) cannot be represented by infinite ly wide two-layer networks with\nfinite path-norm. We use this structure theorem to show that t he only C1-diffeomorphisms\nwhich Barron space are affine.\nFurthermore, we show that every Barron function can be decom posed as the sum of a\nbounded and a positively one-homogeneous function and that there exist Barron functions\nwhich decay rapidly at infinity and are globally Lebesgue-in tegrable. This result suggests that\ntwo-layer neural networks may be able to approximate a great er variety of functions than\ncommonly believed.\nContents\n1. Introduction 1\n2. Different representations of Barron functions 4\n3. Properties of Barron space 14\n4. Special cases 19\n5. Structure of Barron functions 23\nAcknowledgements 32\nReferences 32\n1.Introduction\nAtwo-layer neural network withmneurons is a function f:Rd→Rrepresented as\n(1.1) f(x) =m/summationdisplay\ni=1aiσ(wT\nix+bi) or f(x) =1\nmm/summationdisplay\ni=1aiσ(wT\nix+bi)\nwhereσ:R→Ris the (nonlinear) activation function ,ai,bi∈Randwi∈Rdare parameters\n(weights) of the network. In this article, we mostly focus on the ca se whereσis the rectified\nlinear unit (ReLU), i.e. σ(z) = max{z,0}. We denote the class of all two-layer neural networks\nwith at most mneurons by Fm. Naturally Fmis not a vector space, but Fm+Fm⊆ F2m. Both\nthe sum and average sum representation induce the same function spaces under natural norm\nbounds or bounds on the number of parameters.\nDate: June 7, 2021.\n2020 Mathematics Subject Classification. 68T07, 46E15, 26B35, 26B40.\nKey words and phrases. Barronspace, two-layer neuralnetwork, infinitely widenet work, singularset, pointwise\nproperties, representation formula, mean field training.\n1\n2 WEINAN E AND STEPHAN WOJTOWYTSCH\nUnder very mild conditions on σ, any continuous function on a compact set can be approxi-\nmated arbitrarily well in the uniform topology by two-layer neural ne tworks [Cyb89, LLPS93],\ni.e. theC0(K)-closure of F∞:=/uniontext\nm∈NFmis the entire space C0(K) for any compact K⊆Rd.\nThis result is of fundamental importance to the theory of artificial neural networks, but of little\nimpact in practicalapplications. In general, the number ofneurons mεto approximatea function\nfto accuracy ε>0 scales like ε−dand thus quickly becomes unmanageable in high dimension.\nOn the other hand, Andrew Barron showed in 1993 that there exist s a large function class X\nsuch that only O(ε−2) neurons are required to approximate f∗∈Xto accuracy εinL2(P) for\nanyBorelprobabilitymeasure P[Bar93]. Heuristically, this means that non-linearapproximation\nby neural networks, unlike any linear theory, can evade the curse of dimensionality in some situ-\nations. The result holds for the same class Xfor any compactly supported probability measure\nPand any continuous sigmoidal activation function σ, by which we mean that lim z→−∞σ(z) = 0\nand limz→∞σ(z) = 1. 
It also holds for the nowadays more popular ReLU activation fu nction\nσ(z) = max{z,0}sinceσ(z+1)−σ(z) is sigmoidal.\nFurthermore, the coefficients of the network representing fcan be taken to be bounded on\naverage in the sense that\nm/summationdisplay\ni=1|ai| ≤2Cf∗ror1\nmm/summationdisplay\ni=1|ai| ≤2Cf∗r∀f∗∈X,\ndepending on the normalization of the representation. Here Cfis a norm on X,r >0 is such\nthat the support of Pis contained in Br(0), and we assume σto be sigmoidal. This result has\nfundamental significance. In applications, we train the function to approximate values yiat\ndata points xiby minimizing an appropriate risk functional. In the simplest case, yi=f∗(xi)\nfor a target function f∗. Iff(xi) is close to yibut the coefficients of fare very large, we rely\non cancellations between different terms in the sum. Thus fis the difference of two functions\n(the partial sums for which ai>0/ai<0 respectively) which are potentially several orders of\nmagnitude larger than f. Then for any point xwhich is not one of the data samples xi,f(x)\nandf∗(x) may be vastly different. We say that fdoes not generalize well.\nThe analysis was extended to networks with ReLU activation in [Bre9 3], where such networks\nare referred to as ‘hinge functions’ because a single neuron activa tions are given by hyperplanes\nmeeting along a lower-dimensional ‘hinge’. If σ= ReLU, then σ(λz) =λσ(z) for allλ>0. The\nhomogeneity (and unboundedness) sets ReLU activation and sigmo idal activation apart, and the\ncoefficient bound for ReLU activation is\nm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\n≤4Cfror1\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\n≤4Cfr.\nThe function class Xis characterizedin [KB18] as f∈L1(Rd) such that the Fourier transform\nˆfsatisfies\nCf:=/integraldisplay\nRd|ˆf|(ξ)/bracketleftbig\n1+|ξ|2/bracketrightbig\ndξ<∞.\nXis a Banach space with norm /ba∇⌈blf/ba∇⌈blX=Cf. The criterion that Cf<∞is a non-classical\nsmoothness criterion. If we replaced |ˆf|with|ˆf|2in the integral, we would obtain the H1/2-\nSobolev semi-norm. For the weighted L1-norm, the interpretation is not as easy. However, if we\nmultiply by 1 = (1+ |x|2s)1/2(1+|x|2s)−1/2, use H¨ older’s inequality and Parseval’s identity, we\nsee like in [Bar93, Section IX, point 15] that Cf≤cd,s/ba∇⌈blf/ba∇⌈blHsforf∈Hs(Rd) withs>d\n2+2.\nIffis smooth, but only defined on a suitable compact subset of Rd, we can apply extension\nresults like [Dob10, Satz 6.10] show that f∈X. More precisely, if Ω is a domain in Rdwith\nsmooth boundary ∂Ω∈Ck−1,1for an integer k >d\n2+ 2, then every Hk-functionfon Ω can\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 3\nbe extended to a compactly supported Hk-function ¯fonRdsuch that /ba∇⌈bl¯f/ba∇⌈blHk(Rd)≤C/ba∇⌈blf/ba∇⌈blHk(Ω).\nIn particular C¯f<∞. If we know that elements of Xcan be approximated efficiently in L2(P)\nwith respect to a probability measure Pon Ω, the same is therefore true for f∈Hk(Ω).\nOn the other hand, all functions fwhich satisfy Cf<∞areC1-smooth since ∂if(−x) =\n/hatwiderξiˆf(ξ). 
For ReLU-activated networks, any finite sum of neurons is eithe r linear or non-smooth,\nand many functions with discontinuous derivatives can be approxima ted, for example\nf(x) = max/braceleftbig\n1−|x|,0/bracerightbig\n=σ(x−1)−2σ(x)+σ(x+1),\nis not inXsince\nˆf(ξ) =2−2 cos(ξ)√\n2πξ2.\nThusXmisses large parts of the approximable function class and Cfmay significantly overesti-\nmate the number and size of parameters required to approximate a given function. In fact, the\ncriterion is not expected to be sharp since the class Xis insensitive to the choice of activation\nfunctionσand data distribution P.\nIn [EMW18], E, Ma and Wu introduced the correct function space for ReLU-activated two-\nlayer neural networks and named it Barron space. It can be seen a s the closure of F∞with\nrespect to the path-norm\n(1.2) /ba∇⌈blf/ba∇⌈blpath=m/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\nor/ba∇⌈blf/ba∇⌈blpath=1\nmm/summationdisplay\ni=1|ai|/bracketleftbig\n|wi|+|bi|/bracketrightbig\ninstead of the uniform norm, where ( ai,wi,bi) are the non-zero weights of f∈ F∞as in (1.1).\nFurther background is given in [EMW19a, EMW19b, EW21]. A related cla ss of functions is\nalso considered from a different perspective in [Bac17], where it is ref erred to as F1. One of\nthe motivation for the present paper is to study these two differen t perspectives. [Bac17] uses\nthe signed Radon measure representation and establishes bounds on Rademacher complexity\nand generalization gap. [EMW18, EMW19a] characterize the Barron functions using generalized\nRidgelet transforms and focus on a priori error bounds for the ge neralizationerror. Related ideas\ncan also be found in [KB16].\nBarron space is the largest function space which is well approximate d by two-layer neural\nnetworks with appropriately controlled parameters. Target func tions outside of Barron space\nmay be increasingly difficult to approximate by even infinitely wide two-la yer networks as di-\nmension increases and gradient descent parameter optimization ma y become very slow in high\ndimension, see e.g. [EW21, WE20]. A better understanding of the fun ction-spaces associated\nwith classes of neural networks is therefore imperative to unders tand the function classes which\ncan be approximated efficiently. We describe Banach spaces associa ted to multi-layer networks\nin [EW20].\nIn this article, we provide a comprehensive view of the pointwise and f unctional analytic\nproperties of ReLU-Barron functions in the case of two-layer net works. We discuss different\nrepresentations, the structural properties of their singular se t, and their behavior at infinity. In\nheuristic terms, we show that the singular set of a Barron function is a countable union of affine\nsubspaces of dimension ≤d−1 ofRd.\nUnderstanding which functions are and are not in Barron space is a fi rst step towards under-\nstanding which kind of functions can be approximated efficiently by tw o-layer neural networks\nwhile maintaining meaningful statistical learning bounds. We demonst rate that contrary to\npopular belief, this includes functions which decay rapidly at infinity, w hile providing an easy\nto check criterion that a Lipschitz function cannot be represente d as a two-layer network. For\nexample, the distance function from a curved k-dimensional manifold in Rdis not in Barron\n4 WEINAN E AND STEPHAN WOJTOWYTSCH\nspace. 
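To make the hat-function example above concrete, the following sketch (ours) verifies the three-neuron identity and evaluates the path-norm (1.2) of this particular representation; the value 6 refers only to this representation of the function.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
hat_network = lambda x: relu(x - 1.0) - 2.0 * relu(x) + relu(x + 1.0)
hat_exact = lambda x: np.maximum(1.0 - np.abs(x), 0.0)

x = np.linspace(-3.0, 3.0, 2001)
print(np.allclose(hat_network(x), hat_exact(x)))    # True: the three-neuron identity above

# path-norm sum_i |a_i| (|w_i| + |b_i|) of this particular representation, cf. (1.2)
params = [(1.0, 1.0, -1.0), (-2.0, 1.0, 0.0), (1.0, 1.0, 1.0)]   # (a_i, w_i, b_i)
print(sum(abs(a) * (abs(w) + abs(b)) for a, w, b in params))     # 6.0
```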
Understanding which functions can be approximated by neu ral networks of a given depth\nwithout the curse of dimensionality is essential to choose the simples t sufficient neural network\narchitecturefor a givenpurpose. Choosing the simplest sufficient n etworkmodel can significantly\nreduce the difficulty and energy consumption of training.\nSome previous works approach the infinite neuron limit of two-layer n eural networks from the\nperspective of statistical mechanics where neurons are viewed as exchangeable particles accessed\nmostly through their distribution [CB18, MMN18, RVE18, SS20]. In a p art of this work, we\npresent an alternative description in which the particles are indexed , as is the case in practical\napplications. The approach is conceptually easy and convenient fro m the perspective of gradient\nflow training, but not suited for variational arguments. The mean fi eld gradient flow of neural\nnetworks is described by a Banach-space valued ODE in this setting r ather than a PDE like in\nthe usual picture. A similar approach was developed for the mean fie ld dynamics of multi-layer\nnetworks in [NP20] under the name of ‘neuronal embeddings’. In th e language of that article, we\nshow that every Barron function can be represented via a neuron al embedding, whereas [NP20]\nfocusses on the training of parameters from a given initial distribut ion.\nFurthermore, we explore their relationship to classicalfunction sp aces in terms of embeddings,\nand to two-layer neural networks in terms of direct and inverse ap proximation theorems. To\nprovide a comprehensive view of a relatively new class of function spa ces, parts of this article\nreview material from previous publications in a more functional-analy tic fashion.\nThe article is structured as follows. In Section 2, we describe differe nt ways to describe\nfunctions in Barron space, which are convenient for dynamic or var iational purposes respectively,\nor philosophically interesting. Following Section 2, only the represent ation of Section 2.3 is used\nin this work. Section 3 is devoted to the relationship of Barron space to classical function spaces\non the one hand and to finite two-layer neural networks on the oth er. In two special cases, we\ncharacterize Barron space exactly in Section 4. We conclude by est ablishing some structural and\npointwise properties for Barron function in Section 5.\n1.1.Notation. IfX,Yare measurable space, f:X→Yis measurable and µis a measure on\nX, then we denote by f♯µthe push-forward measure on Y, i.e.f♯µ(A) =µ(f−1(A)). Ifµis a\nmeasure and ρ∈L1\nlocµ, we denote by ρ·µthe measure which has density ρwith respect to µ.\nAll measures will be assumed to be finite and Borel. Since all spaces co nsidered are finite-\ndimensional vector spaces or manifolds, they are in particular Polish and thus therefore all\nmeasures considered below are Radon measures.\nThe norm on the space of Radon measures is /ba∇⌈blµ/ba∇⌈bl= supU,Vµ(U)−µ(V) whereU,Vare\nmeasurable sets. On compact subsets of Rd, the norm is induced by duality with the space\nof continuous functions. We observe that /ba∇⌈blf♯µ/ba∇⌈bl ≤ /ba∇⌈blµ/ba∇⌈blin general and /ba∇⌈blf♯µ/ba∇⌈bl=/ba∇⌈blµ/ba∇⌈blifµis\nnon-negative.\nSee [Bre11] and [EG15] for background information and further te rminology in functional\nanalysis and measure theory respectively.\n2.Different representations of Barron functions\nThere are many equivalent ways to represent Barron functions wit h different advantages in\ndifferent situations. 
In this section, we discuss eight of them, some of which have previously been\nconsidered in [EMW18, EMW19a]. The main novel contributions of this a rticle are collected in\nSections 2.3, 2.5 and 2.6.\nTo simplify notation, we identify x∈Rdwith (x,1)∈Rd+1and abbreviate ( w,b)∈Rd+1as\nw. In particular, by an abuse of notation, wTx=wTx+b.\nLetPbe a probability measure on Rd(orRd× {1}respectively). We will refer to Pas the\ndata distribution and assume that Phas finite first moments. We assume that we are given a\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 5\nnorm|·|on data space Rd+1(i.e. in the x-variables) and consider the dual norm (also denoted\nby| · |) onRd+1for thew-variables such that |wTx| ≤ |w||x|. It will be obvious from context\nwhich norm is used where. Usually, we imagine that |·|=|·|ℓ2is the Euclidean norm on both\ndata and parameter space or that |x|=|x|ℓ∞and|w|=|w|ℓ1, but the analysis only depends on\nduality, not the exact pairing.\n2.1.Representation by parameter distribution. A two-layer network with mneurons can\nbe written as a normalized sum\nf(x) =1\nmm/summationdisplay\ni=1aiσ(wT\nix) =/integraldisplay\naσ(wTx)πm(da⊗dw)\nwhereπm=1\nm/summationtextm\ni=1δ(ai,wi)is the empirical parameter distribution of the network. A natural\nway to extend this to infinitely wide networks is to allow anyparameter distribution πon the\nright hand side. For a general Radon probability measure πonR×Rd+1, we set\nfπ(x) =/integraldisplay\nR×Rd+1aσ(wTx)π(da⊗dw).\nThe parameter distribution πrepresenting a function fis never unique since fπ≡0 for any\nπwhich is invariant under the reflection T(a,w) = (−a,w). For ReLU activation, a further\ndegeneracy stems from the fact that z=σ(z)−σ(−z) for anyz∈Rand thus\n0 =x+α−x−α=σ(x+α)−σ/parenleftbig\n−(x+α)/parenrightbig\n−σ(x)+σ(−x)−σ(α)+σ(−α)∀α∈R.\nFinally, we list a two-dimensional degeneracy. Recall that we can rep resent\nx2= 2/integraldisplay\nR1{t>0}σ(x−t)dtforx>0.\nIn two dimensions, the function f(x1,x2) =x2\n1+x2\n2on the unit disk can therefore be represented\nby a parameter distribution which is concentrated on the coordinat e-axes. Due to rotational\ninvariance, the same is true for any other orthonormal basis, or t he parameters could be chosen\nin a rotationally symmetric fashion.\nThe Barron norm is the generalization of the path-norm in (1.2). To c ompensate for the\nnon-uniqueness in representation, we define\n/ba∇⌈blf/ba∇⌈blB(P)= inf/braceleftbigg/integraldisplay\nR×Rd+1|a||w|π(da⊗dw)/vextendsingle/vextendsingle/vextendsingle/vextendsingleπRadon probability measure s.t. fπ=fP-a.e./bracerightbigg\n.\nThe infimum of the empty set is considered as + ∞. Clearly\n|fπ(x)−fπ(y)| ≤/integraldisplay\nRd+2|a||wT(x−y)|π(da⊗dw)≤ /ba∇⌈blf/ba∇⌈blB(P)|x−y|,\nso in particular fπgrows at most linearly at infinity. Since Phas finite first moments, this means\nthatfπ∈L1(P). We introduce Barron space as\nB(P) ={f∈L1(P) :/ba∇⌈blf/ba∇⌈blB(P)<∞}.\nApproaching Barron space through the parameter distribution πis natural from the point of\nview that we know the parameters ( ai,wi) in applications better than the induced function. It\nis also useful, especially when considering dynamics. 
Namely, let Θ = ( ai,wi)m\ni=1and\n(2.1) fΘ(x) =1\nmm/summationdisplay\ni=1aiσ(wT\nix).\n6 WEINAN E AND STEPHAN WOJTOWYTSCH\nThen the parameters Θ evolve by the Euclidean gradient flow of a risk functional\n(2.2) R(Θ) =/integraldisplay\nRd×{1}|fΘ−f∗|2(x)P(dx)\nif and only if their empirical distribution evolves by the 2-Wasserstein gradient flow of the\nextended risk functional\n(2.3) R(π) =/integraldisplay\nRd×{1}|fπ−f∗|2(x)P(dx)\n(up to time rescaling). The heuristic reason behind this connection is that the Wasserstein-\ndistance is ‘horizontal’ and that measure is ‘transported’ along cur ves in optimal transport dis-\ntances ratherthan ‘teleported’ as in ‘vertical’distances like L2. In a moremathematically precise\nfashion, the ‘particles’ ( ai,wi) follow trajectories which can be viewed as the characteristics of a\ncontinuity equation\n˙ρ= div(ρ∇V)\nwith feedback between the particles and the transport vector fie ld∇V. The connection stems\nfrom the observation that the gradient of the risk functional in (2 .2) is given by\n∇(ai,wi)R(Θ) =2\nm∇(ai,wi)/integraldisplay\nRd/parenleftigg\n1\nmm/summationdisplay\ni=1ajσ(wT\njx)−f∗(x)/parenrightigg\naiσ(wT\nix)P(dx)\nwhich coincides with the gradient of a potential, evaluated at the pos ition of the particle:\n∇(ai,wi)R(Θ) =2\nm∇V(ai,wi;Θ), V(a,w;Θ) =/integraldisplay\nRd/parenleftbig\nfΘ−f∗/parenrightbig\n(x)aσ(wTx)P(dx).\nThe passage to the limit\nV(a,w;π) =/integraldisplay\nRd/parenleftbig\nfπ−f∗/parenrightbig\n(x)aσ(wTx)P(dx)\nis easy on the formal level, and eliminating the factor1\nmmerely corresponds to a rescaling of\ntime. The link between Wasserstein gradient flows and continuity equ ations has been exposed\nfirst in the seminal article [JKO98]. For details in the context of machin e learning, see [CB18,\nProposition B.1] or [Woj20, Appendix A]. The result is not specific to L2-risk and holds much\nmore generally. The Wasserstein-distance is computed with respec t to the Euclidean distance on\nparameter space here.\nThe parameter distribution picture is available for general two-laye r networks regardless of\nthe activation function.\nRemark 2.1.In practice, the weights ( ai,wi)m\ni=1of a neural network are initialized randomly\naccording to a distribution π0in such a way that ( ai,wi) and (−ai,wi) are equally likely. Such\nan initialization gives the network the flexibility to develop features in a ll relevant directions\nduring training. In the continuum limit, fπ≡0 at initial time, but\n0</integraldisplay\nRd+2|a||w|π(da⊗dw).\nThe upper bound for the Barron norm\n/ba∇⌈blfΘ/ba∇⌈blB(P)≤1\nmm/summationdisplay\ni=1|ai||wi|\nis therefore easy to compute, but not assumed to be particularly t ight (at least in the infinite\nwidth limit).\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 7\n2.2.Spherical graph representation. By the positive 1-homogeneity of the ReLU activation\nwe have\nfπ(x) =/integraldisplay\nR×{w/ne}ationslash=0}a|w|σ/parenleftbiggw\n|w|T\nx/parenrightbigg\nπ(da⊗dw)\n=/integraldisplay\nR×Sd˜aσ(˜wTx)˜π(d˜a⊗d˜w)\nwhere ˜π=T♯πis the push-forward of πalong the map ( a,w)/ma√sto→(a|w|,w/|w|). Since ˜πis a\nRadon measure, we can apply [ABM14, Theorem 4.2.4] to decompose ˜ πinto a marginal ˆ πonSd\nand conditional probabilities πwonRsuch that\n/integraldisplay\nR×Sdf(˜a,˜w)˜π(d˜a⊗d˜w) =/integraldisplay\nSd/parenleftbigg/integraldisplay\nRf(a,w)πw(da)/parenrightbigg\nˆπ(dw)\nfor every ˜π-measurable function f. In particular, the function\nw/ma√sto→/integraldisplay\nRf(a,w)πw(da)\nis ˆπ-measurable. 
For a neural network function f(a,w) =aσ(wTx), we find that\nfπ(x) =/integraldisplay\nSd/parenleftbigg/integraldisplay\nRaπw(da)/parenrightbigg\nσ(wTx)ˆπ(dw)\n=:/integraldisplay\nSdˆa(w)σ(wTx)ˆπ(dw)\nwith\nˆa(w) =/integraldisplay\nRaπw(da).\nWe have thus written fπas a graph over the unit sphere, which we denote by fˆπ,ˆa. Since\n/ba∇⌈blˆa/ba∇⌈blL1(ˆπ)=/integraldisplay\nSd/vextendsingle/vextendsingleˆa(w)/vextendsingle/vextendsingleˆπ(dw)\n≤/integraldisplay\nSd/vextendsingle/vextendsingle/vextendsingle/vextendsingle/integraldisplay\nRaπw(da)/vextendsingle/vextendsingle/vextendsingle/vextendsingleˆπ(dw)\n≤/integraldisplay\nSd/integraldisplay\nR|a|πw(da)ˆπ(dw)\n=/integraldisplay\nSd|a|˜π(da⊗dw)\n=/integraldisplay\nSd|a||w|π(da⊗dw),\nwe note that\ninf\n(ˆa,ˆπ) s.t.fˆπ,ˆa=fP-a.e./ba∇⌈blˆa/ba∇⌈blL1(ˆπ)≤inf\nπs.t.f=fπP-a.e./integraldisplay\nR×Rd+1|a||w|π(da⊗dw)\nThe inverse inequality is obtained by considering the distribution π=ψ♯ˆπwhereψ(w) =\n(ˆa(w),w) which satisfies\n/integraldisplay\nRd+2aσ(wTx)π(da⊗dw) =/integraldisplay\nSdˆa(w)σ(wTx)ˆπ(dw),/integraldisplay\nRd+2|a||w|π(da⊗dw) =/integraldisplay\nSd|ˆa(w)|ˆπ(dw)\nby the definition of the push-forward. Taking the infimum, we find th at\n/ba∇⌈blf/ba∇⌈blB(P)= inf/braceleftbigg\n/ba∇⌈blˆa/ba∇⌈blL1(ˆπ)/vextendsingle/vextendsingle/vextendsingle/vextendsingleˆπRadon probability measure on Sd,ˆa∈L1(ˆπ), f=fˆπ,ˆaP−a.e./bracerightbigg\n.\n8 WEINAN E AND STEPHAN WOJTOWYTSCH\nRemark 2.2.Without loss of generality, we can absorb all variation into the measu re ˆπand have\n|ˆa|=/ba∇⌈blf/ba∇⌈blB(P)almost everywhere. More specifically, note that fˆa,ˆπ=fˆa/ρ,ρ·ˆπfor any function ρ\nsuch thatρ>0 if ˆa>0 and/integraldisplay\nSdρ(w)ˆπ(dw) = 1.\nWe specify\nρ=|ˆa|\n/ba∇⌈blˆa/ba∇⌈blL1(ˆπ)⇒ˆa\nρ= sign(ˆa)/ba∇⌈blˆa/ba∇⌈blL1(ˆπ).\nIn particular, the Barron norm can equivalently be written as inf ˆπ,ˆa/ba∇⌈blˆa/ba∇⌈blLp(ˆπ)for anyp∈[1,∞].\nThe spherical graph representation is specific to positively homoge neous activation functions.\nIt is not per se useful to us directly, but it provides a link to the repr esentation of Barron\nfunctions by signed measures (Section 2.3). Note that different dis tributionsπmay give rise to\nthe amplitude function ˆ aand spherical measure ˆ π, so the link to dynamics through Wasserstein\ngradient flows is lost in this description and all following ones that are d erived from it.\n2.3.Signed measure on the sphere. As in Remark 2.2, all relevant information about the\ntuple (ˆa,ˆπ) is contained in the signed measure ˆ µ= ˆa·ˆπ. We set\nfˆµ(x) =/integraldisplay\nSdσ(wTx) ˆµ(dw)\nand observe that\n/ba∇⌈blˆµ/ba∇⌈blM=/integraldisplay\nSd1|ˆµ|(dw) =/integraldisplay\nSd|ˆa(w)|ˆπ(dw).\nHere/ba∇⌈bl · /ba∇⌈blMis the total variation norm on the space of Radon measures. Taking the infimum\nfirst on the left and then on the right, we find that\ninf\nˆµs.t.fˆµ=fP-a.e./ba∇⌈blˆµ/ba∇⌈blM≤ /ba∇⌈blf/ba∇⌈blB(P).\nOn the other hand, given a signed Radon measure ˆ µ/\\⌉}atio\\slash= 0 onSd, we define\nˆπ:=|ˆµ|\n/ba∇⌈blˆµ/ba∇⌈blM,ˆa=dˆµ\ndˆπ\nwhere the Radon-Nikodym derivative of ˆ µwith respect to ˆ πis well-defined since both parts ˆ µ±\nof the Hahn-decomposition of ˆ µare absolutely continuous with respect to |ˆµ|– see e.g. [Kle06,\nSections 7.4 and 7.5] for the relevant definitions and properties. Th en\nˆµ(U) =/integraldisplay\nUˆa(w)ˆπ(dw),/integraldisplay\nSdf(ˆw) ˆµ(dw) =/integraldisplay\nSdf(ˆw)ˆa(w)ˆπ(dw)\nfor all measurable sets U⊆Sdand all measurable functions f:Sd→R. 
In particular\nfˆπ,ˆa(x) =/integraldisplay\nSdσ(wTx)ˆa(w)ˆπ(dw) =/integraldisplay\nSdσ(wTx) ˆµ(dw) =fˆµ(x)\nfor allxand/integraldisplay\nSd|ˆa(w)|ˆπ(dw) =/integraldisplay\nSd1|ˆµ|(dw) =/ba∇⌈blˆµ/ba∇⌈blM.\nTaking the infimum on the left shows that\n/ba∇⌈blf/ba∇⌈blB(P)≤ /ba∇⌈blˆµ/ba∇⌈blM\nfor any admissible ˆ µ. As a consequence\n/ba∇⌈blf/ba∇⌈blB(P)= inf/braceleftbig\n/ba∇⌈blˆµ/ba∇⌈blM: ˆµsigned Radon measure on Sd, f=fˆµP−a.e./bracerightbig\n.\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 9\nThis perspective is particularly convenient with an eye towards varia tional analysis. Com-\npactness results in the space of Radon measures are much strong er here since we can restrict\nourselves to the compact parameter space Sd+1. Barron space is isometric to the quotient of the\nspace of Radon measures on the sphere Mby the closed subspace\nNP:={ˆµ∈ M|fˆµ= 0P−a.e.}.\nThus this perspective establishes an otherwise nontrivial result au tomatically.\nTheorem 2.3. B(P)/tildewide=M/NPis a Banach space.\nOn the other hand, the link to gradient flow dynamics is lost in this pictu re. Directly optimiz-\ning the measure µm=/summationtextm\ni=1aiδwirather than the weights ( ai,wi)m\ni=1was considered in [Bac17]\n(for more general activation functions with homogeneity α≥0) and found to be computationally\nunfeasible. The advantage of this perspective on optimization is tha t the mapµ→fµis linear,\nso common risk functionals are convex. The disadvantage is that op timization in a space of\nRadon measures is difficult in practice.\nAlso this perspective is most useful for homogeneous activation fu nctions.\n2.4.Signed measure on parameter space. We can generalize the parameter distribution\nrepresentation of Section 2.1 by allowing general signed Radon meas uresµin the place of ρ, i.e.\nfµ(x) =/integraldisplay\nR×Rd+1aσ(wTx)µ(da⊗dw).\nUnlike in Section 2.3, µis a signed measure on the whole space Rd+2here. This representation\ndoes not rely on the homogeneity of ReLU activation. We define a nor m\n/ba∇⌈blf/ba∇⌈bl′\nP= inf\n{µ|fµ=fP-a.e.}/integraldisplay\nR×Rd+1|a||w||µ|(da⊗dw)\nwhere|µ|=µ++µ−isthetotalvariationmeasureof µ. Itisimmediatetoseethat /ba∇⌈blf/ba∇⌈bl′\nP≤ /ba∇⌈blf/ba∇⌈blB(P)\nby comparisonwith the representationfor a signed measure on the sphere where µis restricted to\nthe seta=|w|= 1. We provethe oppositeinequalityby comparingtothe parameter distribution\nrepresentation.\nForλ∈R, denoteTλ:R×Rd+1→R×Rd+1,Tλ(a,w) = (λa,w). For a map ψ:X→Y\nbetween sets and a signed measure νonX, denote by ψ♯νthe push-forward measure on Y. Note\nthat/ba∇⌈blψ♯ν/ba∇⌈bl ≤ /ba∇⌈blν/ba∇⌈bland that equality holds for positive measures. Thus\nπ:=1\n2/bracketleftbigg1\n/ba∇⌈blµ+/ba∇⌈blT2/bardblµ+/bardbl\n♯µ++1\n/ba∇⌈blµ−/ba∇⌈blT−2/bardblµ−/bardbl\n♯µ−/bracketrightbigg\nsatisfiesfπ=fµand\n/integraldisplay\nR×Rd+1|a||w|π(da⊗dw) =/integraldisplay\nR×Rd+1|a||w||µ|(da⊗dw).\nTaking the infimum first on the left and then on the right, we obtain th e inverse inequality\n/ba∇⌈blf/ba∇⌈blB(P)≤ /ba∇⌈blf/ba∇⌈bl′\nP.\n2.5.Indexed particle perspective I. All representations of two-layer networks discussed\nabove were invariant under the natural symmetry\nf(x) =1\nmm/summationdisplay\ni=1asiσ(wT\nsix)\n10 WEINAN E AND STEPHAN WOJTOWYTSCH\nwheres∈Smis a permutation of the indices. We say that the particles are exchangable .\nNevertheless, in all practical applications particles ( a,w) are indexed by i∈ {1,...,m}. We now\ndevelop a parametrized perspective of neural networks. 
Note th at\nf(x) =1\nmm/summationdisplay\ni=1aiσ(wT\nix) =/integraldisplay1\n0aθσ(wT\nθx)dθ\nwhereaθ=akandwθ=wkfork−1\nm≤θ <k\nm. Using scaling invariance on finite networks, we\nmay assume that |w| ≡1 and obtain a uniform L1-bound ona. More generally, for a∈L1(0,1)\nandw∈L∞/parenleftbig\n(0,1);Sd/parenrightbig\n(ora,w∈L2) we define\nf(a,w)(x) =/integraldisplay1\n0aθσ(wT\nθx)dθ\nand\n/ba∇⌈blf/ba∇⌈blB′(P)= inf\n{(a,w):f=fa,w}/integraldisplay1\n0|aθ||wθ|dθ,B′(P) ={f∈C0,1\nloc(Rd) :/ba∇⌈blf/ba∇⌈blB′(P)<∞}.\nThis perspective is fundamentally different from the previous ones, and it is not immediately\nclear whether the spaces B(P) andB′(P) coincide. We prove this as follows.\nAssume that f∈ B′(P). Then\nf(x) =/integraldisplay1\n0aθσ(wT\nθx)dθ=/integraldisplay\nR×Rd+1aσ(wTx)/parenleftbig\n(¯a,¯w)♯L1|(0,1)/parenrightbig\n(da⊗dw)\nwhere (¯a,¯w)♯L1|(0,1)denotes the push-forward of one-dimensional Lebesgue measur e on the unit\ninterval along the map (¯ a,¯w) : (0,1)→Rd+2. ThusB′(P) is a subspace of B(P).\nBefore we prove the opposite inclusion, we recall an auxiliary result.\nLemma 2.4. (1) There exists a bijective measurable map φ: [0,1]d→[0,1].\n(2) Let¯πbe any probability measure on R. Then there exists a measurable map ψ: [0,1]→R\nsuch that ¯π=ψ♯L1.\nProof.Claim 1. For 1≤i≤d, we can write xi=/summationtext∞\nk=1αi\nk10−kwithαi\nk∈ {0,...,9}. The map\nai\nk:Q→ {0,...,9}ai\nk(x) =αi\nksatisfies\n(ai\nk)−1({α}) = [0,1]i−1×/uniondisplay\nβ1,...,βk−1∈{0,...,9}\nk−1/summationdisplay\nj=1βj10−j+α10−k,k−1/summationdisplay\nj=1βj10−j+(α+1)10−k\n×[0,1]d−i\nand is therefore measurable. Thus also the maps\nφm:Q→[0,1], φm(x) =m/summationdisplay\nk=0d/summationdisplay\ni=1αi\nk+1(x)10−(kd+i)\nand their pointwise limit\nφ:Q→[0,1], φ(x) =∞/summationdisplay\nk=0d/summationdisplay\ni=1αi\nk+1(x)10−(kd+i)\nare measurable. They are also bijective since each point is represen ted uniquely by its decimal\nrepresentation (since we excluded trailing 9s). If φ(x) =φ(y), then all coordinates of xandy\nhave the same decimal expansion and thus are the same point. On th e other hand, for z∈[0,1],\nit is easy to define x=φ−1(z).\nClaim 2. This is a well-known result in probability theory and used in numerical imp le-\nmentations to create random samples from distributions by drawing a random sample from the\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 11\nuniform distribution on (0 ,1) and applying a suitable transformation. The map ψ=χ−1for\nχ:R→[0,1],χ(z) = ¯π(−∞,z] satisfies the conditions. χis monotone increasing, but usually\nnot strictly. In this case, we choose the left-continuous version o f the derivative. For details, see\ne.g. [Kle06, Satz 1.104] and its proof. /square\nNow assume that f∈ B(P). Then we can describe fas a spherical graph, i.e. f(x) =/integraltext\nSda(w)σ(wTx)π(dw) for a probability measure πand an amplitude function a∈L1(π). We\nmay assume that ais defined on the whole space (e.g. by a≡0 outsideSd). Denote ˜φ=\nφ◦[1\n2(·+1)] : [−1,1]d→[0,1] and ¯π=˜φ♯π, ˆw: [0,1]→Q, ˆwθ=˜φ−1(θ). By definition we have\n/integraldisplay\n[0,1]a(ˆwθ)σ(ˆwθx)¯π(dθ) =/integraldisplay\n[−1,1]da(w)σ(wTx)π(dw)\n=/integraldisplay\nSda(w)σ(wTx)π(dw).\nThe measure ˆ πis highly concentrated and we use the second claim from Lemma 2.4 to n ormalize\nit. Namely, take ψ: [0,1]→[0,1] as described. 
Then\nf(x) =/integraldisplay\n[0,1]a(ˆwθ)σ(ˆwθx)¯π(dθ)\n=/integraldisplay\n[0,1]a(ˆwθ)σ(ˆwθx)ψ♯L1(dθ)\n=/integraldisplay1\n0a(ˆwψ(θ))σ(ˆwT\nψ(θ)x)dθ.\nIn particular, we note that ˆ wψ(θ)= (˜φ−1◦ψ)(θ)∈Sdalmost surely and set aθ=a(˜φ−1◦ψ(θ)),\nwθ=˜φ−1◦ψ(θ). We thus find that B(P)⊆ B′(P). The same argument implies that /ba∇⌈bl·/ba∇⌈blB(P)=\n/ba∇⌈bl·/ba∇⌈blB′(P).\nRemark 2.5.Similarly as in Remark 2.2, we can reparametrize the maps a,wby\n˜aθ=aρ(θ)ρ′(θ),˜wθ=wρ(θ)\nfor any diffeomorphism ρ: (0,1)→(0,1) and in particular achieve that |˜a|is constant.\nRemark 2.6.While elementary, the construction made use of highly discontinuous measurable\nmaps and reparametrizations. If we allowed general probability mea sures on [0,1], we could fix\nthe mapwinstead to be a (H¨ older-continuous) space-filling curve in Sd.\nThe indexed particle representation is easy to understand, but ha s clear drawbacks from the\nvariational perspective. The norm does not control the regularit y of the map w, which means\nthat at most, we obtain weak compactness for wunder norm bounds. After applying σ, we\ncannot pass to the limit in fan,wneven in weak norms, and variational results like the Inverse\nApproximation Theorem 3.7 cannot be obtained in the indexed particle representation. Much\nlike the parameterdistributionrepresentation, indexedparticlesa reonthe otherhand convenient\nfrom a dynamic perspective.\nLemma 2.7. Letf(x) =1\nm/summationtextm\nk=1akσ(wT\nkx)where the parameters Θ ={ak,wk}m\nk=1evolve\nunder the time-rescaled gradient flow\n˙Θ =−m∇R(Θ),R(Θ) =/integraldisplay\nℓ(fΘ(x),y)P(dx⊗dy)\nfor a sufficiently smooth and convex loss function ℓ. Then the functions\na(t,θ) =ak(t)andw(t,θ) =wk(t)fork−1\nm≤θ<k\nm\n12 WEINAN E AND STEPHAN WOJTOWYTSCH\nevolve by the L2-gradient flow of\nR(a,w) =/integraldisplay\nℓ(f(a,w)(x),y)P(dx⊗dy).\nWe assume that the gradient flow for finitely many parameters exist s. This can be established\nby the Picard-Lindel¨ off theorem if σis sufficiently smooth. For ReLU activation, existence of a\nclassical gradient flow is guaranteed if Pis a suitable population risk measure, see [Woj20].\nProof.All functions lie in L2for all times since they are given by finite step functions. Let\nψ∈L2[0,1]. We compute that\nd\ndε/vextendsingle/vextendsingle/vextendsingle/vextendsingle\nt=0R(a+εψ,w) =/integraldisplay\n(∂1ℓ)(f(a,w)(x),y)d\ndε/vextendsingle/vextendsingle/vextendsingle/vextendsingle\nε=0f(a+εψ,w)P(dx⊗dy)\n=/integraldisplay\n(∂1ℓ)(f(a,w)(x),y)f(ψ,w)P(dx⊗dy)\n=/integraldisplay/parenleftbigg/integraldisplay\n(∂1ℓ)(f(a,w)(x),y)σ(wT\nθx)P(dx⊗dy)/parenrightbigg\nψ(θ)dθ (2.4)\nδaR(a,w) =/integraldisplay\n(∂1ℓ)(f(a,w)(x),y)σ(wT\nθx)P(dx⊗dy)\nsincef(a,w)depends on alinearly. Fork−1\nm≤θ <k\nm, this is precisely mtimes the gradient of\nR(Θ) with respect to ak. The same result holds for wk. The key point is that the gradient flow\nis defined entirely pointwise and the only interaction between a(θ1),a(θ2) forθ1/\\⌉}atio\\slash=θ2(orw, etc.)\nis through the function f(a,w). /square\nThe gradient flow has no smoothing effect and preserves step func tions for all time. Note that\nthe normalization |w| ≡1 is not preserved under the gradient flow.\n2.6.Indexed particle perspective II. Wechosetheunitinterval(0 ,1)equippedwithLebesgue\nmeasure as an ‘index space’ for particles and showed that it is expre ssive enough to support any\nBarron function. We also demonstrated that this perspective can be linked to neural network\ntraining. 
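As a concrete illustration of that link, the following sketch (Python/NumPy; the target, width, sample set and step size are illustrative choices of ours, with an empirical measure standing in for the population risk measure P) runs the time-rescaled gradient descent of Lemma 2.7 for a small network. The arrays a and W hold exactly the values of the step functions a(θ), w(θ) on the intervals [(k−1)/m, k/m), and each particle (a_k, w_k) interacts with the others only through the function f_(a,w), which is the pointwise structure used in the proof of Lemma 2.7:

import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

d, m, n = 2, 50, 64
X = np.hstack([rng.uniform(-1.0, 1.0, (n, d)), np.ones((n, 1))])   # samples (x_j, 1)
y = np.linalg.norm(X[:, :d], axis=1)                               # target f*(x) = |x|, a Barron function
a, W = rng.standard_normal(m), rng.standard_normal((m, d + 1))     # values of the step functions a(theta), w(theta)

lr = 0.02
for step in range(3000):
    pre = X @ W.T                                  # w_k^T (x_j, 1), shape (n, m)
    f = relu(pre) @ a / m                          # f_(a,w)(x_j)
    res = 2.0 * (f - y) / n                        # derivative of the empirical L2 risk with respect to f
    grad_a = relu(pre).T @ res / m
    grad_W = ((pre > 0.0) * res[:, None]).T @ X * a[:, None] / m
    a -= lr * m * grad_a                           # the factor m is the time rescaling in Lemma 2.7
    W -= lr * m * grad_W
print(np.mean((relu(X @ W.T) @ a / m - y) ** 2))   # empirical risk after training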
In this section, we sketch a different indexed particle appr oach where the index space\nmay depend on the Barron function, but gradient flow training can b e incorporated in a very\nnatural fashion.\nLetπ0be a probability distribution on Rd+2and (¯a,¯w) :Rd+2→R×Rd+1measurable\nfunctions. Then we can define\nfπ0;¯a,¯w(x) =/integraldisplay\nRd+2¯a(a,w)σ/parenleftbig\n¯w(a,w)Tx/parenrightbig\nπ0(da⊗dw),\ni.e. we consider particles (¯ a,¯w) indexed by ( a,w). If ¯a(a,w) =aand ¯w(a,w) =w, this merely\nrecoverstheparameterdistributionperspective. Whileuninteres tingfromthestaticalperspective\nof function representation, it allows a different view of gradient flow training. Namely, for fixed\nπ0we can consider the following ODEs in L2(π0;Rd+2):\n(2.5)/braceleftbiggd\ndt/parenleftbig\n¯a,¯w/parenrightbig\n(a,w;t) =−∇(¯a,¯w)R/parenleftbig\n¯a(t),¯w(t)/parenrightbig\nt>0\n(¯a,¯w) = (a,w) t= 0\nwhere\nR(¯a,¯w) =/integraldisplay\nRd/vextendsingle/vextendsinglefπ0;¯a,¯w−f∗/vextendsingle/vextendsingle2(x)P(dx)\nand∇(¯a,¯w)describes the variational gradient\n/parenleftbig\n∇(¯a,¯w)R(¯a,¯w)/parenrightbig\n(a,w) =/integraldisplay\nRd+2/parenleftbig\nfπ0;¯a,¯w−f∗/parenrightbig\n(x)/parenleftbigg\nσ/parenleftbig\n¯w(a,b)Tx/parenrightbig\n¯a(a,w)σ′(¯w(a,b)Tx/parenrightbig\nx/parenrightbigg\nP(dx)\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 13\nanalogous to (2.4).\nLemma 2.8. Letπ0be a probability distribution with finite second moments on Rd+2. Let\n(1)(¯a,¯w)be a solution to (2.5)for fixedπ0and\n(2)πbe a solution of the Wasserstein gradient flow\n(2.6) ˙ π= div/parenleftbig\nρ∇V/parenrightbig\n, V(a,w;π) =/integraldisplay\nRd/parenleftbig\nfπ−f∗/parenrightbig\n(x)aσ(wTx)P(dx)\nof(2.3)with initial condition π0.\nThenπ(t) = (¯a,¯w)(t)♯π0for alltand in particular fπ(t)=fπ0;¯a,¯w.\nThe content of (2.8) is used in the proofs of [CB18] and [Woj20], but n ot stated explicitly.\nProof.Let (¯a,¯w) be a solution to (2.5) and define\nπ(t) = (¯a,¯w)(t)♯π0, X(a,w;π) :=∇(a,w)/integraldisplay\nRd/parenleftbig\nfπ−f∗/parenrightbig\n(x)aσ(wTx)P(dx).\nNote thatfπ=fπ0;¯a,¯wby definition of the push-forward. By [Amb08, Proposition 4], we se e\nthatπsolves the continuity equation (2.6), which coincides with the Wasser stein gradient flow\nofπ[CB18, Appendix B]. /square\nRemark 2.9 (Eulerian vs Lagrangian descriptions) .The different perspectives on the gradient\nflow training of infinitely wide two-layer neural networks have an ana logue in classical fluid\nmechanics:\n(1)Wasserstein gradient flow. The parameter space Rd+2remains fixed, particles are re-\nferred to by their current position ( a,w). The distribution of particles in space πevolves\nover time. This is an Eulerian perspective.\n(2)L2-gradient flow. Particles (¯a,¯w) are specified by their initial position (¯ a,¯w) = (a,w)\nand tracked over time. This is the Lagrangian perspective.\nRemark 2.10.According to Lemma 2.8, the gradient flow training of infinitely wide neu ral\nnetworks can be studied in terms of L2-gradient flows as well as Wasserstein gradient flows. The\nsecond perspective is conceptually simpler and has been used succe ssfully to prove the major\nconvergence results on gradient flow training in [CB18] and [Woj20 ] (which were formulated\nin the second framework). 
The link between the two perspectives is through the method of characteristics for the continuity equation.
While the interpretation of gradient flow training as a Wasserstein gradient flow is appealing, we note that the link to optimal transport theory has not been exploited in depth. The crucial mathematical analysis so far has been conducted on the level of the ODE (2.5) and interpreted through the PDE (2.6).
Remark 2.11. A similar approach has been pursued in [NP20] for multi-layer networks.
2.7. Summary. We briefly summarize the different ways to parametrize Barron functions which we described above.

Perspective             | Parametrizing object                                                     | Optimization
Parameter distribution  | probability distribution π on R^{d+2}                                    | Wasserstein gradient flow
Spherical graph         | coefficient function ˆa: S^d → R and first-layer distribution ˆπ on S^d  | see below
Signed measure          | signed Radon measure ˆµ on S^d or µ on R^{d+2}                           | unrelated to gradient flows
Indexed particles       | coefficient functions (a, w, b): (Ω, A, ¯π) → R^{d+2}                    | L^2-gradient flow

The signed measure representation is crucial in our derivation of global and pointwise properties of Barron functions in Sections 4 and 5. It is, however, inconvenient from the perspective of gradient-flow based parameter optimization. A natural approach to optimization in this perspective via the Frank-Wolfe algorithm has been discussed in [Bac17].
Gradient-flow based optimization of finite neural networks has direct analogues in the 'parameter distribution' and 'indexed particle' perspectives.
A natural optimization algorithm for spherical graphs is the L^2-gradient flow for the coefficient function ˜a which leaves the coefficients of the first layer frozen. This algorithm recovers random feature models rather than two-layer neural networks. Without proof, we claim that if the first-layer distribution π0 follows a Wasserstein gradient flow on the sphere, this corresponds to a gradient flow in which the first layer is norm-constrained. If the distribution π0 were allowed to evolve on the entire parameter space R^{d+1} of (w, b), then we conjecture that we could recover a mixed perspective between indexed particles (second layer) and parameter distribution (first layer).
In the indexed particle perspective, we only considered the case that the index space is R^{d+2} equipped with a parameter distribution π0, or the unit interval equipped with Lebesgue measure.
We conclude with a summary of Barron space from the perspective of function approximation. In all cases, the object in parameter space which represents a given function is non-unique, and an infimum has to be taken in the definition of the norm. The probability integrals (i.e. the integrals in the first, second and last row) can be written as expectations.

Perspective             | Representation                              | Barron norm
Parameter distribution  | ∫_{R^{d+2}} a σ(w^T x) π(da⊗dw)             | ∫_{R^{d+2}} |a| |w| π(da⊗dw)
Spherical graph         | ∫_{S^d} ˆa(w) σ(w^T x) ˆπ(dw)               | ‖ˆa‖_{L^p(ˆπ)}, p ∈ [1, ∞]
Signed measure          | ∫_{S^d} σ(w^T x) ˆµ(dw)                     | ‖ˆµ‖_{M(S^d)}
                        | ∫_{R^{d+1}} σ(w^T x) µ(dw)                  | ∫_{R^{d+1}} |w| |µ|(dw)
Indexed particles       | ∫_Ω a_θ σ(w_θ^T x) ¯π(dθ)                   | ∫_Ω |a_θ| |w_θ| ¯π(dθ)

3. Properties of Barron space
By Theorem 2.3, we know that B(P) is a Banach space. In Section 4, we will characterize B(P) up to isometry in two special cases and conclude that generally, B(P) is neither reflexive nor separable.
Here we will discuss the relationship of Barron space with classical function spaces on\nthe one hand and finite two-layer networks on the other. Most res ults in this section are known\nin other places; some are reproved to illustrate the power of differe nt parametrizations.\n3.1.Relationship with other function spaces. We briefly explore the relationship of Barron\nspace and more classical function spaces. We begin by the relations hip to the Barron class X\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 15\ndiscussed in the introduction. Denote by ˆfthe Fourier transform of a function fand\n/ba∇⌈blf/ba∇⌈blX=/integraldisplay\nRd/vextendsingle/vextendsingleˆf(ξ)/vextendsingle/vextendsingle/bracketleftbig\n1+|ξ|2/bracketrightbig\ndξ, X =/braceleftbig\nf∈L1\nloc(Rd) :/ba∇⌈blf/ba∇⌈blX<∞/bracerightbig\n.\nClearly,Xis a Banach space. The exponent in the weight |ξ|2cannot be lowered as shown in\n[CPV20, Proposition 7.4]. We recall a classical result in modern terms, which can be found in\n[Bar93, Section IX, point 15] and [KB18, Theorem 2].\nTheorem 3.1. (1) Lets>d\n2+2. ThenHs(Rd)embeds continuously into X.\n(2) Assume that spt(P)is bounded. Then Xembeds continuously in B(P)with constant\n4 supx∈spt(P)|x|.\nThe second statement is only implicit in the proof of [Bar93, Propositio n 1], which proceeds\nby showing that\nX⊆conv(F∞) =B(P).\nSo every sufficiently smooth function is Barron, see also [Woj20, App endix B.3] for the proof in\nthe context of fractional Sobolev spaces.\nRemark3.2.ThesmoothnessrequiredtoshowthatafunctionisBarronincreas eswithdimension.\nDepending on the purpose, the space Hs(Rd) can be fairly large when s>d\n2+2. For example,\nthere areHs-functions for which Sard’s theorem fails. Thus we conclude that Sa rd’s theorem\ndoes not hold in Barron space.\nOn the other hand, every Barron function is at least Lipschitz cont inuous.\nTheorem 3.3. Assume that Pis a Borel probability measure with finite first moment. Denot e\nbyC0,1(P)the space of (possibly unbounded) Lipschitz functions with the norm\n/ba∇⌈blf/ba∇⌈blC0,1(P)=/ba∇⌈blf/ba∇⌈blL1(P)+sup\nx/ne}ationslash=y|f(x)−f(y)|\n|x−y|.\nThenB(P)embeds continuously into C0,1(P).\nProof.We represent f=fµby a signed Radon measure on Sd. Then\n|f(x)−f(y)|=/vextendsingle/vextendsingle/vextendsingle/vextendsingle/integraldisplay\nSdσ(wTx)−σ(wTy)µ(dw)/vextendsingle/vextendsingle/vextendsingle/vextendsingle\n≤/integraldisplay\nSd|wT(x−y)||µ|(dw)\n≤ |x−y|/ba∇⌈blµ/ba∇⌈blM.\nBy taking the infimum over µ, we find that\nsup\nx/ne}ationslash=y|f(x)−f(y)|\n|x−y|≤ /ba∇⌈blf/ba∇⌈blB(P)\nfor allf∈ B(P). Note that fµis defined on the whole space, so\n|fµ(0)|=/integraldisplay\nSdσ(wd+1)µ(dw)≤ /ba∇⌈blf/ba∇⌈blB(P)\nis well-defined even if 0 /∈sptP. Thus\n/ba∇⌈blf/ba∇⌈blL1(P)≤/integraldisplay\nRd|fµ(0)|+|fµ(x)−fµ(0)|P(dx)\n≤ /ba∇⌈blf/ba∇⌈blB(P)/bracketleftbigg\n1+/integraldisplay\nRd|x|P(dx)/bracketrightbigg\n.\n/square\n16 WEINAN E AND STEPHAN WOJTOWYTSCH\nRemark 3.4.IfPhas bounded support, C0,1(P) =C0,1(sptP) with equivalent norms, where\n/ba∇⌈blf/ba∇⌈blC0,1(K)=/ba∇⌈blf/ba∇⌈blL∞(K)+sup\nx/ne}ationslash=y|f(x)−f(y)|\n|x−y|\nfor any compact set K⊂Rd. Even if Phas unbounded support, we can consider an equivalent\nnorm\n/ba∇⌈blf/ba∇⌈bl′\nC0,1=|f(a)|+sup\nx/ne}ationslash=y|f(x)−f(y)|\n|x−y|\nfor any fixed a∈spt(P).\nRemark 3.5.Barron space also embeds into the compositional (or ‘flow-induced’) function class\nfor infinitely deep ResNets [EMW19a, Theorem 9]. 
In the statement o f [EMW19a, Theorem\n9], a suboptimal inequality of the form /ba∇⌈blf/ba∇⌈blcomp≤2/ba∇⌈blf/ba∇⌈blB+1 is proved. Note that this can be\nimproved to /ba∇⌈blf/ba∇⌈blcomp≤2/ba∇⌈blf/ba∇⌈blBby a simple scaling argument.\nRemark 3.6.B(P) has favorable properties in the context of statistical learning th eory. Namely,\nthe unit ball in B(P) has low Rademacher complexity [EMW19a, Theorem 6] and thus low\ngeneralization error [EMW18, Theorem 4.1].\n3.2.Relationship to two-layer networks. In Section 2, we derived eight different repre-\nsentations for general Barron functions. While Barron space is a n atural model for infinitely\nwide two-layer networks, it is not the only possible choice and parame ter initialization is key in\ndetermining the correct limiting structure.\nThe generalization bounds mentioned in Remark 3.6 are one reason wh y the path-norm on a\nneural network is considered in the first place; another is that it is e asy to bound in terms of the\nnetwork weights. The direct and inverse approximation theorems ( Theorems 3.7 and 3.8 below)\nestablish that the correct space to consider the infinite neuron limit in under this norm is Barron\nspace. Theorem 3.11 shows that the space is stable under gradient -flow dynamics.\nTheorem 3.7 (Compactness and Inverse Approximation) .Letfm(x) =/summationtextNm\ni=1am\niσ/parenleftbig\n(wm\ni)Tx/parenrightbig\nfor someNm<∞and assume that/summationtextNm\ni=1|ai||wi| ≤1for allm∈N.\n(1) IfPhas finitep-th moments, then there exists a subsequence mk→ ∞andf∈ B(P)\nsuch thatfmk→fstrongly in Lq(P)for allq<p.\n(2) IfPhas compact support, then the convergence even holds in C0,α(sptP)for allα<1.\nThus Barron space includes the limiting objects of norm-bounded fin ite neural networks, i.e.\nBarron space is large enough to include all relevant models for infinite ly wide neural networks\n(with finite path-norm). The theorem was originally proved in [EMW19a , Theorem 5]. We\nreprove it here using the signed Radon measure representation.\nProof.General set-up. Without loss of generality, we assume that wm\ni/\\⌉}atio\\slash= 0 for all i,m. We\ncan writefm(x) =/integraltext\nSdσ(wTx)µm(dw) whereµm=/summationtextNm\ni=1am\ni|wm\ni|δwm\ni/|wm\ni|. The norm bound\ncorresponds to the estimate /ba∇⌈blµm/ba∇⌈blM≤1, so by the compactness theorem for Radon measures\n[EG15, Section 1.9], there exist a subsequence of µm(relabelled) and a finite Radon measure µ\nonSdsuch thatµm∗⇀µ, i.e.\n/integraldisplay\ng(w)µm(dw)→/integraldisplay\ng(w)µ(dw)∀g∈C0(Sd).\nIn particular, fµm→fµpointwise almost everywhere.\nCompact support. If spt(P) is bounded, then fm=fµmis bounded in C0,1(sptP), since\nB(P)֒−→C0,1(sptP) continuously. Since C0,1(sptP)֒−→C0,α(sptP) compactly for all α <1,\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 17\nwe find that there exists f∗∈C0,1(sptP) such that fm→f∗strongly in C0,α(sptP). Clearly\nf∗=fµ.\nBounded moments. If thep-th moment of Pis bounded, then fmis uniformly bounded in\nLp(P) due to the bound on fµm(0) and the uniform Lipschitz condition. For all R>0, we find\nthat/integraldisplay\nRd|fm−fµ|qP(dx)≤/integraldisplay\nRdmin{|fm−fµ|,R}qP(dx)+/integraldisplay\nRdmax{|fm−fµ|−R,0}qP(dx)\n≤/integraldisplay\nRdmin{|fm−fµ|,R}qP(dx)+2Rq−p/integraldisplay\nRd/bracketleftbig\n1+|x|/bracketrightbigpP(dx)\nby Chebyshev’s inequality. The first term on the right converges to 0 asm→ ∞due to the\ndominated convergence theorem while the second term converges to zero when we take R→ ∞\nin a second step. 
/square\nConversely, any function in B(P) can be approximated by finite networks in L2(P).\nTheorem 3.8 (Direct Approximation) .Assume that Phas finite second moments and let f∈\nB(P). Then for any m∈Nthere existai,wisuch that\n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoublef−m/summationdisplay\ni=1aiσ(wT\ni·)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nL2(P)≤2/ba∇⌈blf/ba∇⌈blB(P)/parenleftbig/integraltext\nRd|x|2+1P(dx)/parenrightbig1/2\n√m,m/summationdisplay\ni=1|ai||wi| ≤2/ba∇⌈blf/ba∇⌈blB(P).\nIn this sense, Barron space is the smallest Banach space which cont ains all finite neural\nnetworks. The result can be deduced from the Maurey-Barron-J ones Lemma in Hilbert space\ngeometry [Bar93, Theorem 1 and Lemma 1] or Monte-Carlo integrat ion [EMW19a, Theorem 4].\nBoth proofs use probabilistic arguments and the normalized repres entation\nfm(x) =1\nmm/summationdisplay\ni=1aiσ(wT\nix), f(x) =/integraldisplay\nRd+2aσ(wTx)π(dw)\nwherethevariables( ai,wi)aredrawniidfromthedistribution π. Ifthesupportofthedatadistri-\nbutionPis compact, the direct approximation theorem can be improved from L2-approximation\ntoL∞-approximation, and the rate m−1/2can be improved to/radicalbig\nlog(m)m−1/2−1/d[KB18, The-\norem 1].\nRemark 3.9.Concerning approximation by a finiteset, the metric entropy of the unit ball of\nBarron space B(P) inL2(P) has been calculated in [SX21, Theorem 1] in the case where Pis the\nuniform distribution on the unit ball in Rd. In particular, the authors find that for n∈Nthere\nexistf1,...,fnBarron functions such that\n/ba∇⌈blf/ba∇⌈blB(P)≤1⇒ ∃ 1≤i≤ns.t./ba∇⌈blf−fi/ba∇⌈blL2(P)≤Cm−1\n2−3\nd\nfor some constant Cwhich may depend on dbut notn. The exponent −1\n2−3\ndis sharp.\nRemark 3.10.We can represent finite neural networks equivalently as f(x) =/summationtextm\ni=1aiσ(wT\nix)\norf(x) =1\nm/summationtextm\ni=1a′\niσ(wT\nix) witha′\ni=mai.\nThe second expression already resembles a Riemann sum or Monte-C arlointegral, so we easily\npassed to the infinite neuron limit in the parameter distribution repre sentation in Section 2.1.\nThe first expression on the other hand is an unnormalized sum, and a t first glance it appears\nthat the limit should be a countable series as is the case for spaces of polynomials, Fourier series\nor eigenfunction expansions. Clearly this heuristic fails, since both r epresentations induce the\nsame continuum limit given their natural path norms. The difference t o the usual setting is\n18 WEINAN E AND STEPHAN WOJTOWYTSCH\nthat the sum is not an expansion in a fixed basis. A better way to pass to the limit in the\nunnormalized sum representation is given in the proof of Theorem 3.7 .\nThe class of countably wide two-layer networks\n/hatwideF∞=/braceleftigg∞/summationdisplay\ni=1aiσ(wT\nix)/vextendsingle/vextendsingle/vextendsingle/vextendsingle∞/summationdisplay\ni=1|ai||wi|<∞/bracerightigg\nis a closed subspace of B(P) since the strong limit of a sequence of measures µn, each of which\nis given by a sum of countably many atoms, is also a sum of countably ma ny atomic measures.\nThe closure of the unit ball of /hatwideF∞in theL2(P)-topology is the unit ball of B(P) by the inverse\nand direct approximation theorems, which is one reason why we pref erB(P) over/hatwideF∞.\nAnother reason is this: Each function σ(wT·) = max{wTx+wd+1,0}fails to be differentiable\nalong the space {(x,1) :wTx+wd+1= 0}. 
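This loss of differentiability is easy to observe numerically. The following sketch (Python/NumPy, with an arbitrary choice of (w, b) and of a point on the singular hyperplane) evaluates one-sided difference quotients of a single neuron and recovers the jump |⟨w, v⟩| that appears later in Lemma 5.7:

import numpy as np

relu = lambda z: np.maximum(z, 0.0)

w, b = np.array([1.0, -2.0]), 0.5
f = lambda x: relu(w @ x + b)

x0 = np.array([0.5, 0.5])                # w @ x0 + b = 0, so x0 lies on the singular hyperplane
v = np.array([1.0, 0.0])                 # direction with w @ v = 1
h = 1e-6
right = (f(x0 + h * v) - f(x0)) / h      # one-sided derivative from the right: sigma(<w, v>) = 1
left = (f(x0) - f(x0 - h * v)) / h       # one-sided derivative from the left: -sigma(-<w, v>) = 0
print(right, left, right - left)         # the jump equals |<w, v>| = 1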
It is easy to see that a function in /hatwideF∞is either affine\nlinear (if the singular sets coincide and the discontinuities cancel out ) or fails to be C1-smooth.\nIn particular, the Barron criterion fails for all countably wide two-la yernetworks: /hatwideF∞∩X={0}.\nFor those reasons we consider Barron space the correct functio n space for two-layer neural\nnetworks with controlled path norms. In addition, Barron space is s table under gradient flow\ntraining in the following sense. Recall that in the ‘mean field’ regime, pa rameter optimization\nfor two-layer networks is described by Wasserstein gradient flows , see [CB18, Proposition B.1] or\n[Woj20, Appendix A].\nTheorem 3.11. [Woj20, Lemma 3.3] Letπ0be a parameter distribution on Rd+2such that\nN0:=/integraldisplay\nRd+2a2+|w|2\nℓ2π0(da⊗dw)<∞.\nAssume that πtevolves by the 2-Wasserstein gradient flow of a risk function al\nR(π) =/integraldisplay\nRd×Rℓ/parenleftbig\nfπ(x),y/parenrightbig\nP(dx⊗dy)\nwhereℓis a sufficiently smooth convex loss function. Then there exis ts¯c>0such that\n¯c/ba∇⌈blfπt/ba∇⌈blB(P)≤/integraldisplay\nRd+2a2+|w|2π0(da⊗dw)≤2/bracketleftbig\nN0+R(π0)t/bracketrightbig\n∀t>0\nandlimsupt→∞t−1/ba∇⌈blfπt/ba∇⌈blB(P)= 0.\nThe constant ¯ cdepends on the equivalence constant between the Euclidean norm o nRdand\nthe norm on w, which is chosen to be dual to the norm which we consider on data spa ceRd(i.e.\nin thex-variables). Consequences of this result are explored in [WE20].\nRemark3.12.Thereareotherfunctionspacesfortwo-layernetworks. Inahig hlyoverparametrized\nscaling regime and given a suitable initialization, the gradient descent d ynamics of network pa-\nrameters are determined by an infinitely wide random feature model, the ‘neural tangent kernel’.\nEven at initialization, parameters are usually chosen such that the p ath norm of a two-layer\nnetwork with mneurons scales like√m. Instead,a,ware chosen randomly in such a way that\nE/bracketleftigg/summationdisplay\ni|ai|2|wi|2/bracketrightigg\n= 2\n(He initialization).\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 19\n4.Special cases\n4.1.One-dimensional Barron functions. Recall that B(P) does not depend on P, but only\nthe collection of P-null sets. Since Barron functions are (Lipschitz-)continuous, w e observe that\nfµ=fνP-almost everywhere if and only if fµ≡fνon spt(P). For a closed set Awe therefore\ndenoteB(A) =B(P) for any Psuch that spt( P) =A.\nThe first example was originally given in [EW21, Remark A.5]. It shows in a v ery broad sense\nthat one-dimensional Barron space is the largest space with a weak second-order structure.\nExample 4.1.B[0,1] is the space of functions whose first derivative is in BV(0,1)/whose distri-\nbutional derivative is a Radon measure on [0 ,1]. The norm\n/ba∇⌈blf/ba∇⌈bl′=|f(0)|+|f′(0)|+/ba∇⌈blf′′/ba∇⌈blM[0,1]\nis equivalent to /ba∇⌈bl·/ba∇⌈blB[0,1].\nProof.Inclusion in Barron space. Assume for the moment that f∈C2[0,1]. Then\nf(x) =f(0)+/integraldisplayx\n0f′(ξ)dξ\n=f(0)+/integraldisplayx\n0f′(0)+/integraldisplayξ\n0f′′(s)dsdξ\n=f(0)+f′(0)x+/integraldisplayx\n0(x−ξ)f′′(ξ)dξ\n=f(0)σ(1)+f′(0)σ(x)+/integraldisplay1\n0f′′(ξ)σ(x−ξ)dξ\nand thus /ba∇⌈blf/ba∇⌈blB[0,1]≤ /ba∇⌈blf/ba∇⌈bl′. The result holds by approximation also if the second derivative is\nmerely a measure.\nOpposite inclusion. 
Iff∈ B[0,1], there exists a Radon measure µonS1such that\nf(x) =/integraldisplay\nS1σ(wx+b)µ(dw⊗db)\n=µ({0,1)})σ(1)+/integraldisplay\n{w>0}σ/parenleftbiggw\n|w|x+b\n|w|/parenrightbigg\n|w|µ(dw⊗db)+/integraldisplay\n{w<0}σ/parenleftbiggw\n|w|x+b\n|w|/parenrightbigg\n|w|µ(dw⊗db)\n=µ({0,1)})σ(1)+/integraldisplay\nRσ(x+˜b)˜µ1(d˜b)+/integraldisplay\nRσ(−x+˜b)µ2(d˜b)\nwhereµ1,2=T♯µis the push-forward of the measure |w| ·µon the domain w >0 orw <0\nrespectively along the map\nT:R2\\ →R,(w,b)/ma√sto→b\n|w|.\nWe note that or x∈[0,1] we have σ(x+˜b) = 0 if˜b<−1 and\n/integraldisplay\nRσ(x+˜b) ˜µ1(d˜b) =/integraldisplay0\n−1σ(x+˜b) ˜µ1(d˜b)+/integraldisplay∞\n0σ(x+˜b) ˜µ1(d˜b)\n=/integraldisplay0\n−1σ(x+˜b) ˜µ1(d˜b)+/parenleftbigg/integraldisplay∞\n01˜µ(d˜b)/parenrightbigg\nx+/parenleftbigg/integraldisplay∞\n0˜b˜µ(d˜b)/parenrightbigg\n/integraldisplay\nRσ(−x+˜b)µ2(d˜b) =/integraldisplay1\n0σ(−x+˜b)µ2(d˜b)+/parenleftbigg/integraldisplay0\n−∞1˜µ2(d˜b)/parenrightbigg\nx+/parenleftbigg/integraldisplay0\n−∞˜b˜µ2(d˜b)/parenrightbigg\n.\n20 WEINAN E AND STEPHAN WOJTOWYTSCH\nWe can ignore the linear terms when computing the (distributional) se cond derivative. We claim\nthatf′′\nµ= ˜µ1+(−id)♯˜µ2. This is easily verified formally by noting that σ′′=δ, i.e.σis a Green’s\nfunction for the Laplacian in one dimension.\nMore formally, take g∈C∞\nc(0,1) and use Fubini’s theorem and integration by parts to obtain\n/integraldisplay1\n0/parenleftbigg/integraldisplay1\n0σ(−x+˜b)µ2(d˜b)/parenrightbigg\ng′′(x)dx=/integraldisplay1\n0/parenleftigg/integraldisplay˜b\n0(˜b−x)g′′(x) dx/parenrightigg\n˜µ2(d˜b)\n=/integraldisplay1\n0/parenleftigg\n0−/integraldisplay˜b\n0(−1)g′(x)dx/parenrightigg\n˜µ2(d˜b)\n=/integraldisplay1\n0g(˜b)µ2(d˜b).\nThe boundary terms vanish since g(0) =g′(0) = 0 and ˜b−˜b= 0. The same argument can be\napplied to the second term. We have shown more generally that |f(0)|+|f′(0)| ≤C/ba∇⌈blf/ba∇⌈blB[0,1]\nand finally observe that\n/ba∇⌈blf′′/ba∇⌈blM[0,1]≤ /ba∇⌈bl˜µ2/ba∇⌈blM[0,1]+/ba∇⌈bl˜µ1/ba∇⌈bl[−1,0]≤/integraldisplay1\n0/radicalig\n1+˜b2/vextendsingle/vextendsingle˜µ2/vextendsingle/vextendsingle(d˜b)+/integraldisplay0\n−1/radicalig\n1+˜b2/vextendsingle/vextendsingle˜µ1/vextendsingle/vextendsingle(d˜b)≤ /ba∇⌈blµ/ba∇⌈bl.\nTaking the infimum over µ, we find that also /ba∇⌈blf′′/ba∇⌈blM[0,1]≤ /ba∇⌈blf/ba∇⌈blB[0,1]. /square\nIn particular, since the space of Radon measures is neither separa ble nor reflexive, we deduce\nthat generally B(P) is neither separable nor reflexive.\nRemark 4.2.The same argument shows that B(R) is isomorphic to the space of functions whose\nsecond derivatives are finite Radon measures with finite first momen ts and the norm\n/ba∇⌈blf/ba∇⌈bl′\nB(R)=|f(0)|+|f′(0)|+/integraldisplay\nR/radicalbig\n1+b2|f′′|(db).\nIn particular, non-constant periodic functions are never in B(R).\nRemark 4.3.BV(0,1) embeds into L∞(0,1). Barron space is a proper subspace of the space\nof Lipschitz functions (function whose first derivative lies in L∞) and – heuristically – we can\nimagine it to be roughly as large in the space of Lipschitz functions as BVis inL∞.\nRemark 4.4.B[0,1] is an algebra (i.e. if f,g∈ B[0,1] then also fg∈ B[0,1]). This is generally\nnot true, see Remark 5.16.\n4.2.Positively one-homogeneous Barron functions. It has been recognized since [Bac17]\nthat the space of positively homogeneous Barron functions is signifi cantly easier to understand\nthan full Barron space.\nDenote by RPd−1thed−1-dimensional real projective space, i.e. 
the space of undirected\nlines inRd, which we represent as the quotient RPd−1=Sd−1/∼of the unit sphere under the\nequivalence relation which identifies wand−w. Without loss of generality, we assume that x,w\nare normalized with respect to the Euclidean norm on Rd.\nLemma 4.5. SetBhom(Rd) ={f∈ B(P) :f(rx) =rf(x)∀r>0}. ThenBhom(P)is isomorphic\nto the product space M(RPd−1)×RdwhereM(RPd−1)denotes the space of Radon measures on\nd−1dimensional real projective space.\nProof.Dimension reduction. Letf=fµ∈ B(Rd) be a positively one-homogeneous function.\nThen for any λ>0, the identity\nf(x) =f(λx)\nλ\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 21\n=/integraldisplay\nSdσ/parenleftbig\nwT(λx)+b/parenrightbig\nλµ(dw⊗db)\n=/integraldisplay\nSdσ/parenleftbigg\nwTx+b\nλ/parenrightbigg\nµ(dw⊗db)\nholds. We can pass to the limit λ→ ∞by the dominated convergence theorem and obtain\nf(x) =/integraldisplay\nSdσ(wTx)µ(dw⊗db)\n=/integraldisplay\nSdσ/parenleftbiggw\n|w|T\nx/parenrightbigg\n|w|µ(dw⊗db)\n=/integraldisplay\nSd−1σ(wTx) ˆµ(dw)\nwhere ˆµis the push-forward of |w|·µalong the map ( w,b)/ma√sto→w. Clearly /ba∇⌈blˆµ/ba∇⌈bl ≤ /ba∇⌈blµ/ba∇⌈bl, so without\nloss of generality we may assume that f∈ Bhomis represented by a measure µonSd−1.\nOdd-even decomposition. Ifµis a signed Radon measure on Sd, we decompose µ=\nµeven+µoddwhere\nµeven=µ+T♯µ\n2, µodd=µ−T♯µ\n2, T:Sd−1→Sd−1, T(x) =−x.\nNote that\n/ba∇⌈blµeven/odd/ba∇⌈bl ≤/ba∇⌈blµ/ba∇⌈bl+/ba∇⌈blT♯µ/ba∇⌈bl\n2=/ba∇⌈blµ/ba∇⌈bl,/ba∇⌈blµ/ba∇⌈bl=/vextenddouble/vextenddoubleµeven+µodd/vextenddouble/vextenddouble≤ /ba∇⌈blµeven/ba∇⌈bl+/ba∇⌈blµodd/ba∇⌈bl.\nIn particular,\n/ba∇⌈blf/ba∇⌈blBhom= inf\n{µ:fµ=f}/ba∇⌈blµodd/ba∇⌈blM(Sd−1)+/ba∇⌈blµeven/ba∇⌈blM(Sd−1)\nis equivalent to the norm on Bhominduced by the norm of B(Rd). We further find that\nfµ(x) =/integraldisplay\nSdσ(wTx)µ(dw)\n=/integraldisplay\nSdσ(wTx)+σ(−wTx)\n2+σ(wTx)−σ(−wTx)\n2µ(dw)\n=1\n2/integraldisplay\nSd|wTx|(µeven+µodd)(dw)+1\n2/parenleftbigg/integraldisplay\nSdwT(µeven+µodd)(dw)/parenrightbiggT\nx\n=1\n2/integraldisplay\nSd|wTx|µeven(dw)+1\n2/parenleftbigg/integraldisplay\nSdwTµodd(dw)/parenrightbiggT\nx\nsince the other integrals drop out by symmetry. In particular, fµnaturally decomposes into an\neven partfeven\nµ=fµeven, and a linear (in particular odd) part flin. Clearly\nfµ≡fµ′⇔feven\nµ=feven\nµ′, flin\nµ=flin\nµ′.\nLinear part. The linear function f(x) =αTxis/ba∇⌈blα/ba∇⌈bl-Lipschitz, so /ba∇⌈blf/ba∇⌈blB≥ /ba∇⌈blα/ba∇⌈bl. On the other\nhand,\nf=fµwhereµ=/ba∇⌈blα/ba∇⌈bl/bracketleftbig\nδα//bardblα/bardbl−δ−α//bardblα/bardbl/bracketrightbig\n⇒ /ba∇⌈blf/ba∇⌈bl ≤2/ba∇⌈blα/ba∇⌈bl.\nTaking the infimum over all odd measures shows that\n/ba∇⌈blα/ba∇⌈bl ≤ /ba∇⌈blµodd/ba∇⌈bl ≤2/ba∇⌈blα/ba∇⌈bl.\n22 WEINAN E AND STEPHAN WOJTOWYTSCH\nEven part. We can interpret µevenas a signed Radon measure on RPd−1. To conclude the\nproof, it suffices to show that the map µeven/ma√sto→fµevenis injective. 
Assume that fµeven= 0, i.e./integraldisplay\nSd−1|wTx|µeven(dw) = 0 ∀x∈Sd−1.\nWe first consider the case d= 2 and identify Sd−1= (0,2π) via the usual map φ/ma√sto→(cosφ,sinφ).\nWrite\nx= (cosθ,sinθ),/a\\}b∇ack⌉tl⌉{tx,w/a\\}b∇ack⌉t∇i}ht= cosθcosφ+sinθsinφ= cos(φ−θ).\nClaim:The space generated by the family {cos(· −θ)}θ∈[0,2π)isC0-dense in the space of\ncontinuous π-periodic functions on R.Proof of claim: Note that f(φ) =|cosφ|satisfies\nf′′+f=−/summationtext\nk∈Zδkπ, so ifgis aπ-periodicC2-function, then for any θ∈[0,2π)\nlim\nh→0/integraldisplay2π\n0/bracketleftbigg|cos|(φ+h−θ)−2|cos|(φ−θ)+|cos|(φ−h−θ)\nh2+|cos|(φ−θ)/bracketrightbigg\ng(φ)dφ=−2g(θ).\nFor anyε >0, we can choose hsufficiently small and approximate the integral by a Riemann\nsum withNterms in such a way that\nsup\nθ/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleN/summationdisplay\ni=1g(φi)\n2/bracketleftbigg|cos|(φi+h−θ)−2|cos|(φi−θ)+|cos|(φi−h−θ)\nh2+|cos|(φi−θ)/bracketrightbigg\n−g(θ)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle<ε.\nUp to rearranging the sum and passing back to the original coordina tes, this proves the claim\nsinceC2is dense in C0./square\nNowletd≥2. By the two-dimensionalresult, everyfunction ofthe form f(x) =g(|/a\\}b∇ack⌉tl⌉{tx,v/a\\}b∇ack⌉t∇i}ht|) can\nbe approximated arbitrarily well in any fixed plane spanned by vand any ˜v⊥vin a way which is\nconstant in directions orthogonalto the plane. By averagingthe a pproximatingfunctions f˜vover\nthe choice of plane, we find that fcan be approximated arbitrarily well by functions of the form\n|/a\\}b∇ack⌉tl⌉{tw,x/a\\}b∇ack⌉t∇i}ht|on the whole sphere. Cancellations do not occur since the function ( asymptotically) only\ndepends on one the direction which all planes share. Again, we replac e the averaging integral by\na Riemann sum.\nBy a weaker version of the Universal Approximation Theorem [Cyb89 ], sums of functions\ndepending only on a single direction (ridge functions) are dense in C0(Sd−1). /square\nRemark 4.6.We observe that the kernel of the map\nM(Sd−1)→C0,1(Sd−1), µ/ma√sto→fµ\nis the subspace\nN=/braceleftbigg\nµ∈ M(Sd−1)/vextendsingle/vextendsingle/vextendsingle/vextendsingle(−id)♯µ=−µ,/integraldisplay\nSd−1wµ(dw) = 0/bracerightbigg\n.\nRemark 4.7.Since the space of Radon measures is not separable or reflexive for d≥2, neither is\nBhom. The space Bhomis a closed subspace of B(Rd), soB(Rd) is neither separable nor reflexive.\nExample 4.8.Some functions in Bhomare\n(1)σ(wTx) for anyw∈Rd.\n(2) the Euclidean norm f(x) =/ba∇⌈blx/ba∇⌈blℓ2. 
Up to constant, we can write fas an average over\nσ(wTx) over the uniform distribution π0on the unit sphere\nf(x) =cd/integraldisplay\nSd−1σ(wTx)π0(dw), cd=/bracketleftbigg/integraldisplay\nSd−1σ(w1)π0(dw)/bracketrightbigg−1\n∼2√\nπd.\nsince/integraldisplay\nSd−1σ(w1)π0(dw) =1\n|Sd−1|/integraldisplay1\n0w1|Sd−2|/parenleftbig\n1−w2\n1/parenrightbigd−2\n2dw1\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 23\n=|Sd−2|\n|Sd−1|/integraldisplay1\n0−1\ndd\ndw(1−w2)d\n2dw\n=|Sd−2|\nd|Sd−1|.\nThis can be computed explicitly as\n|Sd−2|\nd|Sd−1|=2πd−1\n2\nΓ(d−1\n2)\nd2πd\n2\nΓ(d\n2)=π−1/2Γ(d/2)\ndΓ/parenleftbig\n(d−1)/2/parenrightbig=d−1\n2dπ−1/2Γ(d/2)\nd−1\n2Γ/parenleftbig\n(d−1)/2/parenrightbig\n=d−1\n2dπ−1/2Γ(d/2)\nΓ/parenleftbig\n(d+1)/2/parenrightbig∼1\n2√π/radicalig\n2π\nd/2/parenleftbigd\n2e/parenrightbigd\n2\n/radicalig\n2π\n(d+1)/2/parenleftbigd+1\n2e/parenrightbigd+1\n2∼1\n2√π/radicalbigg\n2e\nd+1∼/radicalbigge\n2πd.\nby Stirling’s formula.\n(3) In the first example, fwas non-differentiable along a hyperplane, while in the sec-\nond example, fwas smooth except at the origin (where any non-linear positively one -\nhomogeneous function is non-differentiable). We can use the same a rgument as in the\nsecond example to express f(x) =/radicalbig\nx2\n1+···+x2\nkfor anyk≤d, which is singular along\na singled−k-dimensional subspace.\n(4) Countable sums of these examples (or rotation thereof) with ℓ1-weights lie in Bhom.\n5.Structure of Barron functions\n5.1.Limits at infinity. Barron functions grow at most linearly at infinity. We show that they\nare well behaved in a more precise sense.\nTheorem 5.1. For anyf∈ B(Rd), the function\nf∞:Sd−1→R, f ∞(x) = lim\nr→∞f(rx)\nr\nis well-defined and a Barron function on Sd.\nProof.Writef=fµfor a suitable signed Radon measure µonSd. In this argument, we write\n(w,b) instead of wand (x,1) instead of xlike above since the last entry does not scale. Note\nthat\nf(rx)\nr=1\nr/integraldisplay\nSdσ/parenleftbig\nwT(rx)+b/parenrightbig\nµ(dw)\n=/integraldisplay\nSdσ/parenleftbigg\nwTx+b\nr/parenrightbigg\nµ(dw)\n→/integraldisplay\nSdσ(wTx)µ(dw)\nasr→ ∞by the dominated convergence theorem. The result is immediate. /square\nThe function f∞captures the linearly growing component of fat infinity. Note that Barron\nfunctions like f(x) =σ(x1+1)−σ(x1) may also be bounded. We prove that there is nothing\n‘in between’ the bounded and the linearly growing regime.\nTheorem 5.2. Letf∈ B(Rd)such thatf∞≡0. Thenfis bounded.\n24 WEINAN E AND STEPHAN WOJTOWYTSCH\nProof.Reduction to one dimension. Assume that f∞≡0. Then for every ν∈Sd−1, the\none-dimensional Barron function g(r) =f(rν) satisfies lim r→∞r−1g(r) = 0. We observe that\ng(r) =/integraldisplay\nSdσ/parenleftbig\nrwTν+b/parenrightbig\nµ(dw⊗db)\nis a linear combination of terms of the form σ(r(wTν)+b) whose one-dimensional Barron norm\nis bounded by |wTν|+|b| ≤ |w|+|b|= 1 if we normalize Rdwith respect to the ℓ1-norm (and\nsimilarly for other norms). By taking the linear combination, we obtain\n/ba∇⌈blg/ba∇⌈blB(R)≤ /ba∇⌈blf/ba∇⌈blB(Rd).\nIf a different norm is chosen on Rd, a dimension-dependent factor occurs.\nOne-dimensional case. Recall by Remark 4.2 that ghas a second derivative in the space\nof Radon measures and that\n/ba∇⌈blg/ba∇⌈bl=|g(0)|+|g′(0)|+/integraldisplay\nR/parenleftbig\n1+|ξ|/parenrightbig\n|g′′|(dξ)\nis an equivalent norm on Barron space in one dimension. 
We note that\nh(x) =h(0)+h′(0)x+/integraldisplayx\n0h′(ξ)dξ\n=h(0)+h′(0)x+xh′(x)−/integraldisplayx\n0ξh′′(ξ)dξ\n=/bracketleftbig\nh′(x)+h′(0)/bracketrightbig\nx+/bracketleftbigg\nh(0)−/integraldisplayx\n0ξh′′(ξ)dξ/bracketrightbigg\nfor smooth functions f:R→R. By approximation, the identity carries over to Barron space, if\nbothf′(x) andthe integral/integraltextx\n0|g′′|(dξ) areinterpretedeitherasleft orrightcontinuousfunctions.\nIn particular, for Barron functions we find that\n(5.1)/vextendsingle/vextendsingleh(x)−/bracketleftbig\nh′(x)+h′(0)/bracketrightbig\nx/vextendsingle/vextendsingle≤ /ba∇⌈blh/ba∇⌈blB.\nAdditionally, we observe that\n/vextendsingle/vextendsingleh′(x)−h′(y)/vextendsingle/vextendsingle=/vextendsingle/vextendsingle/vextendsingle/vextendsingle/integraldisplayy\nxh′′(s)ds/vextendsingle/vextendsingle/vextendsingle/vextendsingle\n≤/integraldisplayy\nxs|h′′(s)|1\nsds\n≤/ba∇⌈blh/ba∇⌈blB\nmin{x,y}\nforx,y>1. We distinguish two cases:\n(1) There exists x>1 such that\n/vextendsingle/vextendsingleg′(x)+g′(0)/vextendsingle/vextendsingle>/ba∇⌈blg/ba∇⌈blB\nx.\nThen\n/vextendsingle/vextendsingleg′(0)+g′(y)/vextendsingle/vextendsingle≥/vextendsingle/vextendsingleg′(0)+g′(x)/vextendsingle/vextendsingle−/vextendsingle/vextendsingleg′(x)−g′(y)/vextendsingle/vextendsingle≥/vextendsingle/vextendsingleg′(x)+g′(0)/vextendsingle/vextendsingle−/ba∇⌈blg/ba∇⌈blB\nx=:ε>0\nfor ally≥x. In particular,\nliminf\nx→∞/vextendsingle/vextendsingle/vextendsingle/vextendsingleg(x)\nx/vextendsingle/vextendsingle/vextendsingle/vextendsingle≥ε,\ncontradicting the assumption that ggrows sublinearly.\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 25\n(2) For all x>1, the estimate\n/vextendsingle/vextendsingleg′(x)+g′(0)/vextendsingle/vextendsingle≤/ba∇⌈blg/ba∇⌈bl\nx\nholds. Then also\n|g(x)| ≤/vextendsingle/vextendsingleg′(x)−g′(0)/vextendsingle/vextendsingle|x|+|g(0)|+/integraldisplay∞\n0|ξ||g′′|(dξ)≤2/ba∇⌈blg/ba∇⌈bl\nholds, i.e.gis bounded.\n/square\nCorollary 5.3. Letf∈ B(Rd). Thenfis a sum of a bounded and a positively one-homogeneous\nfunction:\nf=f∞+/bracketleftbig\nf−f∞/bracketrightbig\n/ba∇⌈blf−f∞/ba∇⌈blL∞(Rd)≤2/ba∇⌈blf/ba∇⌈blB.\n5.2.Barron functions which decay at infinity. It is well-known that there are no finite\ntwo-layer neural networks with compact support in Rdfor anyd≥2 – see e.g. [Lu21] for an\nelementary proof of a (much) stronger result. The same proof ap plies e.g. to finite sums of\ntrigonometric functions sin( ξ·x) and cos(ξ·x), which cannot be compactly supported in Rdeven\nford= 1. Fourier transforms on the other hand, the natural analogue to Barron functions, can\nrepresent any function in Schwartz space, in particular any compa ctly supported and infinitely\nsmooth function. It is therefore not obvious whether compactly s upported Barron functions\nexist. We answer a weaker question as follows.\nLemma 5.4. 
For anyd≥1, the function f:Rd→R,f(x) =/parenleftbig\n|x|2+1/parenrightbig−1/2is in Barron space\nand/ba∇⌈blf/ba∇⌈blB(Rd)≤C√\ndfor a constant C >0which does not depend on d.\nProof.We note that the function h:R→R,h(z) = exp(−z2/2) is inB(R) since\n|h(0)|+/integraldisplay∞\n−∞|z||h′′(z)|dz <∞.\nFor every fixed ν∈Rd, the function\nfν:Rd→R, fν(x) =h(ν·z)\nsatisfies/ba∇⌈blfν/ba∇⌈blB≤ |ν|/ba∇⌈blh/ba∇⌈blB, so the Gaussian average f(x) of the values fν(x) defines a function f\nsuch that\nf(x) =1\n(2π)d/2/integraldisplay\nRdh(z·ν1e1) exp/parenleftbigg\n−|ν|2\n2/parenrightbigg\ndν.\nBy H¨ older’s inequality for the Gaussian measure, the Barron bound\n/ba∇⌈blf/ba∇⌈blB≤1\n(2π)d/2/integraldisplay\nRd/ba∇⌈blh/ba∇⌈blB|ν|exp/parenleftbigg\n−|ν|2\n2/parenrightbigg\ndν≤/parenleftbigg1\n(2π)d/2/integraldisplay\nRd|ν|2exp/parenleftbigg\n−|ν|2\n2/parenrightbigg\ndν/parenrightbigg1\n2\n/ba∇⌈blh/ba∇⌈blB.\nholds. Since\n1√\n2π/integraldisplay\nRz2exp/parenleftbigg\n−z2\n2/parenrightbigg\ndz=1√\n2π/integraldisplay\nRexp/parenleftbigg\n−z2\n2/parenrightbigg\ndz= 1,\nthe bound /ba∇⌈blf/ba∇⌈blB(Rd)≤√\nd/ba∇⌈blh/ba∇⌈blB(R)follows by Fubini’s theorem. We can explicitly compute fas\nf(x) =f(|x|e1)\n=1\n(2π)1/2/integraldisplay\nRh(z·ν1e1) exp/parenleftbigg\n−ν2\n1\n2/parenrightbigg\ndν1\n=1\n(2π)1/2/integraldisplay\nRexp/parenleftigg\n−/parenleftbig\n|x|2+1/parenrightbig\nν2\n1\n2/parenrightigg\ndν1\n26 WEINAN E AND STEPHAN WOJTOWYTSCH\n=1/radicalbig\n|x|2+11\n(2π)1/2/integraldisplay\nRexp/parenleftigg\n−/parenleftbig/radicalbig\n|x|2+1ν1/parenrightbig2\n2/parenrightigg\n/radicalbig\n|x|2+1dν1\n=1/radicalbig\n|x|2+11\n(2π)1/2/integraldisplay\nRexp/parenleftbigg\n−z2\n2/parenrightbigg\ndz\n=1/radicalbig\n|x|2+1.\n/square\nIf we consider\nf(x) =1/radicalbig\n|x−z1|2+1−1/radicalbig\n|x−z2|2+1,\nthenfis a Barronfunction which decays like O(|x|−2) at infinity. The result admits a refinement\nwhich was pointed out to us by Jonathan Siegel.\nLemma 5.5. For anyd≥1, andk≥0there exists a non-zero Barron function f:Rd→Rand\na constantCk>0such that\nsup\n|x|≥r|f(x)| ≤C|f(0)|\nr2k+1∀r≥2−1/2.\nProof.Step 1. Assume for the moment that a Barron function h:R→Rwith the following\nproperties exists:\n(1)his supported in [ −1,1],\n(2)h(0)/\\⌉}atio\\slash= 0, and\n(3)/integraltext1\n−1h(y)y2jdy= 0 forj= 0,...,k−1.\nAs before, we define\nf(x) =1\n(2π)d/2/integraldisplay\nRdh(x·ν) exp/parenleftbigg\n−|ν|2\n2/parenrightbigg\ndν\nand find that\nf(re1) =1√\n2π/integraldisplay∞\n−∞h(rν1) exp/parenleftbigg\n−|ν|2\n2/parenrightbigg\ndν\n=1√\n2πr/integraldisplay∞\n−∞h(y) exp/parenleftbigg\n−y2\n2r2/parenrightbigg\ndy\n=1√\n2πr∞/summationdisplay\nn=0(−1)n\nn!(2r2)n/integraldisplay1\n−1h(y)y2ndy\n=1√\n2πr∞/summationdisplay\nn=k(−1)n\nn!(2r2)n/integraldisplay1\n−1h(y)y2ndy\n=1√\n2π22kr2k+1∞/summationdisplay\nn=k(−1)n\nn!(2r2)n−k/integraldisplay1\n0h(y)y2ky2(n−k)dy\n≤1√\n2π22kr2k+1∞/summationdisplay\nn=k1\nn!(2r2)n−k/ba∇⌈blh/ba∇⌈blL∞\n≤/ba∇⌈blh/ba∇⌈blL∞√\n2π22kr2k+1/parenleftigg∞/summationdisplay\nn=k1\nn!/parenrightigg\n.\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 27\nfor allr≥2−1/2. Thus the Lemma is proved, assuming that a suitable Barron functio n in one\ndimension can be found.\nStep 2. In this step, we show that a suitable hexists. The linear map\nAk:B(R)→Rk+2, Ak(h) =\nh(1)\nh(−1)/integraltext1\n−1x2h(x)dx\n.../integraltext1\n−1h(x)x2(k−1)dx\n\nhas a non-trivial kernel Vk⊆ B(R). Sinceh(±1) = 0, we can modify hby adding multiples\nofσ/parenleftbig\n±(x−1)/parenrightbig\nto ensure that his supported in [ −1,1]. 
Anyh∈Vkinducesfsuch that\nlimr→∞r2k+1f(re1) = 0. The only question is whether there exists hsuch thatf/\\⌉}atio\\slash≡0. This is\ntrue for example if f(0) =h(0)/\\⌉}atio\\slash= 0.\nAssume for the sake of contradiction that Ak(h) = 0 implies that h(0) = 0. This is the case\nif and only if there exist coefficients a0,...,ak−1,b1,b−1such that\nh(0) =k−1/summationdisplay\nj=0aj/integraldisplay1\n−1y2jh(y)dy+b1h(1)+b−1h(−1)\nfor allh∈ B(R). We can show that this is not the case by considering\nhε(y) = max/braceleftbigg\n1−|x|\nε,0/bracerightbigg\n=1\nεσ(x+ε)−2\nεσ(x)+1\nεσ(x−ε).\n/square\nWhile the Barron functions we construct may not be compactly supp orted, they exhibit prop-\nerties which are very different from those of finite two-layer netwo rks. In particular, we note the\nfollowing.\nCorollary 5.6. For anyd≥2, there exists f∈ B(Rd)such thatf/\\⌉}atio\\slash≡0andf∈L1(Rd)∩L∞(Rd).\nThese functions can be approximated efficiently by finite two-layer n etworks in L2(P) if the\ndata distribution Phas finite second moments by Theorem 3.8. Whether there exist Bar ron\nfunctions which are compactly supported, or non-negative Barro n functions which decay faster\nthan|x|−1at infinity, remains an open problem.\n5.3.Singularset. WeshowthatthesingularsetofaBarronfunction(thesetwheret hefunction\nis not differentiable) is fairly small and easy to understand. Again, we write (w,b) explicitly\ninstead ofwand (x,1) instead of xto understand the finer properties of Barron functions.\nLemma 5.7. Letfbe a Barron function on a domain Ω⊆Rd. Then for every x∈Ωand every\nv∈Rd, the one-sided derivatives ∂±\nvf(x)exists and\n∂+\nvf(x) := lim\nhց0f(x+hv)−f(x)\nh\n=/integraldisplay\nA+\nx/a\\}b∇ack⌉tl⌉{tw,v/a\\}b∇ack⌉t∇i}htµ(dw⊗db)+/integraldisplay\nA0\nxσ(/a\\}b∇ack⌉tl⌉{tw,v/a\\}b∇ack⌉t∇i}ht)µ(dw⊗db)\nwheref=fµand\nA+\nx:={(w,b)|/a\\}b∇ack⌉tl⌉{tw,x/a\\}b∇ack⌉t∇i}ht+b>0}, A0\nx:={(w,b)|/a\\}b∇ack⌉tl⌉{tw,x/a\\}b∇ack⌉t∇i}ht+b= 0}.\n28 WEINAN E AND STEPHAN WOJTOWYTSCH\nThe jump of the derivatives is\n[∂vf]x:=∂+\nvf(x)−∂−\nvf(x) =/integraldisplay\nA0x|/a\\}b∇ack⌉tl⌉{tw,v/a\\}b∇ack⌉t∇i}ht|µ(dw⊗db).\nProof.We observe that\nlim\nt→0+σ(a+tb)−σ(a)\nt=\n\nσ(b)a= 0\nb a> 0\n0a<0,lim\nt→0−σ(a+tb)−σ(a)\nt=\n\n−σ(−b)a= 0\nb a> 0\n0a<0.\nNote that\n∂+\nvf(x) := lim\nhց0f(x+hv)−f(x)\nh\n= lim\nhց0/integraldisplayσ(/a\\}b∇ack⌉tl⌉{tx,w/a\\}b∇ack⌉t∇i}ht+b+h/a\\}b∇ack⌉tl⌉{tv,w/a\\}b∇ack⌉t∇i}ht)−σ(/a\\}b∇ack⌉tl⌉{tx,w/a\\}b∇ack⌉t∇i}ht+b)\nhµ(dw⊗db).\nSince both µ+,µ−are finite measures, we may use the dominated convergence theor em with\nmajorizing function |v|to take the limit inside. This proves the first part of the theorem. Th e\nsecond part follows immediately noting that\n[∂vf]x=∂+\nvf(x)−∂−\nvf(x)\n=/integraldisplay\n{(w,b)| /an}bracketle{tw,x/an}bracketri}ht+b>0}/a\\}b∇ack⌉tl⌉{tw,v/a\\}b∇ack⌉t∇i}htµ(dw⊗db)+/integraldisplay\n{(w,b)| /an}bracketle{tw,x/an}bracketri}ht+b=0}σ(/a\\}b∇ack⌉tl⌉{tw,v/a\\}b∇ack⌉t∇i}ht)µ(dw⊗db)\n−/integraldisplay\n{(w,b)| /an}bracketle{tw,x/an}bracketri}ht+b>0}/a\\}b∇ack⌉tl⌉{tw,v/a\\}b∇ack⌉t∇i}htµ(dw⊗db)−/integraldisplay\n{(w,b)| /an}bracketle{tw,x/an}bracketri}ht+b=0}−σ(−/a\\}b∇ack⌉tl⌉{tw,v/a\\}b∇ack⌉t∇i}ht)µ(dw⊗db)\n=/integraldisplay\nA0xσ(/a\\}b∇ack⌉tl⌉{tv,w/a\\}b∇ack⌉t∇i}ht) +σ(−/a\\}b∇ack⌉tl⌉{tv,w/a\\}b∇ack⌉t∇i}ht)µ(dw⊗db).\n/square\nCorollary 5.8. Letµbe a finite signed measure on Sdsuch thatµ(Sd∩H) = 0for every\nhyperplane HinRd+1. ThenfµisC1-smooth on Rd.\nProof.The function\n(x,v)/ma√sto→(∂vf)(x) =/integraldisplay\nSdσ(wTx)µ(dw)\nis continuous by the dominated convergence theorem. 
/square\nPhilosophically, it makes sense that only the singularity in σcontributes to the singularity of\nfµ, and not the segments where σis linear. A single neuronactivation σ(wTx+b) is differentiable\nexceptalongthehyperplane {x:wTx+b= 0}. Similarly, finitetwo-layernetworkshaveasingular\npart which is contained in a union of hyperplanes. We will show that a sim ilar result holds for\ngeneral Barron functions.\nTheorem 5.9. Letµbe a Radon measure on Sd. We can decompose µ=/summationtext∞\ni=0µiandfµ=/summationtext∞\ni=0fµiin such a way that\n•fµ0isC1-smooth,\n•fori≥1,µiis supported on the intersection of Sdand aki-dimensional affine subspace\nwi+ViofRd+1for some 0≤ki≤d−1,\n•fµiis smooth on Vi∩Rdexcept at a single point xiand constant in directions w∈V⊥\ni∩Rd,\nso\n•the singular set Σioffµiis thed−ki-dimensional affine subspace xi+V⊥\niofRd.\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 29\nProof.Step 0. For Barron functions of one real variable, Sd=S1is the circle. Since a finite\nmeasure has only countably many atoms (which can be represented as intersections of S1with\na one-dimensional affine subspace of R2), the representation holds with µi≪δwifor the atoms\nwi∈S1ofµandµ0=µ−/summationtext\niµi.\nWe proceed by induction. Assume that the Theorem is proved for k≤d−1.\nStep 1. First, we decompose the measure µinto lower-dimensional strata. Since the total\nvariation measure |µ|is finite, there are only finitely many atoms of a certain size ε>0 of|µ|,\ni.e. only finitely many points x1,...,xN∈Sdsuch that |µ|({xi})≥ε. As a consequence, the set\nA={x1,x2,...}of atoms of |µ|is at most countable. We define\nµ0,i:=µ|{xi},˜µ1=µ−∞/summationdisplay\ni=1µ0,i.\nIn particular, ˜ µ1does not have any atoms and\n/ba∇⌈blµ/ba∇⌈bl=/ba∇⌈bl˜µ1/ba∇⌈bl+∞/summationdisplay\ni=1/ba∇⌈blµ0,i/ba∇⌈bl\nsince the measures are mutually singular. Now we claim that there exis t at most countably\nmany circles s1\n1,s1\n2,...inSdsuch that |µ|(s1\ni)>0, where a circle is the intersection of Sdwith a\ntwo-dimensional affine space. If there were uncountably many circ les of positive measure, there\nwould beε>0 such that uncountably many circles have measure ≥ε, just like for atoms. Since\ncircles are either disjoint or intersect in one or two points, they inte rsect in|˜µ1|-null sets. So if\nthere were infinitely many circles s1,s2,...such that |˜µ1|(si)≥εfor alli∈N, then\n/ba∇⌈blµ/ba∇⌈bl ≥ /ba∇⌈bl˜µ1/ba∇⌈bl ≥∞/summationdisplay\ni=1|˜µ1|(si) =∞,\nleading to a contradiction. We now define\nµ1,i=µ|si,˜µ2= ˜µ1−∞/summationdisplay\ni=1µ1,i.\nWe iterate this procedure, using that spheres of dimension kintersect in spheres of dimension\n≤k−1 to obtain a decomposition\nµ= ˜µd+d−1/summationdisplay\nk=0∞/summationdisplay\ni=1µk,i\nwhere the inner sum may be finite or countable and for all i. If it is finite, we set µk,ito be the\nzero measure on a subspace of the correct dimension and ignore th e distinction notationwise.\nStep 2. Fix indices k,iand the affine space Wk,iof dimension ksuch that spt( µk,i) =\nSd∩Wk,i. By construction, the function\nfµk,i(x) =/integraldisplay\nWk\ni∩Sdσ(wTx+b)µ(dw⊗db)\nis constant in directions orthogonal to Wk\ni, i.e.fµk\ni(x+v) =fµk\ni(x) ifvis orthogonal to the\nprojection/hatwiderWk\niofWk\nionto the {b= 0}-plane./hatwiderWk\nihas dimension k, unless the ‘bias direction’\n(0,...,0,1) is inWk\ni, in which case it has dimension k−1.\nIn either case, fµk,iis a Barron function of ≤d−1 variables. 
By the induction hypothesis,\nwe can write\nfµk,i=∞/summationdisplay\nj=0fµk\ni,j\n30 WEINAN E AND STEPHAN WOJTOWYTSCH\nwhere the singularset of fµk\ni,jis an affine subspace of Wk\niof dimension ≤d−1. Thus the theorem\nis proved. /square\nRemark 5.10.The singular set Σ fis contained in the union/uniontext\niΣfµiwhich may be empty or not.\nIn particular, the Hausdorff dimension of the singular set of fis an integer k∈ {0,...,d−1}.\nRemark 5.11.We need to consider the decomposition of the singular set since ther e may be\ncancellations between the singularities of different dimensionality. Fo r example, the singular set\nof the Barron function f(x) =|x1| −/radicalbig\nx2\n1+x2\n2is Σ ={x1= 0} \\ {(0,0)}and not a union of\naffine spaces.\nRemark 5.12.The singular set of a single neuron activation has dimension d−1. In Example\n4.8 we present examples of Barron functions whose singular set is a lin ear space of strictly lower\ndimension.\nRemark 5.13.Σ may be dense in Ω. For example, the primitive function of any bounde d mono-\ntone increasing function on [0 ,1] with a dense set of jump discontinuities is in B[0,1] and has a\ndense singular set.\nRemark5.14.Barronfunctions cannothavecurvedsingularsets ofco-dimensio n1. In particular,\nfunctions like\nf1(x) = dist(x,Sd−1), f 2(x) = dist(x,B1(0))\nare not Barron functions, where the distance function, sphere a nd unit ball are all with respect\nto the Euclidean norm.\nRemark 5.15.Ford≥3, the function f(x) = max {x1,...,xd}is not a Barron function over\n[−1,1]d. Namely, the singular set\nΣ =/uniondisplay\ni/ne}ationslash=j{x:xi=xj=f(x)}\nis incompatible with the linear space structure if there exists a third d imension. Note, however,\nthatfcan be represented by a network with ⌈log2(d)⌉hidden layers since\nmax{x1,x2}=x1+σ(x2−x1),max{x1,x2,x3,x4}= max/braceleftbig\nmax{x1,x2},max{x3,x4}/bracerightbig\nand so on.\nRemark 5.16.Note thatσ(x1) andσ(x2) are Barron functions, but the singular set of their\nproduct is the corner\nΣ ={x1= 0,x2≥0}∪{x1≥0,x2= 0}.\nThusσ(x1)σ(x2) is not a Barron function on [ −1,1]2. In particular, Barron space in dimension\nd≥2 is generally not an algebra.\nBarron-type spaces for deep neural networks will be developed in detail in a forth-coming\narticle [EW20]. We briefly discuss three-layer Barron networks using examples of functions\nwhich need to be in any reasonable space of infinitely wide three-layer network.\nRemark5.17.Three-layernetworkshavemuchmoreflexiblesingularsets. For λ>0,thefunction\nf(x) = min{σ(x1), σ(x2−λx1)}\nhas a singular set given by the union of three half-lines\nΣ ={0<x1=x2−λx1}∪{x1= 0,x2−λx1>0}∪{x2−λx1= 0,x1≥0}\nsince the function is zero everywhere outside of the quadrant {x1,x2>0}. Sincex/ma√sto→x2is a\nBarron function on bounded intervals,\nf(x,y) =σ(y−x2)\nBARRON FUNCTIONS: REPRESENTATION AND POINTWISE PROPERTIE S 31\nis a three-layer network on any compact subset of R2with one infinite and one finite layer –\nsee [EW20] for the precise concept of infinitely wide three-layer net works and [EW20, Lemma\n3.12] for a statement on the composition of Barron functions. Her e the singular set is the curve\n{y=x2}. Stranger examples like\nf(x,y) =σ(y−x2)+σ(x−y2)\nare also possible. 
Generally, the singular sets of three-layer netwo rks are at least as flexible as\nthe level sets of two-layer networks since for a Barron function fand a real value y, the function\nx/ma√sto→/vextendsingle/vextendsinglef(x)−y/vextendsingle/vextendsingle\nis a three-layer network whose singular set is the union of the singula r set offand the level set\n{f=y}.\n5.4.Applications of the structure theorem. The structure theorem for non-differentiable\nBarron functions allows us to characterize the class of morphisms w hich preserve Barron space.\nTheorem 5.18. Letψ:Rd→Rdbe aC1-diffeomorphism and\nAψ:C0,1(Rd)→C0,1(Rd), Aψ(f) =f◦ψ.\nThenAψ(B)⊆ Bif and only if ψis affine. In particular, Aψ(B) =B.\nProof.IfAψ(B)⊆ B, thenf(x) :=σ/parenleftbig\nwTψ(x) +b/parenrightbig\nis a Barron function for any w∈Rdand\nb∈ B. Asψis a diffeomorphism, the singular set of fcoincides with the level set\nA(w,b):=/braceleftbig\nx:wTψ(x)+b= 0/bracerightbig\n.\nWe conclude that for any w,b∈Rd, the level set A(w,b)is a hyperplane in Rd. In particular, for\n1≤i≤d, we find that the level sets of ψi(x) =ei·ψ(x) are parallel hyperplanes. This means\nthat there exist vectors vifor 1≤i≤dand functions φd:R→Rsuch thatψi(x) =φi/parenleftbig\nvT\nix/parenrightbig\n.\nWe note that also ( e1+e2)·ψ(x) =φ1(vT\n1x)+φ2(vT\n2x) has level sets which are hyperplanes,\ni.e.\nφ1(vT\n1x)+φ2(vT\n2x) =˜φ(˜vTx).\nBy regularity, all functions are C1-smooth, and since ψis a diffeomorphism, they are strictly\nmonotone. The diffeomorphism property also implies that v1,v2are linearly independent. In\nview of these properties, ˜ vcannot be a multiple of v1orv2. We compute\n˜φ′(˜vTx)˜v=φ′\n1(vT\n1x)v1+φ′\n2(vT\n2x)v2.\nChoosewin the plane spanned by v1,v2such thatwis orthogonal to ˜ v. Then\n˜φ′(˜vTx)˜v=˜φ′(˜vT(x+λw))˜v=φ′\n1(vT\n1x+λvT\n1w)v1+φ′\n2(vT\n2x+λvT\n2w)v2.\nAssuming that φ1,φ2areC2-smooth, we differentiate the identity with respect to λwe obtain\nthat\n(5.2) 0 = ( vT\n1w)φ′′\n1(vT\n1x+λvT\n1w)v1+(vT\n2w)φ′\n2(vT\n2x+λvT\n2w)v2∀λ∈R.\nSincev1,v2are linearly independent, this can only be satisfied if φ′′\n1=φ′′\n2= 0, i.e. if and only if\nφ1andφ2are both linear. If φ1orφ2is notC2-smooth, we can mollify (5.2) as a function of λ\nbefore we differentiate. The constant vectors v1,v2are not affected by the mollification, so we\nconclude that any mollification of φ1andφ2must be linear. As before, we conclude that φ1and\nφ2are linear functions.\nSinceψ(x) =/parenleftbig\nφi(vT\nix)/parenrightbigd\ni=1and all coefficient functions φiare linear, the whole map ψis\nlinear. /square\n32 WEINAN E AND STEPHAN WOJTOWYTSCH\nThe structure theorem can further be used to show that the Bar ron property cannot be\n‘localized’inthe samewayasclassicalsmoothnesscriteria. Thisobse rvationwasmadepreviously\nin [W+20].\nExample 5.19.LetU⊆R2be a U-shaped domain, e.g.\nU=R2\\{x1= 0,x2≥0}.\nThen the function\nf:U→R, f(x) =/braceleftigg\nσ(x2)x1>0\n0x1≤0\nhas the following property: Everyx∈Uhas a neighbourhood Vsuch thatfis a Barron function\nonV.However,fis not a Barron function on Usince the singular set of fwould need to\ncontain the intersection of Uwith the line {x:x2= 0}. This example can be generalized to\nother domains where it may be less obvious.\nAcknowledgements\nThis work is supported in part by a gift to Princeton University from iF lytek. SW would like\nto thank Jonathan Siegel for helpful discussions.\nReferences\n[ABM14] H. Attouch, G. Buttazzo, and G. Michaille. Variational analysis in Sobolev and BV spaces: applica-\ntions to PDEs and optimization . 
SIAM, 2014.
[Amb08] L. Ambrosio. Transport equation and Cauchy problem for non-smooth vector fields. In Calculus of variations and nonlinear partial differential equations, pages 1–41. Springer, 2008.
[Bac17] F. Bach. Breaking the curse of dimensionality with convex neural networks. The Journal of Machine Learning Research, 18(1):629–681, 2017.
[Bar93] A. R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.
[Bre93] L. Breiman. Hinging hyperplanes for regression, classification, and function approximation. IEEE Transactions on Information Theory, 39(3):999–1013, 1993.
[Bre11] H. Brezis. Functional analysis, Sobolev spaces and partial differential equations. Universitext. Springer, New York, 2011.
[CB18] L. Chizat and F. Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. In Advances in Neural Information Processing Systems, pages 3036–3046, 2018.
[CPV20] A. Caragea, P. Petersen, and F. Voigtlaender. Neural network approximation and estimation of classifiers with classification boundary in a Barron class. arXiv:2011.09363 [math.FA], 2020.
[Cyb89] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
[Dob10] M. Dobrowolski. Angewandte Funktionalanalysis: Funktionalanalysis, Sobolev-Räume und elliptische Differentialgleichungen. Springer-Verlag, 2010.
[EG15] L. C. Evans and R. F. Gariepy. Measure theory and fine properties of functions. CRC Press, 2015.
[EMW18] W. E, C. Ma, and L. Wu. A priori estimates of the population risk for two-layer neural networks. Comm. Math. Sci., 17(5):1407–1425 (2019), arXiv:1810.06397 [cs.LG] (2018).
[EMW19a] W. E, C. Ma, and L. Wu. Barron spaces and the compositional function spaces for neural network models. arXiv:1906.08039 [cs.LG], 2019.
[EMW19b] W. E, C. Ma, and L. Wu. Machine learning from a continuous viewpoint. arXiv:1912.12777 [math.NA], 2019.
[EW20] W. E and S. Wojtowytsch. On the Banach spaces associated with multi-layer ReLU networks of infinite width. CSIAM Trans. Appl. Math., 1(3):387–440, 2020.
[EW21] W. E and S. Wojtowytsch. Kolmogorov width decay and poor approximators in machine learning: Shallow neural networks, random feature models and neural tangent kernels. Res. Math. Sci., 8(5), 2021.
[JKO98] R. Jordan, D. Kinderlehrer, and F. Otto. The variational formulation of the Fokker–Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1–17, 1998.
[KB16] J. M. Klusowski and A. R. Barron. Risk bounds for high-dimensional ridge function combinations including neural networks. arXiv preprint arXiv:1607.01434, 2016.
[KB18] J. M. Klusowski and A. R. Barron. Approximation by combinations of ReLU and squared ReLU ridge functions with ℓ1 and ℓ0 controls. IEEE Transactions on Information Theory, 64(12):7649–7656, 2018.
[Kle06] A. Klenke. Wahrscheinlichkeitstheorie, volume 1. Springer, 2006.
[LLPS93] M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993.
[Lu21] Z. Lu. A note on the representation power of GHHs. arXiv preprint arXiv:2101.11286, 2021.
[MMN18] S. Mei, A. Montanari, and P.-M. Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018.
[NP20] P.-M. Nguyen and H. T. Pham. A rigorous framework for the mean field limit of multilayer neural networks. arXiv:2001.11443 [cs.LG], 2020.
[RVE18] G. M. Rotskoff and E. Vanden-Eijnden. Neural networks as interacting particle systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation error. arXiv:1805.00915 [stat.ML], 2018.
[SS20] J. Sirignano and K. Spiliopoulos. Mean field analysis of neural networks: A law of large numbers. SIAM J. Appl. Math., 80(2):725–752, 2020.
[SX21] J. W. Siegel and J. Xu. Optimal approximation rates and metric entropy of ReLU^k and cosine networks. arXiv preprint arXiv:2101.12365, 2021.
[W+20] S. Wojtowytsch et al. Some observations on partial differential equations in Barron and multi-layer spaces. arXiv preprint arXiv:2012.01484, 2020.
[WE20] S. Wojtowytsch and W. E. Can shallow neural networks beat the curse of dimensionality? A mean field training perspective. IEEE Transactions on Artificial Intelligence, 1(2):121–129, Oct 2020.
[Woj20] S. Wojtowytsch. On the global convergence of gradient descent training for two-layer ReLU networks in the mean field regime. arXiv:2005.13530 [math.AP], 2020.
Weinan E, Department of Mathematics and Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, USA
Email address: [email protected]
Stephan Wojtowytsch, Princeton University, Program in Applied and Computational Mathematics, 205 Fine Hall - Washington Road, Princeton, NJ 08544
Email address: [email protected]",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "TEMEtUro6p",
"year": null,
"venue": "Softw. Impacts 2021",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=TEMEtUro6p",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Deep-LfD: Deep robot learning from demonstrations",
"authors": [
"Amir Ghalamzan E",
"Kiyanoush Nazari",
"Hamidreza Hashempour",
"Fangxun Zhong"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "anggpLuJ6sv",
"year": null,
"venue": "CHI 2020",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=anggpLuJ6sv",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Adaptive Photographic Composition Guidance",
"authors": [
"Jane L. E",
"Ohad Fried",
"Jingwan Lu",
"Jianming Zhang",
"Radomír Mech",
"Jose Echevarria",
"Pat Hanrahan",
"James A. Landay"
],
"abstract": "Photographic composition is often taught as alignment with composition grids-most commonly, the rule of thirds. Professional photographers use more complex grids, like the harmonic armature, to achieve more diverse dynamic compositions. We are interested in understanding whether these complex grids are helpful to amateurs. In a formative study, we found that overlaying the harmonic armature in the camera can help less experienced photographers discover and achieve different compositions, but it can also be overwhelming due to the large number of lines. Photographers actually use subsets of lines from the armature to explain different aspects of composition. However, this occurs mainly offline to analyze existing images. We propose bringing this mental model into the camera-by adaptively highlighting relevant lines to the current scene and point of view. We describe a saliency-based algorithm for selecting these lines and present an evaluation of the system that shows that photographers found the proposed adaptive armatures helpful for capturing more well-composed images.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "S1VdoW-d-r",
"year": null,
"venue": "WWW (Companion Volume) 2018",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3184558.3191826",
"forum_link": "https://openreview.net/forum?id=S1VdoW-d-r",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Incorporating Statistical Features in Convolutional Neural Networks for Question Answering with Financial Data",
"authors": [
"Shijia E",
"Shiyao Xu",
"Yang Xiang"
],
"abstract": "The goal of question answering with financial data is selecting sentences as answers from the given documents for a question. The core of the task is computing the similarity score between the question and answer pairs. In this paper, we incorporate statistical features such as the term frequency-inverse document frequency (TF-IDF) and the word overlap in convolutional neural networks to learn optimal vector representations of question-answering pairs. The proposed model does not depend on any external resources and can be easily extended to other domains. Our experiments show that the TF-IDF and the word overlap features can improve the performance of basic neural network models. Also, with our experimental results, we can prove that models based on the margin loss training achieve better performance than the traditional classification models. When the number of candidate answers for each question is 500, our proposed model can achieve 0.622 in Top-1 accuracy (Top-1), 0.654 in mean average precision (MAP), 0.767 in normalized discounted cumulative gain (NDCG), and 0.701 in bilingual evaluation understudy (BLEU). If the number of candidate answers is 30, all the values of the evaluation metrics can reach more than 90%.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "qRy3NRrY6x",
"year": null,
"venue": "CoRR 2020",
"pdf_link": "http://arxiv.org/pdf/2006.02619v1",
"forum_link": "https://openreview.net/forum?id=qRy3NRrY6x",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Integrating Machine Learning with Physics-Based Modeling",
"authors": [
"Weinan E",
"Jiequn Han",
"Linfeng Zhang"
],
"abstract": "Machine learning is poised as a very powerful tool that can drastically improve our ability to carry out scientific research. However, many issues need to be addressed before this becomes a reality. This article focuses on one particular issue of broad interest: How can we integrate machine learning with physics-based modeling to develop new interpretable and truly reliable physical models? After introducing the general guidelines, we discuss the two most important issues for developing machine learning-based physical models: Imposing physical constraints and obtaining optimal datasets. We also provide a simple and intuitive explanation for the fundamental reasons behind the success of modern machine learning, as well as an introduction to the concurrent machine learning framework needed for integrating machine learning with physics-based modeling. Molecular dynamics and moment closure of kinetic equations are used as examples to illustrate the main issues discussed. We end with a general discussion on where this integration will lead us to, and where the new frontier will be after machine learning is successfully integrated into scientific modeling.",
"keywords": [],
"raw_extracted_content": "Integrating Machine Learning with Physics-Based Modeling\nWeinan E1,2, Jiequn Han1, and Linfeng Zhang2\n1Department of Mathematics, Princeton University\n2Program in Applied and Computational Mathematics, Princeton University\nAbstract\nMachine learning is poised as a very powerful tool that can drastically improve\nour ability to carry out scienti\fc research. However, many issues need to be ad-\ndressed before this becomes a reality. This article focuses on one particular issue of\nbroad interest: How can we integrate machine learning with physics-based modeling\nto develop new interpretable and truly reliable physical models? After introducing\nthe general guidelines, we discuss the two most important issues for developing ma-\nchine learning-based physical models: Imposing physical constraints and obtaining\noptimal datasets. We also provide a simple and intuitive explanation for the fun-\ndamental reasons behind the success of modern machine learning, as well as an\nintroduction to the concurrent machine learning framework needed for integrating\nmachine learning with physics-based modeling. Molecular dynamics and moment\nclosure of kinetic equations are used as examples to illustrate the main issues dis-\ncussed. We end with a general discussion on where this integration will lead us to,\nand where the new frontier will be after machine learning is successfully integrated\ninto scienti\fc modeling.\n1 Fundamental laws and practical methodologies\nPhysics is centered on two main themes: the search for fundamental laws and the solution\nof practical problems. The former has resulted in Newton's laws, Maxwell equations,\nthe theory of relativity and quantum mechanics. The latter has been the foundation of\nmodern technology, ranging from automobiles, airplanes, computers to cell phones. In\n1929, after quantum mechanics was just discovered, Paul Dirac made the following claim\n[1]:\n\\The underlying physical laws necessary for the mathematical theory of a large part\nof physics and the whole of chemistry are thus completely known, and the di\u000eculty is\nonly that the exact application of these laws leads to equations much too complicated to\nbe soluble. \"\nWhat happened since then has very much con\frmed Dirac's claim. It has been uni-\nversally agreed that to understand problems in chemistry, biology, material science, and\nengineering, one rarely needs to look further than quantum mechanics for \frst princi-\nples. But solving practical problems using quantum mechanics principles, for example,\n1arXiv:2006.02619v1 [physics.comp-ph] 4 Jun 2020\nthe Schr odinger equation, is a highly non-trivial matter, due, among other things, to\nthe many-body nature of the problem. To overcome these mathematical di\u000eculties, re-\nsearchers have proceeded along the following lines:\n1. Looking for simpli\fed models. For example, Euler's equations are often enough for\nstudying gas dynamics. There is no need to worry about the detailed electronic\nstructure entailed in the Schr odinger equation.\n2. Finding approximate solutions using numerical algorithms and computers.\n3. Multi-scale modeling: In some cases one can model the behavior of a system at the\nmacroscopic scale by using only a micro-scale model.\nWe will brie\ry discuss each.\n1.1 Seeking simpli\fed models\nSeeking simpli\fed models that either capture the essence of a problem or describe some\nphenomenon to a satisfactory accuracy has been a constant theme in physics. 
Ideally, we\nwould like our simpli\fed models to have the following properties:\n1. express fundamental physical principles (e.g. conservation laws),\n2. obey physical constraints (e.g. symmetries, frame-indi\u000berence),\n3. be as universally accurate as possible: Ideally one can conduct a small set of exper-\niments in idealized situations and obtain models that can be used under much more\ngeneral conditions,\n4. be physically meaningful (interpretable).\nEuler's equations for gas dynamics is a very successful example of simpli\fed physical\nmodels. It is much simpler than the \frst principles of quantum mechanics, and it is a\nvery accurate model for dense gases. For ideal gases, the only parameter required is the gas\nconstant. For complex gases, one needs the entire equation of state, which is a function of\nonly two variables. Other success stories include the Navier-Stokes equations for viscous\n\ruids, linear elasticity equations for small deformations of solids, and the Landau theory\nof phase transition.\nUnfortunately, not every e\u000bort in coming up with simpli\fed models has been as suc-\ncessful. A good example is the e\u000bort of developing extended Euler equations for rari\fed\ngas. Since the work of Grad [2], there have been numerous e\u000borts on developing Euler-like\nmodels for the dynamics of gases at larger Knudsen numbers. So far this e\u000bort has not\nproduced any widely accepted models yet.\nTo put things into perspective, we brie\ry discuss the methodologies that people use\nto obtain simpli\fed models.\n2\nFigure 1: Representative physical models at di\u000berent scale and their most important\nmodeling ingredients.\n1. Generalized hydrodynamics [3]. The idea is to use symmetries, conservation laws,\nand the second law of thermodynamics to extract as much de\fnitive information\nabout the dynamics as possible, and model the rest using linear constitutive re-\nlations. A successful example is the Ericksen-Leslie equations for nematic liquid\ncrystals, which works reasonably well in the absence of line defects [4].\n2. The weakly nonlinear theory and gradient expansion trick. This is an idea cham-\npioned by Landau. It has been successfully used in developing models for phase\ntransition, superconductivity, hydrodynamic instability, and a host of other inter-\nesting physical problems.\n3. Asymptotic analysis. This is a mathematical tool that can be used to systematically\nextract simpli\fed models by exploiting the existence of some small parameters. For\nexample, Euler's equation can be derived this way from the Boltzmann equation in\nthe limit of small Knudsen number.\n4. The Mori-Zwanzig formalism. This is a general strategy for eliminating unwanted\ndegrees of freedom in a physical model. The price to be paid is that the resulting\nmodel becomes nonlocal, for example, with memory e\u000bects. For this reason, the\napplication of the Mori-Zwanzig formalism has been limited to situations where the\nnonlocal term is linear. An example is the generalized Langevin equations [5].\n5. Principal component-based model reduction. This is a broad class of techniques used\nin engineering for developing simpli\fed models. A typical example is to project the\noriginal model onto a few principal components obtained using data produced from\n3\nthe original model. Well-known applications of this technique include reduced mod-\nels that aim to capture the dynamics of large scale coherent structures in turbulent\n\rows [6].\n6. Drastically truncated numerical schemes. 
The well-known Lorenz system is an ex-\nample of this kind. It is the result of a drastic truncation of the two-dimensional\nincompressible Navier-Stokes-Boussinesq equation modeling thermal convection, an\nidealized model for weather prediction. By keeping only three leading modes for the\nstream-function and temperature, one arrives at the system\n_x=\u001b(y\u0000x);_y=\u0000xz+rx\u0000y; _z=xy\u0000bz;\nwhere\u001bis the Prandtl number, r=Ra=Racis the normalized Rayleigh number,\nandbis some other parameter. A well-known feature of this model is that it exhibits\nchaotic behavior, and this has been used as an indication for the intrinsic di\u000eculties\nin weather prediction. Of course, the obvious counter-argument is how well the\nLorenz system captures the dynamics of the original model.\n1.2 Numerical algorithms\nSince analytical solutions are rare even after we simplify the models, one has to resort to\nnumerical algorithms to \fnd accurate approximate solutions. Many numerical algorithms\nhave been developed for solving the partial di\u000berential equations (PDEs) that arise from\nphysics, including \fnite di\u000berence, \fnite element and spectral methods. The availability\nof these algorithms has completely changed the way we do science, and to an even greater\nextent, engineering. For example nowadays numerical computation plays a dominant role\nin the study of \ruid and solid mechanics. Similar claims can be made for atmospheric\nscience, combustion, material sciences and a host of other disciplines, though possibly to\na lesser extend.\nRoughly speaking, one can say that we now have satisfactory algorithms for low dimen-\nsional problems (say three dimensions). But things quickly become much more di\u000ecult as\nthe dimensionality goes up. A good example is the Boltzmann equation: The dimension-\nality of the phase space and the nonlocality in the collision kernel makes it quite di\u000ecult\nto solve the Boltzmann equation using the kinds of algorithms mentioned above, even\nthough the dimensionality is small compared to the ones encountered in the many-body\nSchr odinger equation.\nThis brings us to the issue that lies at the core of the many di\u000ecult problems that we\nface: The curse of dimensionality (CoD) : As the dimensionality grows, the complexity\n(or computational cost) grows exponentially.\n1.3 Multi-scale modeling\nOne important idea for overcoming the di\u000eculties mentioned above is multi-scale modeling\n[5], a general philosophy based on modeling the behavior of macro-scale systems using\nreliable micro-scale models, instead of relying on ad hoc macro-scale models. The idea\nis to make use of the results of the micro-scale model on much smaller spatial-temporal\n4\ndomains to predict the kind of macro-scale quantities that we are interested in [5]. This\ngeneral philosophy is valid for a wide range of scienti\fc disciplines. But until now, the\nsuccess of multi-scale modeling has been less spectacular than what was expected twenty\nyears ago. The following challenges contributed to this.\n1. The micro-scale models are often not that reliable. For example when studying\ncrack propagation, we often use molecular dynamics as the micro-scale model. But\nthe accuracy of molecular dynamics models for dynamic processes that involve bond\nbreaking is often questionable.\n2. Even though multi-scale modeling can drastically reduce the size of the micro-scale\nsimulation required, it is still beyond our current capability.\n3. 
The key bene\ft we are exploiting with multi-scale modeling is the separation of\nthe micro- and macro-scales of the problem. But for the most interesting and most\nchallenging problems, this often breaks down.\n4. At a technical level, e\u000ecient multi-scale modeling requires e\u000bective algorithms for\nextracting the relevant information needed from micro-scale simulations. This is a\ndata analysis issue that has not been adequately addressed.\nThere are two basic strategies for multi-scale modeling, the sequential multi-scale\nmodeling and the concurrent multi-scale modeling. In sequential multi-scale modeling,\nthe needed components from the micro-scale model, for example, the constitutive rela-\ntion, are obtained beforehand, and this information is then supplied to some macro-scale\nmodel. For this reason, sequential multi-scale modeling is also called \\precomputing\". In\nconcurrent multi-scale modeling, the coupling between the macro- and micro-scale models\nis done on the \ry as the simulation proceeds. Sequential multi-scale modeling results in\nnew models. Concurrent multi-scale modeling results in new algorithms.\n1.4 Di\u000eculties that remain\nA lot of progresses have been made using these methodologies in combination with physical\ninsight as well as trial and error parameter \ftting. This has enabled us to solve a wide\nvariety of problems, ranging from performing density functional theory (DFT) calculations\nto predict properties of materials and molecules, to studying the climate using general\ncirculation models. In spite of these advances, many issues remain as far as getting good\nmodels is concerned, and many problems remain di\u000ecult. Here are some examples:\n1. A crucial component of the Kohn-Sham DFT is the exchange-correlation functional.\nHowever, systematically developing e\u000ecient and accurate exchange-correlation func-\ntionals is still a very di\u000ecult task.\n2. The most important component of a molecular dynamics model is the potential\nenergy surface (PES) that describes the interaction between the nuclei in the sys-\ntem. Accurate and e\u000ecient models of PES has always been a di\u000ecult problem in\nmolecular dynamics. We will return to this issue later.\n5\n3. An attractive idea for modeling the dynamics of macromolecules is to develop coarse-\ngrained molecular dynamics models. However, implementing this idea in practice is\nstill a di\u000ecult task.\n4. Developing hydrodynamic models for non-Newtonian \ruids is such a di\u000ecult task\nthat the subject itself has lost some steam.\n5. Moment closure models for rari\fed gases. This was mentioned above and will be\ndiscussed in more detail later.\n6. For \ruids, we have the Navier-Stokes equations. What is the analog for solids?\nBesides linear elasticity models, there are hardly any other universally accepted\ncontinuum models of solids. This comment applies to nonlinear elasticity. Plasticity\nis even more problematic.\n7. Turbulence models. This has been an issue ever since the work of Reynolds and we\nstill do not have a systematic and robust way of addressing this problem.\nFrom the viewpoint of developing models, one major di\u000eculty has always been the\n\\closure problem\": When constructing simpli\fed models, we encounter terms that need\nto be approximated in order to obtain a closed system. Whether accurate closure can be\nachieved also depends in an essential way on the level at which we impose the closure, i.e.\nthe variables we use to close the system. 
For example, for DFT models, it is much easier\nto perform the required closure for orbital-based models than for orbital-free models. For\nturbulence models, the closure problem is much simpli\fed in the context of large eddy\nsimulation than Reynolds average equations, since a lot more information is kept in the\nlarge eddy simulation.\nFrom the viewpoint of numerical algorithms, these problems all share one important\nfeature: There are a lot of intrinsic degrees of freedom. For example turbulent \rows are\ngoverned by the Navier-Stokes equations, which is a low dimensional problem, but its\nhighly random nature means that we should really be looking for its statistical descrip-\ntion, which then becomes a very high dimensional problem with the dimensionality being\nproportional to the range of active scales. Existing numerical algorithms can not handle\nthese intrinsically high dimensional problems e\u000bectively.\nIn the absence of systematic approaches, one has to resort to ad hoc procedures which\nare not just unpleasant but also unreliable. Turbulence modeling is a very good example\nof the kind of pain one has to endure in order to address practical problems.\n1.5 Machine learning comes to the rescue\nRecent advance in machine learning o\u000bers us unprecedented power for approximating\nfunctions of many variables. This allows us to go back to all the problems that were made\ndi\u000ecult by CoD. It also provides an opportunity to reexamine the issues discussed above,\nwith a new added dimension.\nThere are many di\u000berent ways in which machine learning can be used to help solving\nproblems that arise in science and engineering. We will focus our discussion on the\nfollowing issue:\n6\n\u000fHow can we use machine learning to \fnd new interpretable and truly reliable\nphysical models?\nWhile machine learning can be a very powerful tool, it usually does not work so well\nwhen used as a blackbox. One main objective of this article is to discuss the important\nissues that have to be addressed when integrating machine learning with physics-based\nmodeling.\nHere is a list of basic requirements for constructing new physical models with the help\nof machine learning:\n1. The models should satisfy the requirements listed in Section 1.1.\n2. The dataset that one uses to construct the model should be a good representation\nof all the practical situations that the model is intended for. It is one thing to \ft\nsome data. It is quite another thing to construct reliable physical models.\n3. To reduce the amount of human intervention, the process of constructing the models\nshould be end-to-end.\nLater we will examine these issues in more detail after a brief introduction to machine\nlearning.\n2 Machine learning\nBefore getting into the integration of machine learning with physics-based modeling, let\nus brie\ry discuss an issue that is in many people's mind: What is the magic behind\nneural network-based machine learning? While this question is still at the center of very\nintensive studies in theoretical machine learning, some insight can already be gained from\nvery simple considerations.\nWe will discuss a basic example, supervised learning: Given ther dataset S=f(xj;yj=\nf\u0003(xj));j= 1;2;\u0001\u0001\u0001;ng, learnf\u0003. Herefxj;j= 1;2;\u0001\u0001\u0001ngis a set of points in the ddi-\nmensional Euclidean space. For simplicity we have neglected measurement noises. If the\n\\labels\"fyjgtake discrete values, the problem is called a classi\fcation problem. 
Oth-\nerwise it is called a regression problem. In practice, one divides Sinto two subsets, a\ntraining set used to train the model and a testing set used to test the model.\nA well-known example is the classi\fcation example of Cifar 10 dataset [7]. Here the\ntask is to classify the images in the dataset into 10 categories. Each image has 32 \u000232\npixels. Thus it can be viewed as a point in 32 \u000232\u00023 = 3072 dimensional space. The\nlast factor of 3 comes from the dimensionality in the color space. Therefore this problem\ncan be viewed as approximating a discrete-valued function of 3072 variables.\nThe essence of the supervised learning problem is the approximation of a target func-\ntion using a \fnite dataset. While the approximation of functions is an ancient topic,\nin classical approximation theory, we typically use polynomials, piece-wise polynomials,\nwavelets, splines and other linear combinations of \fxed basis functions to approximate a\ngiven function. These kinds of approximations su\u000ber from CoD:\nf\u0003\u0000fm\u0018m\u0000\u000b=d\u0000(f\u0003);\n7\nwheremis the number of free parameters in the approximation scheme, \u000bis some \fxed\nconstant depending on the approximation scheme, \u0000( f\u0003) is an error constant depending\non the target function f\u0003. Take\u000b= 1. If we want to reduce the error by a factor of 10,\nthe number of parameters needed goes up by 10d. This is clearly impractical when the\ndimensionality dis large.\nThe one area in which we have lots of experiences and successes in handling high\ndimensional problems is statistical physics, the computation of high dimensional expec-\ntations using Monte Carlo (MC) methods. Indeed computing integrals of functions with\nmillions of variables has become a routine practice in statistical physics that we forget to\nnotice how remarkable this is. Let gand\u0016be a function and a probability distribution\nin theddimensional Euclidean space respectively, and let\nI(g) =Ex\u0018\u0016g(x); Im(g) =1\nmX\njg(xj);\nwherefxjgis a sequence of i.i.d. samples of the probability distribution \u0016. Then we have\nE(I(g)\u0000Im(g))2=var(g)\nm;var(g) =Z\nXg2(x)dx\u0000\u0012Z\nXg(x)dx\u00132\n:\nNote that the error rate 1 =pnis independent of the dimensionality of the problem. Had\nwe used grid-based algorithms such as the Trapezoidal rule, we would have instead:\nI(g)\u0000Im(g)\u0018m\u0000\u000b=d\u0000(g);\nresulting in the CoD. Note also that var( g) can be very large in high dimensions. This is\nwhy variance reduction is a central theme in MC.\nThe MC story is the golden standard for dealing with high dimensional problems. A\nnatural question is: Can we turn the function approximation problem, the key issue in\nsupervised learning, into MC integration type of problems? To gain some insight, let us\nconsider the Fourier representation of functions:\nf\u0003(x) =Z\nRda(!)ei(!;x)d!: (1)\nWe are used to approximate this expression by some grid-based discrete Fourier transform\nf\u0003(x)\u00181\nmX\nja(!j)ei(!j;x); (2)\nand this approximation su\u000bers from CoD.\nIf instead that we consider functions represented by\nf\u0003(x) =Z\nRda(!)ei(!;x)\u0019(d!) =E!\u0018\u0019a(!)ei(!;x); (3)\nwhere\u0019is a probability distribution on the Euclidean space Rd, then the natural approx-\nimation becomes\nf\u0003(x)\u00181\nmmX\nj=1a(!j)ei(!j;x); (4)\n8\nwheref!jgare i.i.d. samples of \u0019. For reasons discussed earlier, this approximation does\nnot su\u000ber from CoD. 
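As a minimal numerical illustration of this point, consider the toy integrand g(x) = cos(x_1)···cos(x_d) on [0,1]^d, whose exact integral is (sin 1)^d. The short script below (the integrand, sample sizes, and seed are chosen purely for illustration and are not taken from the text) shows that the Monte Carlo error decays like m^{-1/2} for d = 2 and d = 20 alike, whereas a tensor-product quadrature with the same budget would only afford m^{1/d} points per coordinate direction.

```python
import numpy as np

# Toy integrand: g(x) = prod_k cos(x_k) on [0,1]^d, exact integral (sin 1)^d.
g = lambda x: np.cos(x).prod(axis=1)

rng = np.random.default_rng(0)
for d in (2, 20):
    exact = np.sin(1.0) ** d
    for m in (10**3, 10**5):
        x = rng.random((m, d))          # m i.i.d. uniform samples in [0,1]^d
        err = abs(g(x).mean() - exact)
        print(f"d={d:2d}  m={m:6d}  error={err:.1e}  sqrt(m)*error={np.sqrt(m)*err:.2f}")
# The error decreases like 1/sqrt(m) regardless of d; a tensor-product grid with the
# same number of points would have only m**(1/d) points per coordinate direction.
```

The constant in front of the m^{-1/2} rate is governed by var(g), which is why, as noted above, variance reduction rather than the convergence rate is the central practical concern.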
The right hand side of (4) is an example of neural network functions\nwith one hidden layer and activation function \u001bde\fned by \u001b(z) =eiz. If one wants a\none-sentence description of the magic behind neural network models, the reasoning above\nshould be a good candidate.\nThere is one key di\u000berence between the supervised learning problem and MC integra-\ntion: In MC, the probability distribution \u0016is given, usually in fairly explicit form. In\nsupervised learning, the probability distribution \u0019is unknown. Instead we are only given\ninformation about the target function f\u0003on a \fnite dataset. Therefore it is infeasible to\nperform the approximation in (4). Instead one has to replace it by some optimization\nalgorithms based on \ftting the dataset.\n3 Concurrent machine learning\nIn classical supervised learning, the labeled dataset is given beforehand and learning is\nperformed afterwards. This is usually not the case when machine learning is used in\nconnection with physical models. Instead, data generation and training is an interactive\nprocess: Data is generated and labeled on the \ry as model training proceeds. In analogy\nwith multi-scale modeling, we refer to the former class of problems \\sequential machine\nlearning\" problems and the latter kind \\concurrent machine learning\" problems.\nActive learning is something in between. In active learning, we are given the unla-\nbeled data, and we decide which ones should be labeled during the training process. In\nconcurrent learning, one also needs to generate the unlabeled data.\nTo come up with reliable physical models, one has to have reliable data: The dataset\nshould be able to represent all the situations that the model is intended for. On the\nother hand, since data generation typically involves numerical solutions of the underlying\nmicro-scale model, a process that is often quite expensive, we would like to have as small\na dataset as possible. This calls for a procedure that generates the data adaptively in an\ne\u000ecient way.\n3.1 The ELT algorithm\nThe (exploration-labeling-training) ELT algorithm was \frst formulated in Ref. [8] al-\nthough similar ideas can be traced back for a long time. Starting with no (macro-scale)\nmodel and no data but with a micro-scale model, the ELT algorithm proceeds iteratively\nwith the following steps:\n1.exploration : explore the con\fguration space, and decide which con\fgurations need\nto be labeled. The actual implementation of this step requires an algorithm for\nexploring the con\fguration space and a criterion for deciding which con\fgurations\nneed to be labeled.\nOftentimes the current macro-scale model is used to help with the exploration.\n2.labeling : compute the micro-scale solutions for the con\fgurations that need to be\nlabeled, and place them in the training dataset.\n9\n3.training : train the needed macro-scale model.\nThe ELT algorithm requires the following components: a macro-scale explorer, an\nindicator to evaluate whether a given con\fguration should be labeled, a micro-scale model\nused for the labeling, and a machine learning model for the quantities of interest.\nOne way of deciding whether a given con\fguration needs to be labeled is to estimate\nthe error of the current machine learning model at the given con\fguration. 
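The loop can be made concrete with a schematic sketch. The snippet below is a toy instantiation rather than the implementation used in [8]: the expensive micro-scale model is mimicked by a cheap analytic function, the surrogate is an ensemble of random-feature least-squares fits, exploration is plain random sampling instead of a simulation driven by the current model, and a configuration is sent for labeling when the ensemble predictions disagree by more than a threshold; this is precisely the kind of error indicator discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive micro-scale model (e.g. an electronic structure calculation).
def micro_model(x):
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

def features(x, W, b):
    return np.tanh(x @ W + b)

def train(X, y, n_models=4, width=64):
    """Train an ensemble of random-feature least-squares fits on the same data."""
    models = []
    for _ in range(n_models):
        W, b = rng.normal(size=(2, width)), rng.normal(size=width)
        coef, *_ = np.linalg.lstsq(features(X, W, b), y, rcond=None)
        models.append((W, b, coef))
    return models

def predict(models, x):
    return np.stack([features(x, W, b) @ c for W, b, c in models])  # (n_models, n_points)

# Start with a handful of labeled configurations.
X = rng.uniform(-1, 1, size=(10, 2))
y = micro_model(X)
models = train(X, y)

for it in range(5):
    cand = rng.uniform(-1, 1, size=(500, 2))     # exploration (here: random sampling)
    spread = predict(models, cand).std(axis=0)   # disagreement of the ensemble
    new = cand[spread > 0.05]                    # label only where the model is uncertain
    X = np.vstack([X, new])
    y = np.concatenate([y, micro_model(new)])    # labeling with the micro-scale model
    models = train(X, y)                         # (re)training
    print(f"iteration {it}: labeled {len(new)} new configurations, dataset size {len(X)}")
```

In a realistic application the exploration step would instead be, for instance, a molecular dynamics run driven by the current surrogate, and the labeling step an electronic structure calculation.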
When neural\nnetwork models are used, a convenient error indicator can be constructed as the variance\nbetween the predictions given by an ensemble of neural network models (say with the same\narchitecture but di\u000berent initialization of the optimization algorithm). If the variance is\nsmall, then the predictions from di\u000berent models are close to each other. In this case,\nit is likely that these predictions are already quite accurate at the given con\fguration.\nTherefore this con\fguration does not need to be labeled. Otherwise, it is likely that the\npredictions are inaccurate and the given con\fguration should be labeled.\n3.2 Variational Monte Carlo\nAnother example where concurrent learning might be relevant is machine learning-based\napproach for variational Monte Carlo algorithms. Variational Monte Carlo (VMC) [9, 10],\nfor solving the Schr odinger equation was among the \frst set of applications of machine\nlearning in computational science [11, 12]. In these and other similar applications, one\nbegins with no data and no solution, the data is generated on the \ry as the solution process\nproceeds. Given a Hamiltonian ^Hwith the state variable x, the variational principle states\nthat the wave-function \t associated with the ground state minimizes within the required\nsymmetry the following energy functional\nE[\t] =R\n\t\u0003(x)^H\t(x)dxR\n\t\u0003(x)\t(x)dx=Ex\u0018j\t(x)j2^H\t(x)\n\t(x): (5)\nIn a machine learning-based approach, one parameterizes the wave-function using some\nmachine learning model, \t \u0018\t\u0012with some parameter \u0012that needs to determined. In\nthis case both the integrand and the distribution in (5) depend on \u0012. To \fnd the optimal\n\u0012, the learning process consists of two alternating phases, the sampling phase and the\noptimization phase. In the sampling phase, a sampling algorithm is used to sample the\nprobability distribution generated by the current approximation of \t2. This sample is\nused to evaluate the loss function in (5). In the optimization phase, one iteration of the\noptimization algorithm is applied to the loss function. This procedure also resembles the\nEM (expectation-maximization) algorithm.\n4 Molecular modeling\nOne of the most successful applications of machine learning to scienti\fc modeling is in\nmolecular dynamics. Molecular dynamics (MD) is a way of studying the atomic scale\nproperty of materials and molecules by tracking the dynamics of the nuclei in the system\nusing classical Newtonian dynamics. The key issue in MD is how to model the potential\n10\nenergy surface (PES) that describes the interaction between the nuclei. This is a function\nthat depends on the position of all the nuclei in the system:\nE=E(x1;x2;:::;xi;:::;xN);\nTraditionally, there have been two ways to deal with this problem. The \frst is to compute\nthe inter-atomic forces on the \ry using \frst principles-based models such as the DFT [13].\nThis is the so-called ab initio molecular dynamics pioneered by Car and Parrinello [14].\nThis approach gives an accurate description of the system under consideration. However,\nit is computationally very expensive, limiting the size of the system that one can handle\nto thousands of atoms. At the other extreme is the approach of using empirical potentials.\nIn a nutshell this approach aims at modeling the PES with empirical formulas. 
A well-\nknown example is the Lennard-Jones potential, a reasonable model for describing the\ninteraction between inert atoms:\nVij= 4\u000f\u0012\n(\u001b\nrij)12\u0000(\u001b\nrij)6\u0013\n; E =1\n2X\ni6=jVij:\nEmpirical potential-based MD is very e\u000ecient, but guessing the right formula that can\nmodel the PES accurately enough is understandably a very di\u000ecult task, particularly\nfor complicated systems such as high entropy alloys. In practice one has to resort to\na combination of physical insight and ad hoc approximation. Consequently the gain in\ne\u000eciency is at the price of reliability. In particular, since these empirical potentials are\ntypically calibrated using data for equilibrium systems, their accuracy at other interesting\ncon\fgurations, such as transition states, is often in doubt.\nThe pioneering work of Behler and Parrinello introduced a new paradigm for perform-\ningab initio molecular dynamics [15]. In this new paradigm, a \frst principle-based model\nacts as the vehicle for generating highly accurate data. Machine learning is then used to\nparametrize this data and output a model of the PES with ab initio accuracy. See also\nthe work of Csanyi et al [16].\nBehler and Parrinello introduced a neural network architecture that is naturally ex-\ntensive (see Figure 2). In this architecture, each nucleus is associated with a subnetwork.\nWhen the system size is increased, one just has to append the existing network with more\nsubnetworks corresponding to the new atoms added.\nAs discussed before, to construct truly reliable models of PES, one has to address two\nproblems. The \frst is getting good data. The second is imposing physical constraints.\nLet us discuss the second problem \frst. The main physical constraints for modeling\nthe PES are the symmetry constraints. Naturally the PES model should be invariant\nunder translational and rotational symmetries. It should also be invariant under the\nrelabeling of atoms of the same chemical species. This is a permutational symmetry\ncondition. Translational symmetry is automatically taken care of by putting the origin of\nthe local coordinate frame for each nucleus at that nucleus. To deal with other symmetry\nconstraints, Behler and Parrinello resorted to constructing \\local symmetry functions\"\nusing bonds, bond angles, etc., in the same spirit as the constructions in the Stillinger-\nWeber potential [18]. This construction is a bit ad hoc and is vulnerable to the criticism\nmentioned earlier for empirical potentials.\n11\nFigure 2: Illustration of the neural network architecture for a system of Natoms. From\nRef. [17].\nTo demonstrate the importance of preserving the symmetry, we show in Figure 3 the\ncomparison of the results with and without imposing symmetry constraints. One can\nsee that without enforcing the symmetry, the test accuracy of the neural network model\nis rather poor. Even a poor man's version of enforcing symmetry can improve the test\naccuracy drastically.\nA poor man's way of enforcing symmetry is to remove by hand the degrees of freedom\nassociated with these symmetries. This can be accomplished as follows [19, 17]:\n\u000fenforcing rotational symmetry by \fxing in some way a local frame of reference. For\nexample, for atom i, we can use its nearest neighbor to de\fne the x-axis and use\nthe plane spanned by its two nearest neighbors to de\fne the z-axis, thereby \fxing\nthe local frame.\n\u000fenforcing permutational symmetry by \fxing an ordering of the atoms in the neigh-\nborhood. 
For example, within each species we can sort the atoms according to their\ndistances to the reference atom i.\nThis simple operation allows us to obtain neural network models with very good accuracy,\nas shown in Figure 3 (a).\nThe only problem with this procedure is that it creates small discontinuities when the\nordering of the atoms in a neighborhood changes. These small discontinuities, although\nnegligible for sampling a canonical ensemble, show up drastically in a microcanonical\nmolecular dynamics simulation, as can be seen in Figure 3 (b).\nTo construct a smooth potential energy model that satis\fes all the symmetry con-\nstraints we have to reconsider how to represent the general form of symmetry-preserving\nfunctions. The idea is to precede the \ftting network in each subnetwork by an embed-\nding network that produces a su\u000ecient number of symmetry-preserving functions. In this\nway the end model is automatically symmetry-preserving. The PES generated this way\nis called (the smooth version of) Deep Potential [20]. MD driven by Deep Potential is\ncalled DeePMD. The idea of embedding network is fairly general and can be used in other\nsituations where symmetry is an important issue [21, 22].\n12\nFigure 3: (a) Test accuracy without imposing the permutational symmetry (blue) and\nwith a poor man's way of enforcing the permutational constraints (red). (b) Total en-\nergy per atom as a function of time from microcanonical DeePMD simulations, using a\npoor man's way to impose symmetry constraints (blue) and using the smooth embedding\nnetwork (red).\nTo construct the general form of symmetry-preserving functions, we draw inspiration\nfrom the following two observations.\nTranslation and Rotation. The matrix \n ij\u0011ri\u0001rjis an over-complete array of\ninvariants with respect to translation and rotation [23, 24], i.e., it contains the complete\ninformation of the point pattern r. However, this symmetric matrix switches rows and\ncolumns under a permutational operation.\nPermutation. We consider a generalization of Theorem 2 of Ref. [25]: A function\nf(r1;:::;ri;:::;rN) is invariant to the permutation of instances in ri, if and only if it can\nbe decomposed into the form \u001a(P\nig(ri)ri), for suitable transformations gand\u001a.\nWe now go back to the \frst problem, the generation of good data using the ELT\nalgorithm. The objective is to develop a procedure that can generate Deep Potentials for\na given system that can be used in a wide variety of thermodynamic conditions. In the\nimplementation in [26], exploration was done using the following ideas. At the macro-\nscale level, one samples the ( T;p) space of thermodynamic variables in some way. For each\nvalue of (T;p), one samples the canonical ensemble, using MD with the Deep Potential\navailable at the current iteration. In addition, the e\u000eciency of the exploration can be\nimproved by initializing the MD using a variety of di\u000berent initial con\fgurations, say\ndi\u000berent crystal structures. Labeling was done using the Kohn-Sham DFT with periodic\nboundary condition. Training was done using Deep Potential. An example is given in\nFigure 4. 
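Before turning to the data-generation example of Figure 4, the two observations above can be checked numerically with a minimal sketch. The descriptor below is a simplified stand-in for the embedding-network construction (a fixed exponential weight plays the role of the learned embedding g, and only two channels are used): summing over the neighbors enforces permutation invariance, and forming a Gram matrix of the summed vectors enforces rotation invariance.

```python
import numpy as np

rng = np.random.default_rng(1)

def descriptor(r):
    """r: (N, 3) coordinates of the neighbors of one atom, relative to that atom."""
    d = np.linalg.norm(r, axis=1, keepdims=True)   # |r_i|, unchanged by rotations
    g = np.exp(-d)                                 # stand-in for a learned embedding g(|r_i|)
    t1 = (g * r).sum(axis=0)                       # sum_i g(|r_i|) r_i  -> permutation invariant
    t2 = (g * d * r).sum(axis=0)                   # a second channel of the same form
    M = np.stack([t1, t2])                         # (2, 3)
    return (M @ M.T).ravel()                       # Gram matrix -> rotation invariant

r = rng.normal(size=(8, 3))                        # 8 neighbors
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))       # a random orthogonal matrix
perm = rng.permutation(len(r))

print(np.allclose(descriptor(r), descriptor(r @ Q.T)))   # rotating all neighbors: True
print(np.allclose(descriptor(r), descriptor(r[perm])))   # relabeling the neighbors: True
```

Replacing the fixed weight by a trainable function of the distances and using many more channels gives symmetry-preserving descriptors in the spirit of the smooth Deep Potential, without the discontinuities of the ordering-based construction.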
In the example of Figure 4, only about 0.005% of the explored configurations were labeled [26]. In particular, the Deep Potential constructed this way satisfies all the requirements listed in Section 1.5.

Figure 4: Schematic plot of one iteration of the DP-GEN scheme, taking the Al-Mg system as an example. (a) Exploration with DeePMD. (a.1) Preparation of initial structures. We start from stable crystalline structures of pure Al and Mg, compress and dilate the stable structures uniformly to allow for a larger range of number densities, and then randomly perturb the atomic positions and cell vectors of all the initial crystalline structures. Surface-related structures are generated with rigid displacement. Based on configurations of pure metal we also generate random alloy structures. (a.2) Canonical simulation at a given temperature. (b) Labeling with electronic structure calculations. (c) Training with the DP model. From [26].

As a first example of the applications of these ideas, we show in Figure 5 the accuracy of the Deep Potential for a wide variety of systems, from small molecules, to large molecules, to oxides and high entropy alloys. One can see that in all the cases studied, Deep Potential achieves an accuracy comparable to that of the underlying DFT. As a second example, we show the results of using Deep Potential to study the structural information of liquid water (see Figure 6). One can see that the results for the radial and angular distribution functions are comparable to the results from ab initio MD. As a third example, combined with the state-of-the-art high-performance computing platform (Summit), molecular dynamics simulations with ab initio accuracy have been pushed to systems of up to 100 million atoms, making it possible to study more complex phenomena that require truly large-scale simulations [27, 28]. An application to nanocrystalline copper was presented in Ref. [28]. Deep Potential and DP-GEN have been implemented in the open-source packages DeePMD-kit [29] and DP-GEN [30], respectively, and have attracted researchers from various disciplines.

For other related work, we refer to [31, 32, 33].

Figure 5: Comparison of the DFT energies and the Deep Potential-predicted energies (standardized DeepPot-SE energy versus standardized DFT energy) on the testing snapshots. The energies are standardized for a clear comparison. (a) Small molecules. (b) MoS2 and Pt. (c) CoCrFeMnNi high-entropy alloy (HEA). (d) TiO2. (e) Pyridine (C5H5N). (f) Other systems: Al2O3, Cu, Ge, and Si. From [20].

Figure 6: Correlation functions of liquid water from DeePMD and PI-AIMD. Left: radial distribution functions g(r) for the O-O, O-H, and H-H pairs. Right: the O-O-O angular distribution function P(ψ). From [17].

5 Moment closure for kinetic models of gas dynamics

The dynamics of gases is described very well by the Boltzmann equation for the single-particle phase space density function f = f(x, v, t) : R^3 × R^3 × R → R_+:

∂_t f + v · ∇_x f = (1/ε) Q(f).    (6)

Here ε denotes the Knudsen number and Q is the collision operator.
As a first example of the applications of these ideas, we show in Figure 5 the accuracy of the Deep Potential for a wide variety of systems, from small molecules, to large molecules, to oxides and high-entropy alloys. One can see that in all the cases studied, Deep Potential achieves an accuracy comparable to that of the underlying DFT. As a second example, we show the results of using Deep Potential to study the structural information of liquid water (see Figure 6). One can see that the results for the radial and angular distribution functions are comparable to the results from ab initio MD. As a third example, combined with a state-of-the-art high performance computing platform (Summit), molecular dynamics simulations with ab initio accuracy have been pushed to systems of up to 100 million atoms, making it possible to study more complex phenomena that require truly large-scale simulations [27, 28]. An application to nanocrystalline copper was presented in Ref. [28]. Deep Potential and DP-GEN have been implemented in the open-source packages DeePMD-kit [29] and DP-GEN [30], respectively, and have attracted researchers from various disciplines.
For other related work, we refer to [31, 32, 33].
Figure 5: Comparison of the DFT energies and the Deep Potential-predicted energies on the testing snapshots. The energies are standardized for a clear comparison. (a) Small molecules. (b) MoS2 and Pt. (c) CoCrFeMnNi high-entropy alloy (HEA). (d) TiO2. (e) Pyridine (C5H5N). (f) Other systems: Al2O3, Cu, Ge, and Si. From [20].
Figure 6: Correlation functions of liquid water from DeePMD and PI-AIMD. Left: radial distribution functions. Right: the O-O-O angular distribution function. From [17].
5 Moment closure for kinetic models of gas dynamics
The dynamics of gases is described very well by the Boltzmann equation for the single-particle phase space density function f = f(x, v, t) : R^3 x R^3 x R -> R_+:

    \partial_t f + v \cdot \nabla_x f = \frac{1}{\varepsilon} Q(f).    (6)

Here \varepsilon denotes the Knudsen number and Q is the collision operator. When \varepsilon is small, the Boltzmann equation can be accurately reduced to the Euler equations for the macroscopic variables \rho, u and T, which represent density, bulk velocity and temperature, respectively:

    \rho = \int_{R^3} f \, dv, \quad u = \frac{1}{\rho} \int_{R^3} f v \, dv, \quad T = \frac{1}{3\rho} \int_{R^3} f |v - u|^2 \, dv,    (7)

    \partial_t U + \nabla_x \cdot F(U) = 0,    (8)

with U = (\rho, \rho u, E)^T, F(U) = (\rho u, \rho u \otimes u + pI, (E + p)u)^T, p = \rho T, E = \frac{1}{2}\rho|u|^2 + \frac{3}{2}\rho T. Euler's equations can be obtained by projecting Boltzmann's equation onto the first few moments defined in (7), and making use of the local Maxwellian approximation to close the system:

    f_M(v) = \frac{\rho}{(2\pi T)^{3/2}} \exp\left( -\frac{|v - u|^2}{2T} \right).    (9)

The accuracy of the Euler equations deteriorates when the Knudsen number increases, since the local Maxwellian is no longer a good approximation to the solution of the Boltzmann equation. To improve upon Euler's equations, Grad proposed the idea of using more moments to arrive at extended Euler-like equations [2]. As an example, Grad proposed the well-known 13-moment equations. Unfortunately, Grad's 13-moment equations suffer from all kinds of problems, among which is the loss of hyperbolicity in certain regions of the state space. Since then, many different proposals have been made in order to obtain reliable moment closure systems. The usual procedure is as follows.
1. Start with a choice of a finite-dimensional linear subspace of functions of v (usually some set of polynomials, e.g., Hermite polynomials).
2. Expand f(x, v, t) using these functions as bases and take the coefficients as moments (including the macroscopic variables \rho, u, T, etc.).
3. Close the system using some simplifying assumptions, e.g., truncating moments of higher orders.
For instance, in Grad's 13-moment system, the moments are constructed using the basis {1, v, (v - u) \otimes (v - u), |v - u|^2 (v - u)}. It is fair to say that at the moment, this effort has not really succeeded.
There are two important issues that one has to address in order to obtain reliable moment closure systems. The first is: What is the best set of moments that one should use? The second is: How does one close the projected system? The second problem is the well-known closure problem.
One possible machine learning-based approach is as follows [34].
1. Learn the set of generalized moments using some machine learning model such as the auto-encoder. The problem can be formulated as follows: find an encoder \Psi that maps f (viewed as a function of v) to generalized moments W in R^M, and a decoder \Phi that recovers the original f from (U, W):

    W = \Psi(f) = \int w f \, dv, \qquad \Phi(U, W)(v) = h(v; U, W).

The goal is to find optimal w and h, parametrized by neural networks, by minimizing

    \mathbb{E}_{f \sim D} \left[ \| f - \Phi(\Psi(f)) \|^2 + \lambda_\eta \big( \eta(f) - h_\eta(U, W) \big)^2 \right],

where \eta(f) denotes the entropy. We call (U, W) the set of generalized moments.
2. Learn the reduced model for the set of generalized moments. Projecting the kinetic equation onto the set of generalized moments, one obtains equations of the form:

    \partial_t U + \nabla_x \cdot F(U, W, \varepsilon) = 0,
    \partial_t W + \nabla_x \cdot G(U, W, \varepsilon) = \frac{1}{\varepsilon} R(U, W, \varepsilon).    (10)

This is the general conservative form of the moment system. The term 1/\varepsilon is inherited directly from (6). Our task is to learn F, G, R from the original kinetic equation.
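To illustrate the structure of the encoder \Psi and decoder \Phi introduced in step 1, the following sketch works on a discretized velocity grid. In [34] both the encoder weights w and the decoder h are neural networks learned by minimizing the objective above (including the entropy term); here, purely for illustration, we drop the entropy term and the dependence on U, restrict to linear maps, and compute the optimal linear encoder/decoder pair in closed form with an SVD. All sizes and data are arbitrary.

    import numpy as np
    rng = np.random.default_rng(0)

    v = np.linspace(-6, 6, 64); dv = v[1] - v[0]      # discretized velocity grid
    def maxwellian(u, T):                             # toy training distributions f(v), 1-D analogue of Eq. (9)
        return np.exp(-(v - u) ** 2 / (2 * T)) / np.sqrt(2 * np.pi * T)
    F = np.stack([maxwellian(rng.uniform(-1, 1), rng.uniform(0.5, 2.0)) for _ in range(200)])

    M = 6                                             # number of generalized moments
    _, _, Vt = np.linalg.svd(F, full_matrices=False)
    w = Vt[:M] / dv                                   # encoder weights w(v) on the grid
    h = Vt[:M].T                                      # decoder basis h(v)

    def encode(f):  return (w * f).sum(axis=1) * dv   # W = Psi(f) = sum_v w(v) f(v) dv
    def decode(W):  return h @ W                      # f(v) ~ Phi(W)(v) = h(v) W

    f = maxwellian(0.3, 1.2)
    W = encode(f)
    print('generalized moments:', W)
    print('relative reconstruction error:',
          np.linalg.norm(decode(W) - f) / np.linalg.norm(f))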
To get reliable reduced models, we still have to address the issues of getting good data and respecting physical constraints. To address the first problem, one can use the ELT algorithm with the following implementation [34]:
• exploration: random initial conditions made up of random waves and discontinuities;
• labeling: solving the kinetic equation; this can be done locally in the momentum space since the kinetic models have a finite speed of propagation;
• training: as discussed above.
To address the second problem, one notices that there is an additional symmetry: the Galilean invariance. This is a dynamic constraint respected by the kinetic equation. Specifically, for every u_0 in R^3, define

    f'(x, v, t) = f(x - t u_0, v - u_0, t).

If f is a solution of the Boltzmann equation, then so is f'. To enforce similar constraints for the reduced model, we define the Galilean-invariant moments by:

    W_{Gal} = \Psi(f) = \int f(v) \, w\!\left( \frac{v - u}{\sqrt{T}} \right) dv.    (11)

Modeling the dynamics of W_{Gal} becomes more subtle due to the spatial dependence of u and T. For simplicity we will work with a discretized form of the dynamic model in the closure step. Suppose we want to model the dynamics of W_{Gal,j} at the spatial grid point indexed by j. Integrating the Boltzmann equation against the generalized basis at this grid point gives

    \partial_t \int_{R^D} f(v) \, w\!\left( \frac{v - u_j}{\sqrt{T_j}} \right) dv + \nabla_x \cdot \int_{R^D} f(v) \, w\!\left( \frac{v - u_j}{\sqrt{T_j}} \right) v^T dv = \int_{R^D} \frac{1}{\varepsilon} Q(f) \, w\!\left( \frac{v - u_j}{\sqrt{T_j}} \right) dv.    (12)

The closed equations now take the form

    \partial_t W_{Gal} + \nabla_x \cdot G_{Gal}(U, W_{Gal}; U_j) = \frac{1}{\varepsilon} R_{Gal}(U, W_{Gal}).    (13)

One can perform machine learning using this form.
The numerical results given in [34] show the promise of this approach. Here we only give an example of using this approach to study the shock wave structure in high-speed rarefied gas flows, when the gas experiences a fast transition in a tube between two equilibrium states. During the transition the flow deviates substantially from thermodynamic equilibrium, and the classical Navier-Stokes-Fourier (NSF) equations fail when the Mach number is bigger than 2. Figure 7 presents a comparison of the shock wave profiles obtained from the Boltzmann equation with the BGK (Bhatnagar-Gross-Krook) collision model [35] at Mach number 5.5, the NSF equations, and the machine learning-based closure model with 3 additional Hermite moments (HermMLC). One can see that the machine learning-based model achieves good agreement with the original Boltzmann equation.
Figure 7: Profiles of mass density, bulk velocity, temperature, normal stress, and heat flux when the Mach number equals 5.5, obtained from the Boltzmann equation, the Navier-Stokes-Fourier equations, and HermMLC.
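As a simple sanity check on the Galilean-invariant construction (11), the following one-dimensional sketch evaluates W_Gal for a non-equilibrium velocity distribution and for the same distribution seen from a frame moving at constant velocity; the two sets of moments agree up to quadrature error. The basis functions are placeholders standing in for the learned w, and nothing here reproduces the actual model of [34].

    import numpy as np

    v = np.linspace(-10, 10, 400); dv = v[1] - v[0]      # 1-D velocity grid

    def gauss(c, s): return np.exp(-(v - c) ** 2 / (2 * s ** 2)) / np.sqrt(2 * np.pi * s ** 2)

    def macroscopic(f):                                   # 1-D analogues of Eq. (7)
        rho = f.sum() * dv
        u = (f * v).sum() * dv / rho
        T = (f * (v - u) ** 2).sum() * dv / rho
        return rho, u, T

    def galilean_moments(f, w_basis):
        # W_Gal = integral of f(v) w((v - u)/sqrt(T)) dv, cf. Eq. (11)
        rho, u, T = macroscopic(f)
        xi = (v - u) / np.sqrt(T)
        return np.array([(f * w(xi)).sum() * dv for w in w_basis])

    # placeholder basis functions standing in for the learned w
    w_basis = [lambda x: x ** 3 - 3 * x, lambda x: x ** 4 - 6 * x ** 2 + 3]

    f = 0.6 * gauss(-1.0, 0.5) + 0.4 * gauss(1.5, 0.8)    # a non-equilibrium distribution
    f_boost = 0.6 * gauss(1.0, 0.5) + 0.4 * gauss(3.5, 0.8)   # the same f boosted by u_0 = 2
    print(galilean_moments(f, w_basis))
    print(galilean_moments(f_boost, w_basis))             # identical up to quadrature error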
6 Discussions and concluding remarks
We have focused on developing sequential multi-scale models using current machine learning. These models are very much like the physical models we are used to, except that some functions in the models exist in the form of machine learning models such as neural network functions. In this spirit, these models are no different from Euler's equations for complex gases where the equation of state is stored as tables or subroutines. As we discussed above, following the right protocols will ensure that these machine learning-based models are just as reliable as the original micro-scale models.
In principle the same procedure and the same set of protocols are also applicable in a purely data-driven context without a micro-scale model: one just has to replace the labeler by experimental results. In practice, this is still a relatively unexplored territory.
Another way to integrate machine learning with physics-based modeling is from the viewpoint of learning dynamical systems. After all, the physical models we are looking for are all examples of dynamical systems. One representative machine learning model for learning dynamical systems is the recurrent neural network [36]. In this connection, an interesting observation made in [37] is that there is a natural connection between the Mori-Zwanzig formalism and the recurrent neural network model. Recurrent neural networks are machine learning models for time series. They model the output time series using a local model by introducing hidden variables. In the absence of these hidden variables, the relationship between the input and output time series would be nonlocal, with memory effects. In the presence of these hidden variables (which are learned as part of the training process), the relationship becomes local. Therefore, recurrent neural network models can be viewed as a way of unwrapping the memory effects by introducing hidden variables. One can also think of them as a way of performing Mori-Zwanzig by compressing the space of unwanted degrees of freedom. This connection and this viewpoint of integrating machine learning with physics-based modeling have yet to be explored.
Going back to the seven problems listed in Section 1.4, we have already addressed problems 2 and 5. For problem 1, some progress has been reported in [38] and [39, 40] for molecules. Models and algorithms similar to DeePMD and Deep Potential have been developed for coarse-grained molecular dynamics [41]. For problem 4, some initial progress has been made in [42]. For problem 7, efforts that use neural network models can be traced back to [43], but there is still a huge gap compared with the standards promoted in this paper. Problem 6 is wide open. Overall, one should expect substantial progress to be made on all these problems in the next few years.
This brings us to a new question: What will be the next frontier, after machine learning is successfully integrated into physics-based models? With the new paradigm firmly in place, the key bottleneck will quickly become the micro-scale models that act as the generators of gold-standard data. In the area of molecular modeling, this will be the solution of the original many-body Schrödinger equation. In the area of turbulence modeling, this will be the Navier-Stokes equations. It is possible that machine learning can also play an important role in developing the next-generation algorithms for these problems [11, 44].
References
[1] Paul Adrien Maurice Dirac. Quantum mechanics of many-electron systems. Proceedings of the Royal Society A, 123(792):714-733, 1929.
[2] Harold Grad. On the kinetic theory of rarefied gases. Communications on Pure and Applied Mathematics, 2(4):331-407, 1949.
[3] Sybren Ruurds De Groot and Peter Mazur. Non-equilibrium thermodynamics. Courier Corporation, 2013.
[4] Pierre-Gilles De Gennes and Jacques Prost. The physics of liquid crystals, volume 83. Oxford University Press, 1993.
[5] Weinan E. Principles of Multiscale Modeling. Cambridge University Press, 2011.
[6] Hendrik Tennekes and John Leask Lumley. A First Course in Turbulence. MIT Press, 1972.
[7] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[8] Linfeng Zhang, Han Wang, and Weinan E. Reinforced dynamics for enhanced sampling in large atomic and molecular systems. The Journal of Chemical Physics, 148(12):124113, 2018.
[9] William Lauchlin McMillan. Ground state of liquid He4. Physical Review, 138(2A):A442, 1965.
[10] David Ceperley, Geoffrey V Chester, and Malvin H Kalos. Monte Carlo simulation of a many-fermion study. Physical Review B, 16(7):3081-3099, 1977.
[11] Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. Science, 355(6325):602-606, 2017.
[12] Weinan E and Bing Yu. The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 6(1):1-12, 2018.
[13] Walter Kohn and Lu Jeu Sham. Self-consistent equations including exchange and correlation effects. Physical Review, 140(4A):A1133, 1965.
[14] Roberto Car and Michele Parrinello. Unified approach for molecular dynamics and density-functional theory. Physical Review Letters, 55(22):2471, 1985.
[15] Jörg Behler and Michele Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Physical Review Letters, 98(14):146401, 2007.
[16] Albert P Bartók, Mike C Payne, Risi Kondor, and Gábor Csányi. Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons. Physical Review Letters, 104(13):136403, 2010.
[17] Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, and Weinan E. Deep potential molecular dynamics: A scalable model with the accuracy of quantum mechanics. Physical Review Letters, 120:143001, 2018.
[18] Frank H Stillinger and Thomas A Weber. Computer simulation of local order in condensed phases of silicon. Physical Review B, 31(8):5262, 1985.
[19] Jiequn Han, Linfeng Zhang, Roberto Car, and Weinan E. Deep potential: a general representation of a many-body potential energy surface. Communications in Computational Physics, 23(3):629-639, 2018.
[20] Linfeng Zhang, Jiequn Han, Han Wang, Wissam A Saidi, Roberto Car, and Weinan E. End-to-end symmetry preserving inter-atomic potential energy model for finite and extended systems. In Advances in Neural Information Processing Systems (NIPS), 2018.
[21] Linfeng Zhang, Mohan Chen, Xifan Wu, Han Wang, Weinan E, and Roberto Car. Deep neural network for the dielectric response of insulators. arXiv preprint arXiv:1906.11434, 2019.
[22] Grace Sommers, Marcos Calegari, Linfeng Zhang, Han Wang, and Roberto Car. Raman spectrum and polarizability of liquid water from deep neural networks. Physical Chemistry Chemical Physics, 2020.
[23] Albert P Bartók, Risi Kondor, and Gábor Csányi. On representing chemical environments. Physical Review B, 87(18):184115, 2013.
[24] Hermann Weyl. The Classical Groups: Their Invariants and Representations. Princeton University Press, 2016.
[25] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in Neural Information Processing Systems (NIPS), 2017.
[26] Linfeng Zhang, De-Ye Lin, Han Wang, Roberto Car, and Weinan E. Active learning of uniformly accurate interatomic potentials for materials simulation. Physical Review Materials, 3(2):023804, 2019.
[27] Denghui Lu, Han Wang, Mohan Chen, Jiduan Liu, Lin Lin, Roberto Car, Weinan E, Weile Jia, and Linfeng Zhang. 86 PFLOPS deep potential molecular dynamics simulation of 100 million atoms with ab initio accuracy. arXiv preprint arXiv:2004.11658, 2020.
[28] Weile Jia, Han Wang, Mohan Chen, Denghui Lu, Jiduan Liu, Lin Lin, Roberto Car, Weinan E, and Linfeng Zhang. Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning. arXiv preprint arXiv:2005.00223, 2020.
[29] Han Wang, Linfeng Zhang, Jiequn Han, and Weinan E. DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics. Computer Physics Communications, 228:178-184, 2018.
[30] Yuzhi Zhang, Haidi Wang, Weijie Chen, Jinzhe Zeng, Linfeng Zhang, Han Wang, and Weinan E. DP-GEN: A concurrent learning platform for the generation of reliable deep learning based potential energy models. Computer Physics Communications, page 107206, 2020.
[31] Evgeny V Podryabinkin and Alexander V Shapeev. Active learning of linearly parametrized interatomic potentials. Computational Materials Science, 140:171-180, 2017.
[32] Justin S Smith, Ben Nebgen, Nicholas Lubbers, Olexandr Isayev, and Adrian E Roitberg. Less is more: Sampling chemical space with active learning. The Journal of Chemical Physics, 148(24):241733, 2018.
[33] Noam Bernstein, Gábor Csányi, and Volker L Deringer. De novo exploration and self-guided learning of potential-energy surfaces. npj Computational Materials, 5(1):1-9, 2019.
[34] Jiequn Han, Chao Ma, Zheng Ma, and Weinan E. Uniformly accurate machine learning-based hydrodynamic models for kinetic equations. Proceedings of the National Academy of Sciences, 116(44):21983-21991, 2019.
[35] Prabhu Lal Bhatnagar, Eugene P Gross, and Max Krook. A model for collision processes in gases. I. Small amplitude processes in charged and neutral one-component systems. Physical Review, 94(3):511, 1954.
[36] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, 1986.
[37] Chao Ma, Jianchun Wang, and Weinan E. Model reduction with memory and the machine learning of dynamical systems. Communications in Computational Physics, 25(4):947-962, 2018.
[38] Yixiao Chen, Linfeng Zhang, Han Wang, and Weinan E. Ground state energy functional with Hartree-Fock efficiency and chemical accuracy. arXiv preprint arXiv:2005.00169, 2020.
[39] Lixue Cheng, Matthew Welborn, Anders S Christensen, and Thomas F Miller III. A universal density matrix functional from molecular orbital-based machine learning: Transferability across organic molecules. The Journal of Chemical Physics, 150(13):131103, 2019.
[40] Ryo Nagai, Ryosuke Akashi, and Osamu Sugino. Completing density functional theory by machine learning hidden messages from molecules. npj Computational Materials, 6(1):1-8, 2020.
[41] Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, and Weinan E. DeePCG: constructing coarse-grained models via deep neural networks. The Journal of Chemical Physics, 149(3):034101, 2018.
[42] Huan Lei, Lei Wu, and Weinan E. Machine learning based non-Newtonian fluid model with molecular fidelity. arXiv preprint arXiv:2003.03672, 2020.
[43] F Sarghini, G De Felice, and S Santini. Neural networks based subgrid scale modeling in large eddy simulations. Computers & Fluids, 32(1):97-108, 2003.
[44] Jiequn Han, Linfeng Zhang, and Weinan E. Solving many-electron Schrödinger equation using deep neural networks. Journal of Computational Physics, 399:108929, 2019.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "T4-BbmxGxNf",
"year": null,
"venue": "CoRR 2019",
"pdf_link": "http://arxiv.org/pdf/1904.05263v3",
"forum_link": "https://openreview.net/forum?id=T4-BbmxGxNf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections",
"authors": [
"Weinan E",
"Chao Ma",
"Qingcan Wang",
"Lei Wu"
],
"abstract": "The behavior of the gradient descent (GD) algorithm is analyzed for a deep neural network model with skip-connections. It is proved that in the over-parametrized regime, for a suitable initialization, with high probability GD can find a global minimum exponentially fast. Generalization error estimates along the GD path are also established. As a consequence, it is shown that when the target function is in the reproducing kernel Hilbert space (RKHS) with a kernel defined by the initialization, there exist generalizable early-stopping solutions along the GD path. In addition, it is also shown that the GD path is uniformly close to the functions given by the related random feature model. Consequently, in this \"implicit regularization\" setting, the deep neural network model deteriorates to a random feature model. Our results hold for neural networks of any width larger than the input dimension.",
"keywords": [],
"raw_extracted_content": "Analysis of the Gradient Descent Algorithm\nfor a Deep Neural Network Model with Skip-connections\nWeinan E\u00031,2,3, Chao Ma,y2, Qingcan Wang,z2, and Lei Wux2\n1Department of Mathematics, Princeton University\n2Program in Applied and Computational Mathematics, Princeton University\n3Beijing Institute of Big Data Research\nAbstract\nThe behavior of the gradient descent (GD) algorithm is analyzed for a deep neural\nnetwork model with skip-connections. It is proved that in the over-parametrized regime, for\na suitable initialization, with high probability GD can \fnd a global minimum exponentially\nfast. Generalization error estimates along the GD path are also established. As a\nconsequence, it is shown that when the target function is in the reproducing kernel Hilbert\nspace (RKHS) with a kernel de\fned by the initialization, there exist generalizable early-\nstopping solutions along the GD path. In addition, it is also shown that the GD path is\nuniformly close to the functions given by the related random feature model. Consequently,\nin this \\implicit regularization\" setting, the deep neural network model deteriorates to a\nrandom feature model. Our results hold for neural networks of any width larger than the\ninput dimension.\n1 Introduction\nThis paper is concerned with the following questions on the gradient descent (GD) algorithm\nfor deep neural network models:\n1. Under what condition, can the algorithm \fnd a global minimum of the empirical risk?\n2.Under what condition, can the algorithm \fnd models that generalize, without using any\nexplicit regularization?\nThese questions are addressed for a speci\fc deep neural network model with skip connections.\nFor the \frst question, it is shown that with proper initialization, the gradient descent algorithm\nconverges to a global minimum exponentially fast, as long as the network is deep enough. For\nthe second question, it is shown that if in addition the target function belongs to a certain\nreproducing kernel Hilbert space (RKHS) with kernel de\fned by the initialization, then the\ngradient descent algorithm does \fnd models that can generalize. This result is obtained as a\nconsequence of the estimates on the the generalization error along the GD path. However, it\n\[email protected]\[email protected]\[email protected]\[email protected]\n1arXiv:1904.05263v3 [cs.LG] 14 Apr 2019\nis also shown that the GD path is uniformly close to functions generated by the GD path\nfor the related random feature model. Therefore in this particular setting, as far as \\implicit\nregularization\" is concerned, this deep neural network model is no better than the random\nfeature model.\nIn recent years there has been a great deal of interest on the two questions raised\nabove[ 10,13,12,4,2,1,32,7,17,19,29,26,9,28,5,3,11,31,21]. An important recent\nadvance is the realization that over-parametrization can simplify the analysis of GD dynamics\nin two ways: The \frst is that in the over-parametrized regime, the parameters do not have to\nchange much in order to make an O(1) change to the function that they represent [ 10,19]. This\ngives rise to the possibility that only a local analysis in the neighborhood of the initialization is\nnecessary in order to analyze the GD algorithm. 
The second is that over-parametrization can improve the non-degeneracy of the associated Gram matrix [30], thereby ensuring exponential convergence of the GD algorithm [13].
Using these ideas, [2, 32, 12, 13] proved that (stochastic) gradient descent converges to a global minimum of the empirical risk at an exponential rate. [17] showed that in the infinite-width limit, the GD dynamics for deep fully connected neural networks with Xavier initialization can be characterized by a fixed neural tangent kernel. [10, 19] considered the online learning setting and proved that stochastic gradient descent can achieve a population error of \varepsilon using poly(1/\varepsilon) samples. [7] proved that GD can find generalizable solutions when the target function comes from certain RKHS. These results all share one thing in common: they all require that the network width m satisfies m >= poly(n, L), where L and n denote the network depth and the training set size, respectively. In fact, [10, 7] required that m >= poly(n, 2^L). In other words, these results are concerned with very wide networks. In contrast, in this paper, we will focus on deep networks with fixed width (assumed to be larger than d, where d is the input dimension).
1.1 The motivation
Our work is motivated strongly by the results of the companion paper [16], in which similar questions were addressed for the two-layer neural network model. It was proved in [16] that in the so-called \"implicit regularization\" setting, the GD dynamics for the two-layer neural network model is closely approximated by the GD dynamics for a random feature model with the features defined by the initialization. For over-parametrized models, this statement is valid uniformly for all time. In the general case, this statement is valid at least for finite time intervals during which early stopping leads to generalizable models for target functions in the relevant reproducing kernel Hilbert space (RKHS). The numerical results reported in [16] nicely corroborated these theoretical findings.
To understand what happens for deep neural network models, we first turn to the ResNet model:

    h^{(l+1)} = h^{(l)} + U^{(l)} \sigma(V^{(l)} h^{(l)}), \quad l = 0, 1, \dots, L-1,    (1.1)
    h^{(0)} = (x^T, 1)^T \in R^{d+1}, \qquad f_L(x; \theta) = w^T h^{(L)},

where U^{(l)} \in R^{(d+1) \times m}, V^{(l)} \in R^{m \times (d+1)}, w = (0, \dots, 0, 1)^T, and \theta = \{U^{(l)}, V^{(l)}\}_{l=1}^{L} denotes all the parameters to be trained.
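For concreteness, here is a minimal sketch of the forward map (1.1); it is only an illustration of the architecture, not the training code analyzed in this paper. The choice \sigma = tanh, the dimensions, and the initialization (U^{(l)} = 0 and entries of V^{(l)} drawn from N(0, 1/m), of the type used in the numerical experiments below) are illustrative.

    import numpy as np
    rng = np.random.default_rng(0)

    def resnet_forward(x, U, V, sigma=np.tanh):
        # h^(0) = (x^T, 1)^T; h^(l+1) = h^(l) + U^(l) sigma(V^(l) h^(l)); output w^T h^(L) with w = (0,...,0,1)
        h = np.append(x, 1.0)
        for Ul, Vl in zip(U, V):
            h = h + Ul @ sigma(Vl @ h)
        return h[-1]

    d, m, L = 4, 8, 10
    V = [rng.normal(size=(m, d + 1)) / np.sqrt(m) for _ in range(L)]  # entries ~ N(0, 1/m)
    U = [np.zeros((d + 1, m)) for _ in range(L)]                      # U^(l) = 0 at initialization
    x = rng.normal(size=d); x /= np.linalg.norm(x)
    print(resnet_forward(x, U, V))   # equals 1.0 here: with U^(l) = 0, h never moves from h^(0)

With U^{(l)} = 0 the skip-connections simply carry h^{(0)} through unchanged, so the output at this initialization does not depend on the depth.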
A main observation exploited in [16] is the time scale separation between the GD dynamics for the coefficients inside and outside the activation function, i.e., the \{U^{(l)}\}'s and the \{V^{(l)}\}'s. In a typical practical setting, one would initialize the \{V^{(l)}\}'s to be O(1) and the \{U^{(l)}\}'s to be o(1). This results in a slower dynamics for the \{V^{(l)}\}'s, compared with the dynamics for the \{U^{(l)}\}'s, due to the presence of an extra factor of U^{(l)} in the dynamical equation for V^{(l)}. In the case of two-layer networks, this separation of time scales resulted in the fact that the parameters inside the activation function were effectively frozen during the time period of interest. Therefore the GD path stays close to the GD path for the random feature model with the features given by the initialization.
To see whether similar things happen for the ResNet model, we consider the following \"compositional random feature model\" in which (1.1) is replaced by

    h^{(l+1)} = h^{(l)} + U^{(l)} \sigma(V^{(l)}(0) h^{(l)}), \quad l = 0, 1, \dots, L-1.    (1.2)

Note that in (1.2) the V^{(l)}'s are fixed at their initial values; the only parameters to be updated by the GD dynamics are the U^{(l)}'s.
Numerical experiments. Here we provide numerical evidence for the above intuition by considering a very simple target function: f^*(x) = max(x_1, 0), where x = (x_1, \dots, x_d)^T. We initialize (1.1) and (1.2) by U_{i,k} = 0 and V_{k,j} ~ N(0, 1/m) for all k in [m] and i, j in [d+1]. Since we are interested in the effect of depth, we choose m = 1. Please refer to Appendix A for more details.
Figure 1 displays the comparison of the GD dynamics for the ResNet and the related \"compositional random feature model\". We see a clear indication that (1) the GD algorithm converges to a global minimum of the empirical risk for the deep residual network, and (2) for deep networks, the GD dynamics for the two models stay close to each other.
Figure 1: The time history of the GD algorithm for the residual network (nn) and the compositional random feature model (rf). Left: depth = 2; Middle: depth = 10; Right: depth = 100. Here we use the same learning rate for the two models.
Figure 2: Comparison of the test accuracy of the regularized and the un-regularized residual networks with different depths. For the regularized model, we choose \lambda = 0.005.
Figure 2 shows the testing error for the optimal (convergent) solution shown in Figure 1 as the depth of the ResNet changes. We see that the testing error seems to be settling down on a finite value as the network depth is increased. As a comparison, we also show the testing error for the optimizers of the regularized model proposed in [14] (see (A.2)). One can see that for this particular target function, the testing error for the minimizers of the regularized model is consistently very small as one varies the depth of the network.
These results are similar to the ones shown in [16] for two-layer neural networks. They suggest that for ResNets, the GD algorithm is able to find a global minimum of the empirical risk, but in terms of the generalization property, the resulting model may not be better than the compositional random feature model.
On the theoretical side, we have not yet succeeded in dealing directly with the ResNet model. Therefore in this paper we deal instead with a modified model which shares a lot of common features with the ResNet model but simplifies the task of analyzing error propagation between the layers. We believe that the insight gained in this analysis is helpful for understanding general deep network models.
1.2 Our contribution
In this paper, we analyze the gradient descent algorithm for a particular class of deep neural networks with skip-connections.
We consider the least square loss and assume that the\nnonlinear activation function is Lipschitz continuous (e.g. Tanh, ReLU ).\n\u000fWe prove that if the depth Lsatis\fesL\u0015poly(n), then gradient descent converges to a\nglobal minimum with zero training error at an exponential rate. This result is proved\nby only assuming that the network width is larger than d. As noted above, the previous\noptimization results [12, 2, 32] require that the width msatis\fesm\u0015poly(n;L).\n\u000fWe provide a general estimate for the generalization error along the GD path, assuming\nthat the target function is in a RKHS with the kernel de\fned by the initialization. As a\nconsequence, we show that population risk is bounded from above by O(1=pn) if certain\nearly stopping rules are used. In contrast, the generalization result in [ 7] requires that\nm\u0015poly(n;2L).\n\u000fWe prove that the GD path is uniformly close to the functions given by the related\nrandom feature model (see Theorem 6.6). Consequently the generalization property of\nthe resulting model is no better than that of the random feature model. This allows us\nto conclude that in this \\implicit regularization\" setting, the deep neural network model\ndeteriorates to a random feature model. In contrast, it has been established in [ 15,14]\nthat for suitable explicitly regularized models, optimal generalization error estimates\n(e.g. rates comparable to the Monte Carlo rate) can be proved for a much larger class of\ntarget functions.\nThese results are very much analogous to the ones proved in [ 16] for two-layer neural networks.\nOne main technical ingredient in this work is to use a combination of the identity mapping\nand skip-connections to stabilize the forward and backward propagation in the neural network.\nThis enable us to consider deep neural networks with \fxed width. The second main ingredient\nis the exploitation of a possible time scale separation between the GD dynamics for the\nparameters in- and outside the activation function: The parameters inside the activation\nfunction are e\u000bectively frozen during the GD dynamics compared with the parameters outside\nthe activation function.\n4\n2 Preliminaries\nThroughout this paper, we let [ n] =f1;2;:::;ng, and usekkandkkFto denote the `2and\nFrobenius norms, respectively. For a matrix A, we useAi;:;A:;j;Ai;jto denote its i-th row,\nj-th column and ( i;j)-th entry, respectively. We let Sd\u00001=fx2Rd:kxk= 1 and use \u00190\nto indicate the uniform distribution over Sd\u00001. We useX.Yas a shorthand notation for\nX\u0014CY, whereCis some absolute constant. X&Yis similarly de\fned.\n2.1 Problem setup\nWe consider the regression problem with training data set given by f(xi;yi)gn\ni=1, wherefxign\ni=1\nare i.i.d. samples drawn from a \fxed (but unknown) distribution \u001a. For simplicity, we assume\nkxk2= 1 andjyj\u00141. We usef(\u0001; \u0002) :Rd!Rto denote the model with parameter \u0002. We\nare interested in minimizing the empirical risk, de\fned by\n^Rn(\u0002) =1\n2nnX\ni=1(f(xi; \u0002\u0000yi)2: (2.1)\nWe lete(x;y) =f(x; \u0002)\u0000yand^e= (e(x1;y1);:::;e (xn;yn))T2Rn, then ^Rn(\u0002) =1\n2nk^ek2.\nFor the generalization problem, we need to specify how the fyjgn\nj=1's are obtained. Let\nf\u0003:Rd!Rbe our target function. Then we have yj=f\u0003(xj). We will assume that there\nare no measurement noises. 
This makes the argument more transparent but does not change\nthings qualitatively: Essentially the same argument applies to the case with measurement\nnoise.\nOur goal is to estimate the population risk, de\fned by\nR(\u0002) =1\n2E\u001a[(f(x; \u0002)\u0000f\u0003(x))2]\nDeep neural networks with skip-connections\nWe will consider a special deep neural network model with multiple skip-connections, de\fned\nby\nh(1)= (x;0)T2Rd+1\nh(l+1)=0\n@h(1)\n1:d\nh(l)\nd+11\nA+U(l)\u001b(V(l)h(l)); l= 1;\u0001\u0001\u0001;L\u00001\nf(x; \u0002) =wTh(L):(2.2)\nHereU(l)2R(d+1)\u0002m;V(l)2Rm\u0002(d+1);w2Rd+1. Note that Landmaxfm;d+ 1gare the\ndepth and width of the network respectively. \u001b(\u0001) :R!Ris a scalar nonlinear activation\nfunction, which is assumed to be 1-Lipschitz continuous and \u001b(0) = 0. For any vector\nv2RNwe de\fne\u001b(v) = (\u001b(v1);:::;\u001b (vN))T. For simplicity, we \fx wto be (0;:::; 0;1)T.\nThus the parameters that need to be estimated are: \u0002 = fU(l);V(l)gL\nl=1. We also de\fne\ng(l)=\u001b(V(l)h(l))2Rm, the output of the l-th nonlinear hidden layer.\nThis network model has the following feature: The \frst dentries ofh(l)are directly\nconnected to the input layer by a long-distance skip-connection, only the last entry is\n5\nconnected to the previous layer. As will be seen later, the long-distance skip-connections help\nto stabilize the deep network. We further let:\nU(l)= \nC(l)\n(a(l))T!\n; V(l)=\u0010\nB(l)r(l)\u0011\n;h(l)= \nz(l)\ny(l)!\n;\nwhereC(l)2Rd\u0002m;a(l)2Rm,B(l)2Rm\u0002d;r(l)2Rmandz(l)2Rd;y(l)2R. With these\nnotations, we can re-write the model as\nz(1)=x; y(1)= 0\nz(l+1)=z(1)+C(l)\u001b(B(l)z(l)+r(l)y(l))\ny(l+1)=y(l)+\u0000\na(l)\u0001T\u001b(B(l)z(l)+r(l)y(l)); l= 1;\u0001\u0001\u0001;L\u00001:\nf(x; \u0002) =y(L):(2.3)\nGradient descent\nWe will analyze the behavior of the gradient descent algorithm, de\fned by\n\u0002t+1= \u0002t\u0000\u0011r^Rn(\u0002t);\nwhere\u0011is the learning rate. For simplicity, in most cases, we will focus on its continuous\nversion:d\u0002t\ndt=\u0000r^Rn(\u0002t): (2.4)\nInitialization We will focus on a special class of initialization:\nC(l)\n0= 0;a(l)\n0= 0;pmrow(B(l)\n0)\u0018\u00190;r(l)\n0= 0; (2.5)\nwhere the third item means that each row of B(l)\n0is independently drawn from the uniform\ndistribution over fb2Rd:kbk= 1=pmg. Thus for this initialization, kB(l)\n0kF= 1.\nNote that all the results in this paper also hold for slightly larger initializations, e.g.\nmaxl2[L]fkC(l)\n0kF;ka(l)\n0k;kr(l)\n0kg=O(1=L) and rowi(B(l)\n0)\u0018N(0;I=m ). But for simplicity,\nwe will focus on the initialization (2.5).\n2.2 Assumption on the input data\nFor the given activation function \u001b, we can de\fne a symmetric positive de\fnite (SPD) function\nk0(x;x0)def=Ew\u0018\u00190[\u001b(wTx)\u001b(wTx0)]: (2.6)\nDenote byHk0the RKHS induced by k0(\u0001;\u0001). For the given training set, the (empirical) kernel\nmatrixK= (Ki;j)2Rn\u0002nis de\fned as\nKi;j=1\nnk0(xi;xj); i;j = 1;\u0001\u0001\u0001;n\nWe make the following assumption on the training data set.\n6\nAssumption 2.1. 
For the given training data fxign\ni=1, we assume that Kis positive de\fnite,\ni.e.\n\u0015ndef=\u0015min(K)>0: (2.7)\nRemark 1.Note that\u0015n\u0014mini2[n]Ki;i\u00141=n, and in general \u0015ndepends on the data set.\nIf we assume that fxign\ni=1are independently drawn from \u00190, it was shown in [ 6] that with\nhigh probability \u0015n\u0015\u0016n=2 where\u0016nis then-th eigenvalue of the Hilbert-Schmidt integral\noperatorTk0:L2(Sd\u00001;\u00190)7!L2(Sd\u00001;\u00190) de\fned by\nTk0f(x) =Z\nSd\u00001k0(x;x0)f(x0)d\u00190(x0):\nUsing this result, [30] provided lower bounds for \u0015nbased on some geometric discrepancy.\n3 The main results\nLet \u0002tbe the solution of the GD dynamics (2.4) at timetwith the initialization de\fned in\n(2.5). We \frst show that with high probability, the landscape of ^Rn(\u0002) near the initialization\nhas some coercive property which guarantees the exponential convergence towards a global\nminimum.\nLemma 3.1. Assume that there are constants c1;c2;c3such that 4c3J(\u00020)<c2\n1c2\n2and for\nk\u0002\u0000\u00020k\u0014c1\nc2J(\u0002)\u0014krJ(\u0002)k2\u0014c3J(\u0002): (3.1)\nThen for any t\u00150, we have\nJ(\u0002t)\u0014e\u0000c2tJ(\u00020):\nProof. Lett0def= infft:k\u0002t\u0000\u00020k\u0015c1g. Then for t2[0;t0], the condition (3.1) is satis\fed.\nThus we have\ndJ=dt =\u0000krJk2\u0014\u0000c2J:\nConsequently, we have,\nJ(\u0002t)\u0014e\u0000c2tJ(\u00020)\nIt remains to show that actually t0=1. Ift0<1, then we have\nk\u0002t0\u0000\u00020k\u0014Zt0\n0krJ(\u0002t)kdt\u0014p\nc3J(\u00020)Z1\n0e\u0000c2t=2dt\u00142p\nc3J(\u00020)\nc2(i)\n<c1;\nwhere (i) is due to the assumption that 4 c3J(\u00020)<c2\n1c2\n2. This contradicts the de\fnition of\nt0.\nOur main result for optimization is as follows.\nTheorem 3.2 (Optimization) .For any\u000e2(0;1), assume that L&maxf\u0015\u00002\nnln(n2=\u000e);\u0015\u00003\nng.\nWith probability at least 1\u0000\u000eover the initialization \u00020, we have that for any t\u00150,\n^Rn(\u0002t)\u0014e\u0000L\u0015nt\n2^Rn(\u00020): (3.2)\n7\nIn contrast to other recent results for multi-layer neural networks [ 13,2,32], we do not\nrequire the network width to increase with the size of the data set or the depth of the network.\nAs is the case for two-layer neural networks [ 16], the fact that the GD dynamics stays in a\nneighborhood of the initialization suggests that it resembles the situation of a random feature\nmodel. Consequently, the generalization error can be controlled if we assume that the target\nfunction is in the appropriate RKHS.\nAssumption 3.3. Assume that f\u00032Hk0, i.e.\nf\u0003(x) =E\u00190[a\u0003(!)\u001b(!Tx)]\nkf\u0003k2\nHk0=E\u00190[ja\u0003(!)j2]<+1:\nIn addition, we also assume that sup!2Sd\u00001ja\u0003(!)j<1.\nIn the following, we will denote \r(f\u0003) =maxf1;sup!2Sd\u00001ja\u0003(!)jg. Obviously,kf\u0003kHk0\u0014\n\r(f\u0003).\nTheorem 3.4 (Generalization) .Assume that the target function f\u0003satis\fes Assumption 3.3.\nFor any\u000e2(0;1), assume that L&maxf\u0015\u00002\nnln(n2=\u000e);\u0015\u00003\nn;\r2(f\u0003)g. 
Then with probability at\nleast 1\u0000\u000eover the random initialization, the following holds for any t2[0;1],\nR(\u0002t).1\nL3=2\u00152n+t\n\u00152n+\r2(f\u0003)\nLt+ \n1 +p\nL\r(f\u0003)tpn!2c3(\u000e)\r2(f\u0003)pn:\nwherec(\u000e) = 1 +p\n2 ln(1=\u000e).\nIn addition, by choosing the stopping time appropriately, we obtain the following result:\nCorollary 3.5 (Early-stopping) .Assume that L&maxf\u0015\u00002\nnln(n2=\u000e);\u0015\u00003\nn;\r2(f\u0003)g. Let\nT=pn=L, then we have\nR(\u0002T).c3(\u000e)\r2(f\u0003)pn:\n4 Landscape around the initialization\nDe\fnition 1. For anyc>0 , we de\fne a neighborhood around the initialization \u0002 0by\nIc(\u00020)def=f\u0002 : max\nl2[L]fka(l)\u0000a(l)\n0k;krl\u0000r(l)\n0k;kB(l)\u0000B(l)\n0kF;kC(l)\u0000C(l)\n0kFg\u0014c\nLg:(4.1)\nLet\"c=c=L. We will assume that c\u00151;\"c\u00141. In the following, we \frst prove that both\nthe forward and backward propagation is stable regardless of its depth. We then show that\nthe norm of the gradient can be bounded from above and below by the loss function, similar\nto the condition required in Lemma 3.1. This implies that there are no issues with vanishing\nor exploding gradients.\n8\n4.1 Forward stability\nAt \u0002 0, it easy to check that\ny(l)(x; \u00020) = 0;z(l)(x; \u00020) =x;g(l)(x; \u00020) =\u001b(B(l)\n0x):\nFor simplicity, when it is clear from the context, we will omit the dependence on xand \u0002 in\nthe notations.\nProposition 4.1. IfL\u00154c2, we have for any \u00022Ic(\u00020)andx2Sd\u00001that\njy(l)(x; \u0002)\u0000y(l)(x; \u00020)j\u00144c\nkz(l)(x; \u0002)\u0000z(l)(x; \u00020)k\u00144c\nL\nkg(l)(x; \u0002)\u0000g(l)(x; \u00020)k\u00146c2\nL:(4.2)\nRemark 2.We see that all the variables are close to their initial value except y(l), which is\nused to accumulate the prediction from each layer.\nProof. Leto(l)=kz(l)(x; \u0002)\u0000z(l)(x; \u00020)k. Then by (2.3), we have\no(l+1)\u0014\"c\u0010\n(1 +\"c)(1 +o(l)) +\"cjy(l)j\u0011\njy(l+1)j\u0014jy(l)j+\"c\u0010\n(1 +\"c)(1 +o(l)) +\"cjy(l)j\u0011\n;\nwitho(1)= 0;y(1)= 0. Adding the two inequalities gives us:\no(l+1)+jy(l+1)j\u00142\"c(1 +\"c)o(l)+ (1 + 2\"2\nc)jy(l)j+ 2\"c(1 +\"c)\nSince 2\"c= 2c=L\u00141, the above inequality can be simpli\fed as\no(l+1)+jy(l+1)j\u0014(1 + 2\"2\nc)(o(0)+jy(0)j) + 2:25\"c\n\u00142:25\"clX\nl0=0(1 + 2\"c)l0\u00142:25L\"ceL2\"2\nc\u00144c\nThus we obtain that for any l2[L],jy(l)j\u00144c. Plugging it back to the recursive formula for\no(l), we get\no(l+1)\u00141:25\"co(l)+ 2:25\"c\nThis gives us\no(l)\u00144c=L8l2[L]:\nNow the deviation of g(l)can be estimated by\nkg(l)(x; \u0002)\u0000g(l)(x; \u00020)k=k\u001b(B(l)z(l)(x) +r(l)y(l)(x))\u0000\u001b(B(l)\n0x)k\n\u0014kB(l)z(l)(x) +r(l)y(l)(x)\u0000B(l)\n0xk\nBy inserting the previous estimates, we obtain\nkg(l)(x; \u0002)\u0000g(l)(x; \u00020)k\u00146c2\nL\n9\n4.2 Backward stability\nFor convenience, we de\fne the gradients with respect to the neurons by\n\u000b(l)(x; \u0002) =ry(l)f(x; \u0002)\f(l)(x; \u0002) =rz(l)f(x; \u0002)\r(l)(x; \u0002) =rg(l)f(x; \u0002):\nFor simplicity, we will omit the explicit reference of xand \u0002 in these notations when it is\nclear from the context. 
Note that \u000b(l)2R;\r(l)2Rm;\f(l)2Rd, and it is easy to derive the\nfollowing back-propagation formula using the chain rule,\n\r(l)=a(l)\u000b(l+1)+\u0000\nC(l)\u0001T\f(l+1)\n\f(l)=\u0000\nB(l)\u0001T\r(l)\n\u000b(l)=\u000b(l+1)+ (r(l))T\r(l):(4.3)\nAt the top layer, we have that for any \u0002 and x:\n\u000b(L)= 1;\f(L)= 0:\nIn addition, we have at \u0002 0\n\u000b(l)(x; \u00020) = 1;\f(l)(x; \u00020) = 0;\r(l)(x; \u00020) = 0:\nProposition 4.2. IfL\u00156c2, we have for any \u00022Ic(\u00020)andx\nj\u000b(l)(x; \u0002)\u00001j\u00145c\nL;k\f(l)(x; \u0002)k\u00144c\nL;k\r(l)(x; \u0002)k\u00143c\nL(4.4)\nProof. According to the (4.3), we have\n \n\f(l)\n\u000b(l)!\n=0\n@(C(l)B(l))T(B(l))Ta(l)\n(C(l)r(l))T1 + (r(l))Ta(l)1\nA \n\f(l+1)\n\u000b(l+1)!\n;\nwhich gives us\nk\f(l)k\u0014\"c(1 +\"c)k\f(l+1)k+\"c(1 +\"c)\u000b(l+1)(4.5)\nj\u000b(l)j\u0014\"2\nck\f(l+1)k+ (1 +\"2\nc)\u000b(l+1): (4.6)\nAdding them, we obtain\n\"ck\f(l)k+\u000b(l)\u0014\"2\nc(2 +\"c)k\f(l+1)k+ (1 + 2:25\"2\nc)\u000b(l+1)\u0014(1 + 2:25\"2\nc)(k\f(l+1)k+\u000b(l+1)):\nTherefore, we have\n\u000b(l)\u0014\"\f(l)+\u000b(l)\u0014(1 + 2:25\"2\nc)L\u00141 + 5c\"c:\nInserting the above estimates back to (4.5) gives us\nk\f(l)k\u00141:25\"ck\f(l+1)k+ 2:5\"c;\nfrom which we obtain that\nk\f(l)k\u00144\"c\n10\n. Using the (4.3) again, we get\nk\r(l)k=ka(l)\u000b(l+1)+\u0000\nC(l)\u0001T\f(l+1)k\u00143\"c:\nFor the lower bound, using (4.3), we get\n\u000b(l)=\u000b(l+1)+ (r(l))T\r(l)\u0015\u000b(l+1)\u00003\"2\nc\n\u0015\u000bL\u00003L\"2\nc\u00151\u00003c2\nL:\n4.3 Bounding the gradients\nWe are now ready to bound the gradients. First note that we have\nra(l)f(x) =\u000b(l)(x)g(l)(x)\nrB(l)f(x) =\r(l)(x)\u0000\nz(l)(x)\u0001T\nrC(l)f(x) =\f(l)(x)\u0000\ng(l)(x)\u0001T\nrr(l)f(x) =\r(l)(x)y(l)(x);\nwhere we have omitted the dependence on \u0002. Using the stability results, we can bound the\ngradients by the empirical loss.\nLemma 4.3 (Upper bound) .IfL\u0015100c2, then for any \u00022Ic(\u00020)we have\nmaxfkra(l)^Rnk2;krr(l)^Rnk2g\u0014(1 +50c2\nL)^Rn\nmaxfkrB(l)^Rnk2;krC(l)^Rnk2g\u001420c2\nL2^Rn(4.7)\nProof. Using Lemma 4.1 and Lemma 4.2, we have\nkra(l)^Rnk2=k1\nnnX\ni=0e(xi;yi)\u000b(l)(xi)g(l)(xi)k2\n\u0014^Rn(\u0002)1\nnnX\ni=1k\u000b(l)(xi)g(l)(xi)k2\n\u0014^Rn(\u0002)\nnnX\ni=0(1 +5c\nL)2(1 +6c2\nL)2\n\u0014(1 +50c2\nL)^Rn(\u0002)\n11\nAnalogously, we have\n(A):krB(l)^Rnk2\nF=k1\nnnX\ni=0e(xi;yi)\r(l)(xi)\u0000\nz(l)(xi)\u0001Tk2\nF\n\u0014^Rn(\u0002)1\nnnX\ni=1(3c\nL)2(1 +4c\nL)2.15c2\nL2^Rn(\u0002);\n(B):krC(l)^Rnk2\nF=k1\nnnX\ni=0ei\f(l)(xi)g(l)\nk(xi)kF\n\u0014^Rn(\u0002)1\nnnX\ni=0(4c\nL)2(1 +6c2\nL)2\u001420c2\nL2^Rn(\u0002);\n(C):krr(l)^Rnk2=k1\nnnX\ni=0eiy(l)(xi)\r(l)(xi)k2\n\u0014^Rn(\u0002)1\nnnX\ni=0(4c)2(3c\nL)2\u00141\n2^Rn(\u0002):\nWe now turn to the lower bound. The technique used is similar to case for two-layer\nneural networks [13]. De\fne a Gram matrix H= (Hi;j)2Rn\u0002nwith\nHi;j(\u0002) =1\nnLLX\nl=1hra(l)f(xi);ra(l)f(xj)i: (4.8)\nAt the initialization, we have\nHi;j(\u00020) =1\nnLLX\nl=1h\u001b(B(l)\n0xi);\u001b(B(l)\n0xj)i:\nThis matrix can be viewed as an empirical approximation of the kernel matrix Kde\fned in\nSection 2.2, since each row of B(l)\n0is independently drawn from the uniform distribution over\nthe sphere of radius 1 =pm. Using standard concentration inequalities, we can prove that\nwith high probability, the smallest eigenvalue of the Gram matrix is bounded from below by\nthe smallest eigenvalue of the kernel matrix. 
This is stated in the following lemma, whose\nproof is deferred to Appendix B.\nLemma 4.4. For any\u000e2(0;1), assume that L\u00158 ln(n2=\u000e)\nm\u00152n. Then with probability at least\n1\u0000\u000eover the random initialization:\n\u0015min(H(\u00020))\u00153\u0015n\n4: (4.9)\nMoreover, we can show that for any \u0002 2Ic(\u00020), the Gram matrix H(\u0002) is still strictly\npositive de\fnite as long as Lis large enough.\nLemma 4.5. For any\u000e2(0;1), assume that L\u0015maxf8 ln(n2=\u000e)\nm\u00152n;200c2\n\u0015ng. With probability\n1\u0000\u000eover the random initialization, we have for any \u00022Ic(\u00020),\n\u0015min(H(\u0002))\u0015\u0015n\n2: (4.10)\n12\nProof.\nHi;j(\u0002)\u0000Hi;j(\u00020) =\n1\nnLLX\nl=1h\u000b(l)(xi; \u0002)g(l)(xi; \u0002);\u000b(l)(xj; \u0002)g(l)(xj; \u0002)i\u0000hg(l)(xi; \u00020);g(l)(xj; \u00020) (4.11)\nLemmas 4.1 and 4.2 tell us that for any xi2Sd\u00001\nkg(l)(xi; \u0002)\u0000g(l)(xi; \u00020)k\u00146c2\nL;ja(l)(xi; \u0002)\u00001j\u00145c\nL:\nUsing these in (4.11) gives us\njHi;j(\u0002)\u0000Hi;j(\u00020)j\u001450c2\nnL:\nApplying Weyl's inequality that \u001bmin(A1+A2)\u0015\u001bmin(A)\u0000\u001bmax(A2), we obtain\n\u0015min(H(\u0002))\u0015\u0015min(H(\u00020))\u0000kH(\u0002)\u0000H(\u00020)k2\n\u00153\n4\u0015n\u000050c2\nL:\nThus as long as L\u0015200c2\u0015\u00001\nn, we must have \u0015min(H(\u0002))\u0015\u0015n=2.\nWith the above lemma, we can now provide a lower bound for the square norm of the\ngradient.\nLemma 4.6 (Lower bound) .For any \fxed \u000e2(0;1), assume that L\u0015maxf8 ln(n2=\u000e)\nm\u00152n;200c2\n\u0015ng.\nWith probability at least 1\u0000\u000eover the random initialization, we have for any \u00022Ic(\u00020), the\nempirical risk satis\fes\nkr^Rn(\u0002)k2\u0015\u0015nL\n2^Rn(\u0002): (4.12)\nProof.\nkr^Rn(\u0002)k2\u0015LX\nl=1kra(l)^Rn(\u0002)k2=LX\nl=1k1\nnnX\ni=1e(xi;yi)ra(l)f(xi)k2\n=L\nneTH(\u0002)e\u0015L\u0015min(H(\u0002)) ^Rn(\u0002):\nApplying Lemma 4.5 completes the proof.\n5 Optimization\nProof of Theorem 3.2 Letcn= 8=\u0015nand de\fne a stopping time\nt0def= infft: \u0002t=2Icn(\u00020)g:\nBy Lemma 4.5, since\nL\u0015maxf8 ln(n2=\u000e)\nm\u00152n;200c2\nn\n\u0015ng;\n13\nwe have that with probability at least 1 \u0000\u000eover the choice of \u0002 0, the following inequality\nholds for any 0\u0014t\u0014t0:\nkr^Rn(\u0002t)k2\u0015\u0015nL\n2^Rn(\u0002t):\nHence we have\n^Rn(\u0002t)\u0014e\u0000\u0015nLt\n2^Rn(\u00020);8t\u0014t0:\nNow we prove that t0=1. Ift0<1, we must have \u0002 t02@(Icn(\u00020)), the boundary of\nIcn(\u00020). It is easy to see that for any l2[L],\nka(l)\nt0\u0000a(l)\n0k\u0014Zt0\n0kra(l)^Rnkdt(i)\n\u0014Z1\n0q\n(1 + 50c2n=L)^Rn(\u0002t)dt\n\u0014p\n(1 + 50c2n=L)Z1\n0e\u0000L\u0015nt\n4q\n^Rn(\u00020)dta\n\u00144p\n2\nL\u0015n<cn\nL(5.1)\nwhere (i) is due to Lemma 4.3. Similarly we have kr(l)\nt0\u0000r(l)\n0k<cn=L, and\nmaxfkC(l)\nt0\u0000C(l)\n0k;kB(l)\nt0\u0000B(l)\n0kg\u0014p\n20c2n=L24\nL\u0015n<cn\nL(5.2)\nThis says that \u0002 t02I\u000e\ncn(\u00020), which contradicts the de\fnition of t0.\nThe above proof also suggests that with high probability, \u0002 tis always close to the\ninitialization.\nProposition 5.1. For any\u000e2(0;1), assume that L\u0015maxf8\u0015\u00002\nnln(n2=\u000e);3000\u0015\u00003\nng. 
With\nprobability at least 1\u0000\u000eover the initialization \u00020, we have that for any t\u00150,\n\u0002t2Icn(\u00020);\nwherecn= 8=\u0015n.\n6 Generalization\n6.1 The reference model\nTo analyze the generalization error, we will consider the following random feature model [ 23]\nas a reference model,\n~f(x;a;B0)def=aT\u001b(B0x) =LX\nl=1(a(l))T\u001b(B(l)\n0x);\nwherea(l)2Rm;B(l)\n02Rm\u0002danda2RmL;B02RmL\u0002ddenote the stacked parameters.\nThe\u001b(B0x) andaare the random features and coe\u000ecients, respectively. The initialization\nofa(l)\n0andB(l)\n0are the same as the deep neural networks. fB(l)\n0gL\nl=1are kept \fxed during\nthe training after the initialization, while fa(l)gL\nl=1are updated according to gradient descent.\nFor this model, we de\fne the empirical and population risks by\n^En(a;B0)def=1\n2nnX\ni=1(~f(xi;a;B0)\u0000yi)2;E(a;B0)def=1\n2Ex;y[(~f(x;a;B0)\u0000y)2]:\nConcerning the reference model, we have\n14\nTheorem 6.1. Assume that the target function f\u0003satis\fes Assumption 3.3. Then for any\n\fxed\u000e2(0;1), with probability at least 1\u0000\u000eover the random choice of B0, there exists\na\u00032RmLsuch that\nE(a\u0003;B0)\u0014c2(\u000e)\r2(f\u0003)\nmL;\nwherec(\u000e) = 1 +p\n2 ln(1=\u000e). Furthermore,ka\u0003k\u0014\r(f\u0003)=p\nL.\nThis result essentially appeared in [ 25,24]. Since we are interested in the explicit control\nfor the norm of the solution, we provide a complete proof in Appendix C.1.\nGradient descent for the random feature model Denote by ~atthe solution of GD for\nthe random feature model:\n~a0=a0\nd~at\ndt=\u0000r^En(~at;B0):(6.1)\nThe generalization property of GD solutions for random feature models was analyzed in [ 8].\nHere we provide a much simpler approach based on the following lemma.\nLemma 6.2. For any \fxed B0, the gradient descent (6.1) satis\fes,\n^En(~at;B0)\u0014^En(a\u0003;B0) +ka0\u0000a\u0003k2\n2t\nk~at\u0000a\u0003k\u0014ka0\u0000a\u0003k+ 2t^En(a\u0003;B0):\nThe proof can be found in Appendix C.2. Here the key observation is the second inequality.\nIn the casea0= 0, we have\nk~atk\u00142ka\u0003k+ 2t^En(a\u0003;B0):\nNote that Theorem (6.1) implies that ^En(a\u0003;B0) is small. This gives us a control over the\nnorm of GD solutions. We can now obtain the following estimates of population risk.\nTheorem 6.3. Assume that the target function f\u0003satis\fes Assumption 3.3 and L&\r2(f\u0003).\nThen for any \fxed \u000e2(0;1), with probability at least 1\u0000\u000eover the random choice of B0, the\nfollowing holds for any t2[0;1]\nE(~at;B0).c2(\u000e)\r2(f\u0003)\nmL+\r2(f\u0003)\n2Lt+ \n1 +p\nL\r(f\u0003)tpn!2c3(\u000e)\r2(f\u0003)pn:\nThe three terms at the right hand side are bounds for the approximation error, optimization\nerror and estimation error, respectively. The proof of this result can be found in Appendix C.3.\n6.2 Bounding the di\u000berence between the two models\nWe use\u0012to represent C;rand denote the deep net by f(x;a;B;\u0012 ). Then for any xwe have,\nf(x;a;B0;0) = ~f(x;a;B0): (6.2)\nIn particular, when \u0002 is close to the initialization, we have the following bounds for the two\nmodels.\n15\nLemma 6.4. If\u0002 = (a;B;\u0012 )2Ic(\u00020), then we have for any x2Sd\u00001\njf(x;a;B;\u0012 )\u0000~f(x;a;B0)j\u00146c2kak=L\nkra(l)f(x;a;B;\u0012 )\u0000ra(l)~f(x;a;B0)k\u0014c2\nL:\nProof. 
De\fne ~f(l)(x) =Pl\ni=1a(l)\u001b(B(l)\n0x), then we have\n~f(l+1)(x)\u0000y(l+1)(x) =~f(l)l(x)\u0000y(l)(x) +a(l)(g(l)(x)\u0000\u001b(B(l)\n0x))\nProposition 4.1 implies that\nja(l)(g(l)(x)\u0000\u001b(B(l)\n0x))j\u00146ka(l)kc2=L:\nHence we have\njf(x;a;B;\u0012 )\u0000f(x;a;B0;0)j\u00146c2kak=L:\nBy applying Propositions 4.1 and 4.2, we have, for any x2Sd\u00001\nkra(l)f(x;a;B;\u0012 )\u0000ra(l)~f(x;a;B0)k=k\u000b(l)(x)g(l)(x)\u0000\u001b(B(l)\n0x)k\n\u0014k\u000b(l)(x)(g(l)(x)\u0000g(l)\n0(x))k+k(\u000b(l)(x)\u00001)g(l)\n0(x)k\n\u0014(1 +5c\nL)6c2\nL+5c\nL\n.c2\nL:\nNow we can proceed to bound the deviation between the GD dynamics of the two models.\nLet \u0002t= (at;Bt;\u0012t) denote the solution of gradient descent for the deep neural network model.\nWe have\nLemma 6.5. For any\u000e2(0;1), assume that L&maxf\u0015\u00002\nnln2(n=\u000e);\u0015\u00003\nng. With probability\nat least 1\u0000\u000eover the random initialization, we have for t2[0;1]\nkat\u0000~atk.c2\nntp\nL;\nProof. First we have that\nd(at\u0000~at)\ndt=\u00001\nnnX\ni=1(f(xi; \u0002t)\u0000yi)raf(xi; \u0002t)\u0000(~f(xi;~at;B0)\u0000yi)ra~f(xi;~at;B0)\n=\u00001\nnnX\ni=1(f(xi; \u0002t)\u0000yi)raf(xi; \u0002t)\u0000(~f(xi;at;B0)\u0000yi)ra~f(xi;at;B0)\n\u00001\nnnX\ni=1(~f(xi;at;B0)\u0000yi)ra~f(xi;at;B0)\u0000(~f(xi;~at;B0)\u0000yi)ra~f(xi;~at;B0)\n=:\u00001\nnnX\ni=1Pi\nt\u00001\nnnX\ni=1Qi\nt:\n16\nThe last equality de\fnes Pi\ntandQi\nt. Let us estimate Qi\nt;Pi\ntseparately. We \frst have\nQi\nt=\u001b(B0xi)\u001bT(B0xi)(at\u0000~at):\nProposition 5.1 tells us that \u0002 t2Icn(\u00020) withcn= 8=\u0015n. Hence\nkatk2=kat\u0000a0k2\u0014LX\nl=1ka(l)\nt\u0000a(l)\n0k2\u0014c2\nn=L\u00141:\nUsing Lemma 6.4, we have\nkPi\ntk\u0014jf(xi; \u0002t)jkraf(x; \u0002t)\u0000r a~f(x;at;B0)k\n+jf(xi; \u0002t)\u0000~f(xi;at;B0)jkra~f(x;at;B0)k\n+jyijkraf(xi; \u0002t)\u0000~f(x;at;B0)k\n\u00142c2\nnp\nL+6c2\nnkatkp\nL\u00143c2\nnp\nL: (6.3)\nLet\u000et=at\u0000~at, then\ndk\u000etk2\ndt=\u0000\u000eT\nt1\nnnX\ni=1\u001b(B0xi)\u001bT(B0xi)\u000et\u00001\nnnX\ni=1h\u000et;Pi\nti;\n\u00141\nnnX\ni=1k\u000etkkPi\ntk\u00143c2\nnp\nLk\u000etk;\nwhere the last inequality follows from (6.3). Thus we have\ndk\u000etk\ndt\u00143c2\nnp\nL:\nSincek\u000e0k= 0, we have\nk\u000etk\u0014Zt\n0dk\u000et0k\ndtdt0\u00143c2\nntp\nL:\n6.3 Implicit regularization\nThe previous analysis shows that the gradient descent dynamics of the deep neural network\nmodel stays close that of the reference model. More speci\fcally, we have the following result.\nTheorem 6.6. For any\u000e2(0;1), assume that L&maxf\u0015\u00002\nnln(n2=\u000e);\u0015\u00003\nng. Then with\nprobability at least 1\u0000\u000eover the initialization \u00020, we have that for any t\u00150,\njf(x;at;Bt;\u0012t)\u0000~f(x;at;B0)j.c3\nn\nL3=2\nProof. By Proposition 5.1, we know that \u0002 t= (at;Bt;\u0012t)2Icn(\u00020). So we can use Lemma 6.4,\nwhich gives us,\njf(x;at;Bt;\u0012t)\u0000~f(x;at;B0)j\u00146c2\nnkatk\nL\n(i)\n\u00146c3\nn\nL3=2;\n17\nwhere (i) is due to the fact that we have katk2=PL\nl=1ka(l)k2\u0014c2\nn=Lfor \u00022Icn(\u00020).\nThe above theorem implies that the functions represented by the GD trajectory are\nuniformly close to that of the random feature model if Lis large enough. This allows us to\nestimate the population risk for the deep neural network model using results for the random\nfeature model\nProposition 6.7. For any \fxed \u000e>0, assume that L&maxf\u0015\u00002\nnln(n2=\u000e);\u0015\u00003\nng. 
Then with\nprobability at least 1\u0000\u000eover the random initialization \u00020, we have\nR(\u0002t).c3\nn\nL3=2+c2\nnt+E(~at;B0):\nProof. We write\nR(at;Bt;\u0012t) =R(at;Bt;\u0012t)\u0000R(at;B0;0) +R(at;B0;0)\u0000R(~at;B0;0)\n+R(~at;B0;0)\n=R(at;Bt;\u0012t)\u0000E(at;B0) +E(at;B0)\u0000E(~at;B0) +E(~at;B0):(6.4)\nLet \u0001i=~f(xi;at;B0)\u0000f(xi; \u0002t). Since \u0002 t2Icn(\u00020), we have\nj\u0001ij\u00146c2\nnkatk=L\u00146c3\nn=L3=2\u00141:\nThen\njR(at;Bt;\u0012t)\u0000E(at;B0)j=j1\n2nnX\ni=1(f(xi; \u0002t)\u0000yi)2\u0000(f(xi; \u0002t)\u0000yi+ \u0001i)2j\n=j1\nnnX\ni=1ei\u0001ij+j1\n2n\u00012\nij\n\u00142^Rn(at;Bt;\u0012t) max\nij\u0001ij\n\u001412^Rn(\u00020)c3\nn=L3=2\u00146c3\nn=L3=2;\nwhere we used the fact that ^Rn(\u00020)\u00141=2. In addition, we have for the reference model,\njE(at;B0)\u0000E(~at;B0)j\u0014kat\u0000~atkkB0k\u00143c2\nnt:\nThe proof is completed by combining these estimates.\nProof of Theorem 3.4 By directly combining Theorem 6.3 and Proposition 6.7, we obtain\nTheorem 3.4.\nProof of Corollary 3.5 The condition on Limplies that L\u0015maxfpn;\r2(f\u0003)g. Hence\nT=pn=L\u00141. Using Theorem 3.4, we get\nR(\u0002T)\u00141\nL3=2\u00152n+pn\nL\u00152n+\r2(f\u0003)pn+\u0012\n1 +\r(f\u0003)p\nL\u00132c3(\u000e)\r2(f\u0003)pn:\nSinceL&\u0015\u00002\nnn, the above inequality can be simpli\fed as\nR(\u0002T).c3(\u000e)\r2(f\u0003)pn:\n18\n7 Discrete Time Analysis\nIn this section, we will analyze the discrete time gradient descent with a constant learning\nrate\u0011:\n\u0002t+1= \u0002t\u0000\u0011r^Rn(\u0002t); t= 0;1;2;::: (7.1)\nThe following theorem shows that the linear convergence of empirical risk ^Rn(\u0002t) still holds\nfor the discrete version.\nTheorem 7.1. Consider the neural network model (2.2) with initialization (2.5) . For any\n\fxed\u000e2(0;1), assume that the depth L&maxf\u0015\u00002\nnln(n2=\u000e);\u0015\u00003\nngand the learning rate\n\u0011.\u0015n\nL. Then with probability at least 1\u0000\u000eover the random initialization, we have\n^Rn(\u0002t)\u0014\u0012\n1\u0000\u0015n\u0011L\n8\u0013t\n^Rn(\u00020): (7.2)\nTo prove this theorem, we need several preliminary results.\nLemma 7.2. Assume that L&c2, and \u0002t;\u0002t+12Ic(\u00020). Then for any x2Sd\u00001,\nkg(l)\nt+1(x)\u0000g(l)\nt(x)k.c\u0011q\n^Rn(\u0002t): (7.3)\nProof. 
We use \u0001 to denote the increment of a vector or matrix from iteration ttot+ 1, i.e.,\n\u0001v=vt+1\u0000vt.\nFor the weights a(l),B(l),C(l)andr(l), from Lemma 4.3, we have\nk\u0001a(l)k2=\u00112kra(l)^Rn(\u0002t)k2.\u00112^Rn(\u0002t);k\u0001B(l)k2\nF&c2\u00112\nL2^Rn(\u0002t);\nk\u0001C(l)k2\nF&c2\u00112\nL2^Rn(\u0002t);k\u0001r(l)k2&c4\u00112\nL2^Rn(\u0002t):\nFor the neurons z(l),y(l)andg(l), since \u0002 t;\u0002t+12Ic(\u00020), we have\nk\u0001z(l+1)k=\r\r\rC(l)\nt+1g(l)\nt+1\u0000C(l)\ntg(l)\nt\r\r\r\n\u0014kC(l)\ntkkg(l)\nt+1\u0000g(l)\ntk+kC(l)\nt+1\u0000C(l)\ntkkg(l)\nt+1k\n.c\nLk\u0001g(l)k+c\u0011\nLq\n^Rn(\u0002t); (7.4)\nk\u0001y(l+1)k=\r\r\r\u0001y(l)+a(l)\nt+1g(l)\nt+1\u0000a(l)\ntg(l)\nt\r\r\r\n\u0014k\u0001y(l)k+ka(l)\ntkkg(l)\nt+1\u0000g(l)\ntk+ka(l)\nt+1\u0000a(l)\ntkkg(l)\nt+1k\n.k\u0001y(l)k+c\nLk\u0001g(l)k+\u0011q\n^Rn(\u0002t); (7.5)\nk\u0001g(l)k=\r\r\r\u001b\u0010\nB(l)\nt+1z(l)\nt+1+r(l)\nt+1y(l)\nt+1\u0011\n\u0000\u001b\u0010\nB(l)\ntz(l)\nt+r(l)\nty(l)\nt\u0011\r\r\r\n\u0014\r\r\rB(l)\nt+1z(l)\nt+1+r(l)\nt+1y(l)\nt+1\u0000B(l)\ntz(l)\nt\u0000r(l)\nty(l)\nt\r\r\r\n\u0014kB(l)\ntkk\u0001z(l)k+k\u0001B(l)kkz(l)\nt+1k+kr(l)\ntkk\u0001y(l)k+k\u0001r(l)kky(l)\nt+1k\n.k\u0001z(l)k+c\u0011\nLq\n^Rn(\u0002t) +c\nLk\u0001y(l)k+c3\u0011\nLq\n^Rn(\u0002t): (7.6)\n19\nPlugging (7.4) (7.5) in(7.6), and usingk\u0001y(l)k=Pl\u00001\nk=0\u0002\nk\u0001y(k+1)k\u0000k \u0001y(k)k\u0003\n(since \u0001y(0)=\n0), we obtain\nk\u0001g(l)k.k\u0001z(l)k+c\nLk\u0001y(l)k+c3\u0011\nLq\n^Rn(\u0002t)\n.\u0014c\nLk\u0001g(l\u00001)k+c\u0011\nLq\n^Rn(\u0002t)\u0015\n+c\nLl\u00001X\nk=0\u0014c\nLk\u0001g(k)k+\u0011q\n^Rn(\u0002t)\u0015\n+c3\u0011\nLq\n^Rn(\u0002t)\n.c\nLk\u0001g(l\u00001)k+c2\nL2l\u00001X\nk=0k\u0001g(k)k+\u0012c\u0011\nL+c\u0011+c3\u0011\nL\u0013q\n^Rn(\u0002t)\n.c\nLk\u0001g(l\u00001)k+c2\nL2l\u00001X\nk=0k\u0001g(k)k+c\u0011q\n^Rn(\u0002t):\nSinceL&c2, by induction, we have k\u0001g(l)k.c\u0011q\n^Rn(\u0002t) forl= 0;1;:::;L\u00001.\nLemma 7.3. Assume that \u00150=\u0015min(H(\u00020))>0. Assume further that L&c2=\u00150and\n\u0011.\u00150=L. 
If\u0002t;\u0002t+12Ic(\u00020), then\n^Rn(\u0002t+1)\u0014\u0012\n1\u0000\u00150\u0011L\n4\u0013\n^Rn(\u0002t): (7.7)\nProof.\n^Rn(\u0002t+1)\u0000^Rn(\u0002t) =1\n2nnX\ni=1h\n[f(xi; \u0002t+1)\u0000yi]2\u0000[f(xi; \u0002t)\u0000yi]2i\n=1\nnnX\ni=1[f(xi; \u0002t)\u0000yi][f(xi; \u0002t+1)\u0000f(xi; \u0002t)] +1\n2nnX\ni=1[f(xi; \u0002t+1)\u0000f(xi; \u0002t)]2:\nNotice that\nf(x; \u0002) =L\u00001X\nl=0D\na(l);g(l)(x)E\n;\nwe can write\n^Rn(\u0002t+1)\u0000^Rn(\u0002t) =I1+I2+I3;\nwhere\nI1=1\nnnX\ni=1[f(xi; \u0002t)\u0000yi]L\u00001X\nl=0D\na(l)\nt+1\u0000a(l)\nt;g(l)\nt(xi)E\n;\nI2=1\nnnX\ni=1[f(xi; \u0002t)\u0000yi]L\u00001X\nl=0D\na(l)\nt+1;g(l)\nt+1(xi)\u0000g(l)\nt(xi)E\n;\nI3=1\n2nnX\ni=1[f(xi; \u0002t+1)\u0000f(xi; \u0002t)]2:(7.8)\nWe will show that I1is the dominant term that leads to linear convergence, and I2,I3are\nhigh order terms.\n20\nForI1, since\na(l)\nt+1\u0000a(l)\nt=\u0000\u0011ra(l)^Rn(\u0002t) =\u0000\u0011\n2nnX\nj=1[f(xj; \u0002t)\u0000yj]\u000b(l)\ntg(l)\nt(xj);\nwe have\nI1=\u0000\u0011L\n2nnX\ni;j=1[f(xi; \u0002t)\u0000yi][f(xj; \u0002t)\u0000yj]~Hi;j(\u0002t);\nwhere\n~Hi;j(\u0002t) =1\nLL\u00001X\nl=0D\n\u000b(l)\ntg(l)\nt(xj);g(l)\nt(xi)E\n:\nFollowing the proof of Lemma 4.5, for \u0002 t2Ic(\u00020), we have\nj~Hi;j(\u0002t)\u0000Hi;j(\u00020)j=1\nLL\u00001X\nl=0hD\n\u000b(l)\ntg(l)\nt(xj);g(l)\nt(xi)E\n\u0000D\ng(l)\n0(xj);g(l)\n0(xi)Ei\n.c2\nL;\nand\u0015min(~H(\u0002t))\u0015\u0015min(H(\u00020))=2 =\u00150=2 sinceL&c2=\u00150. Therefore, we have\nI1\u0014\u0000\u0011L\n2n\u0001\u00150\n2nX\ni=1[f(xi; \u0002t)\u0000yi]2=\u0000\u00150\u0011L\n2^Rn(\u0002t): (7.9)\nForI2andI3, since \u0002t;\u0002t+12Ic, Lemma 4.3 implies that ka(l)\nt+1\u0000a(l)\ntk=\u0011kra(l)^Rn(\u0002t)k.\n\u0011q\n^Rn(\u0002t), and Lemma 7.2 implies that kg(l)\nt+1\u0000g(l)\ntk.c\u0011q\n^Rn(\u0002t). We have\nI2=1\nnnX\ni=1[f(xi; \u0002t)\u0000yi]L\u00001X\nl=0D\na(l)\nt+1;g(l)\nt+1(xi)\u0000g(l)\nt(xi)E\n\u00141\nnnX\ni=1jf(xi; \u0002t)\u0000yijL\u00001X\nl=0ka(l)\nt+1kkg(l)\nt+1(xi)\u0000g(l)\nt(xi)k\n.q\n^Rn(\u0002t)\u0001L\u0001c\nL\u0001c\u0011q\n^Rn(\u0002t)\n=c2\u0011^Rn(\u0002t).\u00150\u0011L^Rn(\u0002t);\nwherec2\u0011.\u00150\u0011LsinceL&c2=\u00150. Meanwhile, since\nf(xi; \u0002t+1)\u0000f(xi; \u0002t) =L\u00001X\nl=0hD\na(l)\nt+1\u0000a(l)\nt;g(l)\nt(xi)E\n+D\na(l)\nt+1;g(l)\nt+1(xi)\u0000g(l)\nt(xi)Ei\n\u0014L\u00001X\nl=0h\nka(l)\nt+1\u0000a(l)\ntkkg(l)\nt(xi)k+ka(l)\nt+1kkg(l)\nt+1(xi)\u0000g(l)\nt(xi)ki\n.L\u00001X\nl=0\u0014\n\u0011q\n^Rn(\u0002t)\u00011 +c\nL\u0001c\u0011q\n^Rn(\u0002t)\u0015\n=\u0011(L+c2)q\n^Rn(\u0002t).\u0011Lq\n^Rn(\u0002t); (7.10)\n21\nwe also have\nI3=1\n2nnX\ni=1[f(xi; \u0002t+1)\u0000f(xi; \u0002t)]2.\u00112L2^Rn(\u0002t).\u00150\u0011L\nn^Rn(\u0002t); (7.11)\nwhere\u00112L2.\u00150\u0011Lsince\u0011.\u00150=L.\nCombining (7.9) (7.10) and (7.11), we get\n^Rn(\u0002t+1)\u0000^Rn(\u0002t) =I1+I2+I3\u0014\u0000\u00150\u0011L\n4^Rn(\u0002t):\nLemma 7.4. Letc&1=\u00150where\u00150=\u0015min(H(\u00020))>0. Lett0be such that \u0002t2Ic(\u00020)and\n^Rn(\u0002t)\u0014\u0010\n1\u0000\u00150\u0011L\n4\u0011t^Rn(\u00020)fort= 0;1;:::;t 0, then we have \u0002t0+12Ic(\u00020).\nProof. From Lemma 4.3, ka(l)\nt+1\u0000a(l)\ntk=\u0011kra(l)^Rn(\u0002t)k.\u0011q\n^Rn(\u0002t)for \u0002t2Ic(\u00020). 
So\nwe have\nka(l)\nt0+1\u0000a(l)\n0k\u0014t0X\nt=0ka(l)\nt+1\u0000a(l)\ntk=\u0011t0X\nt=0kra(l)^Rn(\u0002t)k\n.\u0011t0X\nt=0q\n^Rn(\u0002t)\u0014\u0011q\n^Rn(\u00020)t0X\nt=0\u0012\n1\u0000\u00150\u0011L\n4n\u0013t\n2\n\u0014\u0011q\n^Rn(\u00020)\u00018\n\u00150\u0011L=8\n\u00150Lq\n^Rn(\u00020)<c\nL\nif we choose the absolute constant Clarge enough. Similar results hold for B(l),C(l)andr(l).\nTherefore, \u0002 t0+12Ic(\u00020).\nProof of Theorem 7.1 SinceL&\u0015\u00002\nnln(n2=\u000e), by Lemma 4.4, \u00150=\u0015min(H(\u00020))\u0015\u0015n=2\nwith probability at least 1 \u0000\u000e.\nLetc&1=\u00150. We prove the theorem by introduction:\n\u0002t2Ic(\u00020);^Rn(\u0002t)\u0014\u0012\n1\u0000\u0015n\u0011L\n8\u0013t\n^Rn(\u00020): (7.12)\nObviously statement (7.12) holds fort= 0. Assume that it holds for t= 0;1;:::;t 0, then by\nLemma 7.4, we have \u0002 t0+12Ic(\u00020). Thus the assumptions of Lemma 7.3 hold, and we have\n^Rn(\u0002t0+1)\u0014\u0012\n1\u0000\u00150\u0011L\n4\u0013\n^Rn(\u0002t0)\u0014\u0012\n1\u0000\u0015n\u0011L\n8\u0013t0+1\n^Rn(\u00020):\nTherefore, statement (7.12) holds for t0+ 1. This completes the proof of Theorem 7.1.\n8 Conclusion\nWith what we have learned from numerical results presented earlier and the theoretical results\nfor this simpli\fed deep neural network model, one is tempted to speculate that similar results\n22\nhold for the ResNet model for the general case, i.e. the GD dynamics does converge to the\nglobal minima of the empirical risk, and the GD path stays close to the GD path for the\ncompositional random feature model during the appropriate time intervals. A natural next\nstep is to put these speculative statements on a rigorous footing.\nWe should also note that even if they are true, these results do not exclude the possibility\nthat in some other parameter regimes, the models found by the GD algorithm for ResNets\ngeneralize better than the ones for the compositional random feature model and that there\nexist non-trivial \\implicit regularization\" regimes. They do tell us that \fnding these regimes\nis a non-trivial task.\nAcknowledgement: The work presented here is supported in part by a gift to Princeton\nUniversity from iFlytek and the ONR grant N00014-13-1-0338.\nReferences\n[1]Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overpa-\nrameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918 ,\n2018.\n[2]Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning\nvia over-parameterization. arXiv preprint arXiv:1811.03962 , 2018.\n[3]Sanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu. A convergence analysis of\ngradient descent for deep linear neural networks. In International Conference on Learning\nRepresentations , 2019.\n[4]Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained anal-\nysis of optimization and generalization for overparameterized two-layer neural networks.\narXiv preprint arXiv:1901.08584 , 2019.\n[5]Peter L. Bartlett, David P. Helmbold, and Philip M. Long. Gradient descent with identity\ninitialization e\u000eciently learns positive de\fnite linear transformations. In International\nConference on Machine Learning , pages 520{529, 2018.\n[6]Mikio L Braun. Accurate error bounds for the eigenvalues of the kernel matrix. Journal\nof Machine Learning Research , 7(Nov):2303{2328, 2006.\n[7]Yuan Cao and Quanquan Gu. A generalization theory of gradient descent for learning\nover-parameterized deep ReLU networks. 
arXiv preprint arXiv:1902.01384 , 2019.\n[8]Luigi Carratino, Alessandro Rudi, and Lorenzo Rosasco. Learning with SGD and random\nfeatures. In Advances in Neural Information Processing Systems , pages 10213{10224,\n2018.\n[9]L\u0013 ena \u0010c Chizat and Francis Bach. On the global convergence of gradient descent for\nover-parameterized models using optimal transport. In Advances in Neural Information\nProcessing Systems 31 , pages 3040{3050. 2018.\n[10]Amit Daniely. SGD learns the conjugate kernel class of the network. In Advances in\nNeural Information Processing Systems , pages 2422{2430, 2017.\n23\n[11]Simon S Du and Wei Hu. Width provably matters in optimization for deep linear neural\nnetworks. arXiv preprint arXiv:1901.08572 , 2019.\n[12]Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent\n\fnds global minima of deep neural networks. arXiv preprint arXiv:1811.03804 , 2018.\n[13]Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably\noptimizes over-parameterized neural networks. In International Conference on Learning\nRepresentations , 2019.\n[14]Weinan E, Chao Ma, and Qingcan Wang. A priori estimates of the population risk for\nresidual networks. arXiv preprint arXiv:1903.02154 , 2019.\n[15]Weinan E, Chao Ma, and Lei Wu. A priori estimates for two-layer neural networks. arXiv\npreprint arXiv:1810.06397 , 2018.\n[16]Weinan E, Chao Ma, and Lei Wu. A comparative analysis of the optimization and\ngeneralization property of two-layer neural network and random feature models under\ngradient descent dynamics. arXiv preprint arXiv:1904.04326 , 2019.\n[17]Arthur Jacot, Franck Gabriel, and Cl\u0013 ement Hongler. Neural tangent kernel: Convergence\nand generalization in neural networks. In Advances in neural information processing\nsystems , pages 8580{8589, 2018.\n[18]Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In\nInternational Conference on Learning Representations , 2015.\n[19]Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic\ngradient descent on structured data. In Advances in Neural Information Processing\nSystems , pages 8168{8177, 2018.\n[20]Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine\nlearning . MIT press, 2018.\n[21]Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real in-\nductive bias: On the role of implicit regularization in deep learning. arXiv preprint\narXiv:1412.6614 , 2014.\n[22]Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control\nin neural networks. In Conference on Learning Theory , pages 1376{1401, 2015.\n[23]Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In\nAdvances in neural information processing systems , pages 1177{1184, 2008.\n[24]Ali Rahimi and Benjamin Recht. Uniform approximation of functions with random\nbases. In Annual Allerton Conference on Communication, Control, and Computing ,\npages 555{561. IEEE, 2008.\n[25]Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing min-\nimization with randomization in learning. In Advances in neural information processing\nsystems , pages 1313{1320, 2009.\n24\n[26]Grant Rotsko\u000b and Eric Vanden-Eijnden. Parameters as interacting particles: long time\nconvergence and asymptotic error scaling of neural networks. In Advances in neural\ninformation processing systems , pages 7146{7155, 2018.\n[27]Shai Shalev-Shwartz and Shai Ben-David. 
Understanding machine learning: From theory\nto algorithms . Cambridge university press, 2014.\n[28]Justin Sirignano and Konstantinos Spiliopoulos. Mean \feld analysis of neural networks:\nA central limit theorem. arXiv preprint arXiv:1808.09372 , 2018.\n[29]Mei Song, Andrea Montanari, and Phan-Minh Nguyen. A mean \feld view of the landscape\nof two-layers neural networks. In Proceedings of the National Academy of Sciences , volume\n115, pages E7665{E7671, 2018.\n[30]Bo Xie, Yingyu Liang, and Le Song. Diverse neural network learns true target functions.\nInArti\fcial Intelligence and Statistics , pages 1216{1224, 2017.\n[31]Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Under-\nstanding deep learning requires rethinking generalization. In International Conference\non Learning Representations , 2017.\n[32]Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent\noptimizes over-parameterized deep ReLU networks. arXiv preprint arXiv:1811.08888 ,\n2018.\n25\nA Experiment setup\nResNet The residual network considered is given by\nh(1)=V(0)x2Rd+1\nh(l+1)=h(l)+U(l)\u001b(V(l)h(l)); l= 1;\u0001\u0001\u0001;L\u00001\nf(x; \u0002) =wTh(L);(A.1)\nwhereU(l)2R(d+1)\u0002m;V(l)2Rm\u0002(d+1);w2Rd+1, andV(0)= (Id;0)T2R(d+1)\u0002d. Through-\nout the experiment, since we are interested in the e\u000bect of depth, we choose m= 1 to\nspeed up the training. For any j;k2[d+ 1];i2[m], we initialize the ResNet by U(l)\nk;i= 0,\nV(l)\ni;j\u0018N(0;1\nm).wis initialized as (0 ;:::; 0;1) and kept \fxed during training. We use gradient\ndescent to optimize the empirical risk and the learning rates for the ResNets of di\u000berent\ndepths are manually tunned to achieve the best test performance.\nCompositional norm regularization Consider the following regularized model\nminimizeJ(\u0002) := ^Rn(\u0002) +\u0015pnk\u0002kP: (A.2)\nHere the compositional norm is de\fned by (in this case it is the direct extension of the path\nnorm [22] to ResNets )\nk\u0002kPdef=jwjTLY\nl=1(I+jU(l)jjV(l)j)jV(0)j:\nHere for a matrix A= (ai;j), we de\fnejAjdef=(jai;jj). For the above regularized model, we\nuse Adam [18] optimizer to solve problem (A.2).\nB Proof of Lemma 4.4\nProof. For a given t\u00150, de\fne the event Si;j=f\u00020:jHi;j(\u00020)\u0000k(xi;xj)j\u0014t=ng. Using\nHoe\u000bding's inequality, we get PfSc\ni;jg\u0014e\u00002mLt2. Hence\nPf\\i;jSi;jg= 1\u0000Pf[i;jSc\ni;jg\n\u00151\u0000X\ni;jPfSc\ni;jg\n\u00151\u0000n2e\u00002mLt2:\nTherefore, with probability 1 \u0000n2e\u00002mLt2, the following inequality holds,\n\u0015min(H(\u00020))\u0015\u0015min(K)\u0000kH\u0000KkF\u0015\u0015n\u0000t:\nTakingt=\u0015n=4, we complete the proof.\n26\nC Proofs for the random feature model\nC.1 Proof of Theorem 6.1\nNote thatB0= (b0\n1;:::;b0\nmL)T2RmL\u0002dwithpmb0\nj\u0018\u0019(\u0001). By choosing a\u0003= (a\u0003\n1;:::;a\u0003\nmL)T\nwitha\u0003\nj=a\u0003(pmb0\nj)=(pmL), we have\n~f(x;a\u0003;B0) =1\nmLmLX\nj=1a\u0003(pmb0\nj)\u001b(xTpmb0\nj):\nTherefore EB0[~f(x;a\u0003;B0)] =f\u0003(x). Consider the following random variable,\nZ(B0) =k~f(\u0001;a\u0003;B0)\u0000f\u0003(\u0001)kdef=q\nExj~f(x;a\u0003;B0)\u0000f\u0003(x)j2:\nLet~B0= (b0\n1;:::; ~b0\nj;:::;b0\nmL)T, which equals to B0except ~b0\nj. 
Then we have\njZ(B0)\u0000Z(~B0)j=k~f(\u0001;a\u0003(B0);B0)\u0000f\u0003(\u0001)k\u0000k ~f(\u0001;a\u0003(~B0);~B0)\u0000f\u0003(\u0001)k\n\u0014k~f(\u0001;a\u0003(B0);B0)\u0000~f(\u0001;a\u0003(~B0);~B0)k\n\u00142\r(f\u0003)\nmL\nApplying McDiarmid's inequality, we have that with probability 1 \u0000\u000ethe following inequality\nholds\nZ(B0)\u0014E[Z(B0)] +\r(f\u0003)r\n2 ln(1=\u000e)\nmL: (C.1)\nUsing the Cauchy-Schwarz inequality, we have E[Z(B0)]\u0014p\nE[Z2(B0)]. Since\nE[Z2(B0)] =ExEB0j~f(x;a\u0003;B0)\u0000f\u0003(x)j2\u0014\r(f\u0003)\nmL;\nwe obtain E[Z(B0)]\u0014\r(f\u0003)p\nmL. Plugging this into (C.1) gives us\nExj~f(x;a\u0003;B0)\u0000f\u0003(x)j2=Z2(B0)\u0014\r2(f\u0003)\nmL\u0010\n1 +p\n2 ln(1=\u000e)\u00112\n:\nIn addition, it is clear that ka\u0003k=qPmL\nj=1ja\u0003\njj2\u0014\r(f\u0003)=p\nL.\nC.2 Proof of Lemma 6.2\nLetJ(t)def=t(^En(~at;B0)\u0000^En(a\u0003;B0)) +1\n2k~at\u0000a\u0003k2, then we have\ndJ(t)\ndt=^En(~at;B0)\u0000^En(a\u0003;B0)\u0000h~at\u0000a\u0003;r^En(~at)i\u0000tkr^En(~at;B0)k2:\nBy using the convexity of ^En(\u0001;B0), it is easy to see that dJ=dt\u00140. Therefore, J(t)\u0014J(0).\nThis leads to\nt(^En(~at;B0)\u0000^En(a\u0003;B0)) +1\n2k~at\u0000a\u0003k2\u00141\n2ka0\u0000a\u0003k2:\nIt is easy to see that\nt^En(~at;B0)\u0014t^En(a\u0003;B0) +1\n2ka0\u0000a\u0003k2\u00001\n2k~at\u0000a\u0003k2\n1\n2k~at\u0000a\u0003k2\u00141\n2ka0\u0000a\u0003k2+t^En(a\u0003;B0):\nThis completes the proof.\n27\nC.3 Proof of Theorem 6.3\nLet\ngen(a;B0) =jE(a;B0)\u0000^En(a;B0)j:\nUsing Lemma 6.2 and Theorem 6.1, we have\nE(~at;B0)\u0014jE(~at;B0)\u0000^En(~at;B0)j+^En(~at;B0)\n(i)\n\u0014jE(~at;B0)\u0000^En(~at;B0)j+j^En(a\u0003;B0)\u0000E(a\u0003;B0)j+E(a\u0003;B0) +k~a0\u0000a\u0003k2\n2t\n(ii)\n\u0014gen(~at;B0) + gen(a\u0003;B0) +c(\u000e)\r2(f\u0003)\nmL+\r2(f\u0003)\n2Lt: (C.2)\nWe next proceed to estimate the two generalization gaps by using the Rademacher complexity.\nLetFc=faT\u001b(B0x) :kak\u0014cg, andHc=f1\n2(aT\u001b(B0x)\u0000y)2:kak\u0014cg. We \frst have\nRad(Fc) =1\nnE[ sup\nkak\u0014cnX\ni=1\u0018iaT\u001b(B0xi)]\n=1\nnE[ sup\nkak\u0014cha;nX\ni=1\u0018i\u001b(B0xi)i]\n\u0014c\nnE[knX\ni=1\u0018i\u001b(B0xi)k]\n(i)\n\u0014c\nnvuutE[knX\ni=1\u0018i\u001b(B0xi)k2]\n\u0014c\nnvuutnX\ni=1E[\u00182\ni]\u001b(B0xi)T\u001b(B0xi) +X\ni6=jE[\u0018i\u0018j]\u001b(B0xi)T\u001b(B0xj)\n\u0014p\nLcpn;\nwhere the expectation is taken over \u00181;:::;\u0018n, and (i) follows from the fact thatp\ntis concave\nin (0;1].\nFor anyathat satis\feskak\u0014c, it is obvious that j~f(x;a;B0)j\u0014p\nLc. Let`(y0;y) =\n(y0\u0000y)2\n2denote the loss function. In this case, jy0j\u0014p\nLcandjyj\u00141, so`(\u0001;y) is (p\nLc+\n1)\u0000Lipschitz continuous. Using the contraction property of Rademacher complexity (see e.g.\n[27, 20]), we have\nRad(Hc)\u0014(p\nLc+ 1)Rad(Fc)\u0014p\nLc(c+ 1)pn:\nThe standard Rademacher complexity based bound gives that for any \fxed \u000e2(0;1), with\nprobability at least 1 \u0000\u000e\ngen(a;B0)\u00142(p\nLc+ 1)cp\nLpn+ (p\nLc+ 1)2r\nln(1=\u000e)\nn; (C.3)\nfor anyasuch thatkak\u0014c.\n28\nConsider a decomposition of the whole hypothesis space, F=[k2N+FkwithFk=\nfaT\u001b(B0x) :kak\u0014cg. Let\u000ek=\u000e\nk2. Ifkis pre-speci\fed, then with probability 1 \u0000\u000ek,(C.3)\nholds forc=k. 
Then from the union bound for all k2N+, we obtain that for any \fxed\n\u000e>0, with probability 1 \u0000\u000e, the following estimates holds for any a,\ngen(a;B0).(p\nLkak+ 1)kakp\nLpn+ (p\nLkak+ 1)2r\nln((1 +kak)=\u000e)\nn:\nTheorem 6.1 says that ka\u0003k\u0014\r(f\u0003)=p\nL. Hence we obtain\ngen(a\u0003;B0).\r2(f\u0003)1 +p\nln(1=\u000e)pn: (C.4)\nBy Lemma 6.2, we have\nk~atk\u00142ka\u0003k+t^En(a\u0003;B0)\n\u00142ka\u0003k+t(E(a\u0003;B0) + gen(a\u0003;B0))\n.\r(f\u0003)p\nL+t\r2(f\u0003)(c2(\u000e)\nmL+1 +p\nln(1=\u000e)pn);\nwherec(\u000e) = 1 +p\nln(1=\u000e). So using L\u0015\r2(f\u0003)\u00151, we have for any t2[0;1],\ngen(~at;B0). \n1 +p\nL\r(f\u0003)tpn!2c3(\u000e)\r2(f\u0003)pn; (C.5)\nPlugging the estimates (C.4) and (C.5) into (C.2) completes the proof.\n29",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
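The appendix at the end of the record above (A.1)-(A.2) describes the experimental ResNet, h(1) = V(0)x, h(l+1) = h(l) + U(l) sigma(V(l) h(l)), f(x; Theta) = w^T h(L), together with the compositional path norm used as the regularizer. The following is a minimal NumPy sketch of that forward map and norm, not the authors' code: the helper names are ours, the paper's experiments fix m = 1 and keep w frozen, and the l1 reduction of the final d-vector in the norm is our reading of a definition the appendix leaves implicit.

```python
import numpy as np

def init_params(d, L, m=1, zero_U=True, rng=None):
    """Paper's init: U^(l) = 0, V^(l)_ij ~ N(0, 1/m), w = (0,...,0,1) kept fixed,
    V^(0) = (I_d, 0)^T.  Pass zero_U=False to draw nonzero U's for a quick demo."""
    rng = np.random.default_rng(0) if rng is None else rng
    V0 = np.vstack([np.eye(d), np.zeros((1, d))])           # (d+1) x d
    scale = 1.0 / np.sqrt(m)
    Us = [np.zeros((d + 1, m)) if zero_U else rng.normal(0.0, scale, (d + 1, m))
          for _ in range(L - 1)]
    Vs = [rng.normal(0.0, scale, (m, d + 1)) for _ in range(L - 1)]
    w = np.zeros(d + 1)
    w[-1] = 1.0
    return V0, Us, Vs, w

def resnet_forward(x, params):
    """f(x; Theta) = w^T h^(L), with h^(l+1) = h^(l) + U^(l) relu(V^(l) h^(l))."""
    V0, Us, Vs, w = params
    h = V0 @ x
    for U, V in zip(Us, Vs):
        h = h + U @ np.maximum(V @ h, 0.0)
    return w @ h

def path_norm(params):
    """Compositional norm |w|^T prod_l (I + |U^(l)||V^(l)|) |V^(0)|; the appendix
    leaves the reduction of this d-vector implicit, so we take its l1 sum."""
    V0, Us, Vs, w = params
    v = np.abs(w)
    for U, V in zip(reversed(Us), reversed(Vs)):             # layer L-1 down to 1
        v = v @ (np.eye(len(v)) + np.abs(U) @ np.abs(V))
    return float(np.sum(v @ np.abs(V0)))

params = init_params(d=4, L=10, zero_U=False)
x = np.array([0.3, -0.1, 0.7, 0.5])
print(resnet_forward(x, params), path_norm(params))
```

At the paper's stated initialization (all U(l) = 0) both the output and the norm collapse onto the skip path, so the demo call passes zero_U=False only to produce nonzero numbers.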
{
"id": "ewpd_lhYouD",
"year": null,
"venue": "CoRR 2018",
"pdf_link": "http://arxiv.org/pdf/1807.00297v1",
"forum_link": "https://openreview.net/forum?id=ewpd_lhYouD",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Exponential Convergence of the Deep Neural Network Approximation for Analytic Functions",
"authors": [
"Weinan E",
"Qingcan Wang"
],
"abstract": "We prove that for analytic functions in low dimension, the convergence rate of the deep neural network approximation is exponential.",
"keywords": [],
"raw_extracted_content": "arXiv:1807.00297v1 [cs.LG] 1 Jul 2018Exponential Convergence of the Deep Neural Network\nApproximation for Analytic Functions\nWeinan E1,2,3and Qingcan Wang4\n1Department of Mathematics and PACM, Princeton University\n2Center for Big Data Research, Peking University\n3Beijing Institute of Big Data Research\n4PACM, Princeton University\nDedicated to Professor Daqian Li on the occasion of his 80th b irthday\nAbstract\nWe prove that for analytic functions in low dimension, the co nvergence rate of the deep neural network\napproximation is exponential.\n1 Introduction\nThe approximation properties of deep neural network models is among the most tantalizing problems in\nmachine learning. It is widely believed that deep neural net work models are more accurate than shallow ones.\nYet convincing theoretical support for such a speculation i s still lacking. Existing work on the superiority of\nthe deep neural network models are either for very special fu nctions such as the examples given in Telgarsky\n(2016), or special classes of functions such as the ones havi ng a specific compositional structure. For the latter,\nthe most notable are the results proved by Poggio et al. (2017 ) that the approximation error for deep neural\nnetwork models is exponentially better than the error for th e shallow ones for a class of functions with specific\ncompositional structure. However, given a general functio nf, one cannot calculate the distance from fto\nsuch class of functions. In the more general case, Yarotsky ( 2017) considered Cβ-differentiable functions,\nand proved that the number of parameters needed to achieve an error tolerance of εisO(ε−d/βlog1/ε).\nMontanelli and Du (2017) considered functions in Koborov sp ace. Using connection with sparse grids, they\nproved that the number of parameters needed to achieve an err or tolerance of εisO(ε−1/2(log1/ε)d).\nFor shallow networks, there has been a long history of provin g the so-called universal approximation\ntheorem, going back to the 1980s (Cybenko, 1989). For networ ks with one hidden layer, Barron (1993)\nestablished a convergence rate of O(n−1/2)wherenis the number of hidden nodes. Such universal approxi-\nmation theorems can also be proved for deep networks. Lu et al . (2017) considered networks of width d+4\nfor functions in ddimension, and proved that these networks can approximate a ny integrable function with\nsufficient number of layers. However, they did not give the con vergence rate with respect to depth. To fill\nin this gap, we give a simple proof that the same kind of conver gence rate for shallow networks can also be\nproved for deep networks.\nThe main purpose of this paper is to prove that for analytic fu nctions, deep neural network approximations\nconverges exponentially fast. The convergence rate deteri orates as a function of the dimensionality of the\nproblem. Therefore the present result is only of significanc e in low dimension. However, this result does\nreveal a real superior approximation property of the deep ne tworks for a wide class of functions.\nSpecifically this paper contains the following contributio ns:\n1. We construct neural networks with fixed width d+4to approximate a large class of functions where\nthe convergence rate can be established.\n2. For analytic functions, we obtain exponential convergen ce rate, i.e. 
the depth needed only depends on\nlog1/εinstead of εitself.\n1\n2 The setup of the problem\nWe begin by defining the network structure and the distance us ed in this paper, and proving the corresponding\nproperties for the addition and composition operations.\nWe will use the following notations:\n1. Colon notation for subscript: let {xm:n}={xi:i=m,m+1,...,n}and{xm1:n1,m2:n2}={xi,j:i=\nm1,m1+1,...,n 1,j=m2,m2+1,...,n 2}.\n2. Linear combination: denote y∈ L(x1,...,x n)if there exist βi∈R,i= 1,...,n , such that y=\nβ0+β1x1+···+βnxn.\n3. Linear combination with ReLU activation: denote ˜y∈˜L(x1,...,x n)if there exists y∈ L(x1,...,x n)\nand˜y= ReLU( y) = max( y,0).\nDefinition 1. Given a function f(x1,...,x d), if there exist variables {y1:L,1:M}such that\ny1,m∈˜L(x1:d), yl+1,m∈˜L(x1:d,yl,1:M), f∈ L(x1:d,y1:L,1:M), (1)\nwherem= 1,...,M ,l= 1,...,L−1, thenfis said to be in the neural nets class FL,M/parenleftbig\nRd/parenrightbig\n, and{y1:L,1:M}\nis called a set of hidden variables off.\nA function f∈ FL,Mcan be regarded as a neural net with skip connections from the input layer to the\nhidden layers, and from the hidden layers to the output layer . This representation is slightly different from\nthe one in standard fully-connected neural networks where c onnections only exist between adjacent layers.\nHowever, we can also easily represent such fusing a standard network without skip connection.\nProposition 2. A function f∈ FL,M/parenleftbig\nRd/parenrightbig\ncan be represented by a ReLU network with depth L+ 1and\nwidthM+d+1.\nProof. Let{y1:L,1:M}be the hidden variables of fthat satisfies (1), where\nf=α0+d/summationdisplay\ni=1αixi+L/summationdisplay\nl=1M/summationdisplay\nm=1βl,myl,m.\nConsider the following variables {h1:L,1:M}:\nhl,1:M=yl,1:M, hl,M+1:M+d=x1:d\nforl= 1,...,L , and\nh1,M+d+1=α0+d/summationdisplay\ni=1αixi, hl+1,M+d+1=hl,M+d+1+M/summationdisplay\nm=1βl,mhl,m\nforl= 1,...,,L−1. One can see that h1,m∈˜L(x1:d),hl+1,m∈˜L(hl,1:M+d+1),m= 1,...,M+d+ 1,\nl= 1,...,L−1, andf∈ L(hL,1:M+d+1), which is a representation of a standard neural net.\nProposition 3. (Addition and composition of neural net class FL,M)\n1.\nFL1,M+FL2,M⊆ FL1+L2,M, (2)\ni.e. iff1∈ FL1,M/parenleftbig\nRd/parenrightbig\nandf2∈ FL2,M/parenleftbig\nRd/parenrightbig\n, thenf1+f2∈ FL1+L2,M.\n2.\nFL2,M◦FL1,M+1⊆ FL1+L2,M+1, (3)\ni.e iff1(x1,...,x d)∈ FL1,M+1/parenleftbig\nRd/parenrightbig\nandf2(y,x1,...,x d)∈ FL2,M/parenleftbig\nRd+1/parenrightbig\n, then\nf2(f1(x1,...,x d),x1,...,x d)∈ FL1+L2,M+1/parenleftbig\nRd/parenrightbig\n.\n2\nProof. For the addition property, denote the hidden variables of f1andf2as{y(1)\n1:L1,1:M}and{y(2)\n1:L2,1:M}.\nLet\ny1:L1,1:M=y(1)\n1:L1,1:M, yL1+1:L1+L2,1:M=y(2)\n1:L2,1:M.\nBy definition, {y1:L1+L2,1:M}is a set of hidden variables of f1+f2. Thusf1+f2∈ FL1+L2,M.\nFor the composition property, let the hidden variables of f1andf2as{y(1)\n1:L1,1:M+1}and{y(2)\n1:L2,1:M}. Let\ny1:L1,1:M+1=y(1)\n1:L1,1:M+1, yL1+1:L1+L2,1:M=y(2)\n1:L2,1:M,\nyL1+1,M+1=yL1+2,M+1=···=yL1+L2,M+1=f1(x1,...,x d).\nOne can see that {y1:L1+L2,1:M+1}is a set of hidden variables of f2(f1(x),x), thus the composition property\n(3) holds.\nDefinition 4. Given a continuous function ϕ(x),x∈[−1,1]dand a continuous function class F/parenleftbig\n[−1,1]d/parenrightbig\n,\ndefine the L∞distance\ndist(ϕ,F) = inf\nf∈Fmax\nx∈[−1,1]d|ϕ(x)−f(x)|. (4)\nProposition 5. (Addition and composition properties for distance function )\n1. Letϕ1andϕ2be continuous functions. 
Let F1andF2be two continuous function classes, then\ndist(α1ϕ1+α2ϕ2,F1+F2)≤ |α1|dist(ϕ1,F1)+|α2|dist(ϕ2,F2), (5)\nwhereα1andα2are two real numbers.\n2. Assume that ϕ1(x) =ϕ1(x1,...,x d),ϕ2(y,x) =ϕ2(y,x1,...,x d)satisfyϕ1/parenleftbig\n[−1,1]d/parenrightbig\n⊆[−1,1]. Let\nF1/parenleftbig\n[−1,1]d/parenrightbig\n,F2/parenleftbig\n[−1,1]d+1/parenrightbig\nbe two continuous function classes, then\ndist(ϕ2(ϕ1(x),x),F2◦F1)≤Lϕ2dist(ϕ1,F1)+dist( ϕ2,F2), (6)\nwhereLϕ2is the Lipschitz norm of ϕ2with respect to y.\nProof. The additional property obviously holds. Now we prove the co mposition property. For any f1∈ F1,\nf2∈ F2, one has\n|ϕ2(ϕ1(x),x)−f2(f1(x),x)| ≤ |ϕ2(ϕ1(x),x)−ϕ2(f1(x),x)|+|ϕ2(f1(x),x)−f2(f1(x),x)|\n≤Lϕ2/bardblϕ1(x)−f1(x)/bardbl∞+/bardblϕ2(y,x)−f2(y,x)/bardbl∞.\nTakef⋆\n1= argminf/bardblϕ1(x)−f(x)/bardbl∞andf⋆\n2= argminf/bardblϕ2(y,x)−f(y,x)/bardbl∞, then\n|ϕ2(ϕ1(x),x)−f⋆\n2(f⋆\n1(x),x)| ≤Lϕ2dist(ϕ1,F1)+dist( ϕ2,F2),\nthus the composition property (6) holds.\nNow we are ready to state the main theorem for the approximati on of analytic functions.\nTheorem 6. Letfbe an analytic function over (−1,1)d. Assume that the power series f(x) =/summationtext\nk∈Ndakxk\nis absolutely convergent in [−1,1]d. Then for any δ >0, there exists a function ˆfthat can be represented by\na deep ReLU network with depth Land width d+4, such that\n|f(x)−ˆf(x)|<2/summationdisplay\nk∈Nd|ak|·exp/parenleftBig\n−dδ/parenleftBig\ne−1L1/2d−1/parenrightBig/parenrightBig\n(7)\nfor allx∈[−1+δ,1−δ]d.\n3\n3 Proof\nThe construction of ˆfis motivated by the approximation of the square function ϕ(x) =x2and multiplication\nfunction ϕ(x,y) =xyproposed in Yarotsky (2017), Liang and Srikant (2016). We us e this as the basic\nbuilding block to construct approximations to monomials, p olynomials, and analytic functions.\nLemma 7. The function ϕ(x) =x2,x∈[−1,1]can be approximated by deep neural nets with an exponential\nconvergence rate:\ndist/parenleftbig\nϕ=x2,FL,2/parenrightbig\n≤2−2L. (8)\nProof. Consider the function\ng(y) =/braceleftBigg\n2y, 0≤y <1/2,\n2(1−y),1/2≤y≤1,\ntheng(y) =y−4ReLU(y−1/2)in[0,1]. Define the hidden variables {y1:L,1:2}as follows:\ny1,1= ReLU( x), y 1,2= ReLU( −x),\ny2,1= ReLU( y1,1+y1,2), y 2,2= ReLU( y1,1+y1,2−1/2),\nyl+1,1= ReLU(2 yl,1−4yl,2), yl+1,2= ReLU(2 yl,1−4yl,2−1/2)\nforl= 2,3,...,L−1. Using induction, one can see that |x|=y1,1+y1,2andgl(|x|) =g◦g◦···◦g/bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipupright\nl(|x|) =\n2yl+1,1−4yl+1,2,l= 1,...,L−1forx∈[−1,1]. Now let\nf(x) =|x|−L−1/summationdisplay\nl=1gl(|x|)\n22l,\nthenf∈ FL,2, and/vextendsingle/vextendsinglex2−f(x)/vextendsingle/vextendsingle≤2−2Lforx∈[−1,1].\nLemma 8. For multiplication function ϕ(x,y) =xy, we have\ndist(ϕ=xy,F3L,2)≤3·2−2L. (9)\nProof. Notice that\nϕ=xy= 2/parenleftbiggx+y\n2/parenrightbigg2\n−1\n2x2−1\n2y2.\nApplying the addition properties (2)(5) and lemma 7, we obta in (9).\nNow we use the multiplication function as the basic block to c onstruct monomials and polynomials.\nLemma 9. For a monomial Mp(x)ofdvariables with degree p, we have\ndist/parenleftbig\nMp,F3(p−1)L,3/parenrightbig\n≤3(p−1)·2−2L. (10)\nProof. LetMp(x) =xi1xi2···xip,i1,...,ip∈ {1,...,d}. Using induction, assume that the lemma holds for\nthe degree- pmonomial Mp, consider a degree- (p+1)monomial Mp+1(x) =Mp(x)·xip+1. Letϕ(y,x) =yx,\nthenMp+1(x) =ϕ(Mp(x),xip+1). 
From composition properties (3)(6) and lemma 8, we have\ndist(Mp+1,F3pL,3)≤dist/parenleftbig\nϕ(Mp(x),xip+1),F3L,2◦F3(p−1)L,3/parenrightbig\n≤Lϕdist/parenleftbig\nMp,F3(p−1)L,3/parenrightbig\n+dist(ϕ,F3L,2)≤3p·2−2L.\nNote that the Lipschitz norm Lϕ= 1sincexip+1∈[−1,1].\nLemma 10. For a degree- ppolynomial Pp(x) =/summationtext\n|k|≤pakxk,x∈[−1,1]d,k= (k1,...,k d)∈Nd, we have\ndist/parenleftBig\nPp,F(p+d\nd)(p−1)L,3/parenrightBig\n<3(p−1)·2−2L/summationdisplay\n|k|≤p|ak|. (11)\n4\nProof. The lemma can be proved by applying the addition property (2) (5) and lemma 9.\nNote that the number of monomials of dvariables with degree less or equal to pis/parenleftbigp+d\nd/parenrightbig\n.\nNow we are ready to prove theorem 6.\nProof. Letε= exp/parenleftbig\n−dδ/parenleftbig\ne−1L1/2d−1/parenrightbig/parenrightbig\n, thenL=/bracketleftbig\ne/parenleftbig1\ndδlog1\nε+1/parenrightbig/bracketrightbig2d. Without loss of generality, assume/summationtext\nk|ak|= 1. We will show that there exists ˆf∈ FL,3such that /bardblf−ˆf/bardbl∞<2ε.\nDenote\nf(x) =Pp(x)+R(x):=/summationdisplay\n|k|≤pakxk+/summationdisplay\n|k|>pakxk.\nForx∈[−1+δ,1−δ]d, we have |R(x)|<(1−δ)p, thus truncation to p=1\nδlog1\nεwill ensure |R(x)|< ε.\nFrom lemma 10, we have dist(Pp,FL,3)<3(p−1)·2−2L′, where\nL′=L/parenleftbiggp+d\np/parenrightbigg−1\n(p−1)−1> L/bracketleftbigg(p+d)d\n(d/e)d/bracketrightbigg−1\np−1\n=L/bracketleftbigg\ne/parenleftbigg1\ndδlog1\nε+1/parenrightbigg/bracketrightbigg−d/parenleftbigg1\nδlog1\nε/parenrightbigg−1\n=/bracketleftbigg\ne/parenleftbigg1\ndδlog1\nε+1/parenrightbigg/bracketrightbiggd/parenleftbigg1\nδlog1\nε/parenrightbigg−1\n≫log1\nε+log1\nδ\nford≥2andε≪1, thendist(Pp,FL,3)<3(p−1)·2−2L′≪ε. Thus there exists ˆf∈ FL,3such that\n/bardblPp−ˆf/bardbl∞< ε, and/bardblf−ˆf/bardbl∞≤ /bardblf−Pp/bardbl∞+/bardblPp−ˆf/bardbl∞<2ε.\nOne can also formulate theorem 6 as follows:\nCorollary 11. Assume that the analytic function f(x) =/summationtext\nk∈Ndakxkis absolutely convergent in [−1,1]d.\nThen for any ε,δ >0, there exists a function ˆfthat can be represented by a deep ReLU network with depth\nL=/bracketleftbig\ne/parenleftbig1\ndδlog1\nε+1/parenrightbig/bracketrightbig2dand width d+4, such that |f(x)−ˆf(x)|<2ε/summationtext\nk|ak|for allx∈[−1+δ,1−δ]d.\n4 The convergence rate for the general case\nHere we prove that for deep neural networks, the approximati on error decays like O/parenleftbig\n(N/d)−1/2/parenrightbig\nwhereN\nis the number of parameters in the model. The proof is quite si mple but the result does not seem to be\navailable in the existing literature.\nTheorem 12. Given a function f:Rd→Rwith Fourier representation\nf(x) =/integraldisplay\nRdeiω·xˆf(ω)dω,\nand a compact domain B⊂Rdcontaining 0, let\nCf,B=/integraldisplay\nB|ω|B|ˆf(ω)|dω,\nwhere|ω|B= supx∈B|ω·x|. Then there exists a ReLU network fL,Mwith width M+d+1and depth L,\nsuch that/integraldisplay\nB|f(x)−fL,M(x)|2dµ(x)≤8C2\nf,B\nML, (12)\nwhereµis an arbitrary probability measure.\n5\nHere the number of parameters\nN= (d+1)(M+d+1)+(M+d+2)(M+d+2)(L−1)+(M+d+2) =O/parenleftbig\n(M+d)2L/parenrightbig\n.\nTakingM=d, we will have L=O/parenleftbig\nN/d2/parenrightbig\nand the convergence rate becomes O/parenleftbig\n(ML)−1/2/parenrightbig\n=O/parenleftbig\n(N/d)−1/2/parenrightbig\n.\nNote that in the universal approximation theorem for shallo w networks with one hidden layer , one can\nprove the same convergence rate O(n−1/2) =O((N/d)−1/2). 
Herenis the number of hidden nodes and\nN= (d+2)n+1is the number of parameters.\nTheorem 12 is a direct consequence of the following theorem b y Barron (1993) for networks with one\nhidden layer and sigmoidal type of activation function. Her e a function σissigmoidal if it is bounded\nmeasurable on Rwithσ(+∞) = 1 andσ(−∞) = 0.\nTheorem 13. Given a function fand a domain Bsuch that Cf,Bfinite, given a sigmoidal function σ, there\nexists a linear combination\nfn(x) =n/summationdisplay\nj=1cjσ(aj·x+bj)+c0,aj∈Rd, bj,cj∈R,\nsuch that/integraldisplay\nB|f(x)−fn(x)|2dµ(x)≤4C2\nf,B\nn. (13)\nNotice that\nσ(z) = ReLU( z)−ReLU(z−1)\nis sigmoidal, so we have:\nCorollary 14. Given a function fand a set BwithCf,Bfinite, there exists a linear combination of nReLU\nfunctions\nfn(x) =n/summationdisplay\nj=1cjReLU(aj·x+bj)+c0,\nsuch that/integraldisplay\nB|f(x)−fn(x)|2dµ(x)≤8C2\nf,B\nn.\nNext we convert this shallow network to a deep one.\nLemma 15. Letfn:Rd→Rbe a ReLU network with one hidden layer (as shown in the previou s corollary).\nFor any decomposition n=m1+···+mL,nk∈N⋆,fncan also be represented by a ReLU network with L\nhidden layers, where the l-th layer has ml+d+1nodes.\nProof. Denote the input by x= (x1,...,x d). We construct a network with Lhidden layers in which the l-th\nlayer has ml+d+1nodes{hl,1:ml+d+1}. Similar to the construction in proposition 2, let\nhL,1:d=hL−1,1:d=···=h1,1:d=x1:d, hl,d+j= ReLU( al,j·x+bl,j)\nforj= 1,...,m l,l= 1,...,L , and\nh1,d+m1+1=c0, hl+1,d+ml+1+1=hl,d+ml+1+ml/summationdisplay\nj=1cl,jhl,d+j\nforl= 1,...,L−1. Here we use the notation al,j=am1+···+ml−1+j(the same for bl,jandcl,j). One can\nsee that h1,m∈ˆL(x1:d),hl+1,m∈ˆL(hl,1:d+ml+1),m= 1,...,d+ml+1,l= 1,...,L−1and\nhl,d+ml+1=c0+m1+···+ml−1/summationdisplay\nj=1cjReLU(aj·x+bj).\n6\nThus\nfn=hL,d+mL+1+mL/summationdisplay\nj=1cL,jhL,d+j∈ L(hL,1:d+mL+1)\ncan be represented by this deep network.\nNow consider a network with Llayers where each layer has the same width M+d+1. From lemma 15,\nthis network is equivalent to a one-layer network with MLhidden nodes. Apply corollary 14, we obtain the\ndesired approximation result for deep networks stated in th eorem 12.\nAcknowledgement. We are grateful to Chao Ma for very helpful discussions durin g the early stage of\nthis work. We are also grateful to Jinchao Xu for his interest , which motivated us to write up this paper.\nThe work is supported in part by ONR grant N00014-13-1-0338 a nd Major Program of NNSFC under grant\n91130005.\nReferences\nAndrew R Barron. Universal approximation bounds for superp ositions of a sigmoidal function. IEEE\nTransactions on Information theory , 39(3):930–945, 1993.\nGeorge Cybenko. Approximation by superpositions of a sigmo idal function. Mathematics of control, signals\nand systems , 2(4):303–314, 1989.\nShiyu Liang and R Srikant. Why deep neural networks for funct ion approximation? arXiv preprint\narXiv:1610.04161 , 2016.\nZhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei W ang. The expressive power of neural\nnetworks: A view from the width. In Advances in Neural Information Processing Systems , pages 6232–\n6240, 2017.\nHadrien Montanelli and Qiang Du. Deep relu networks lessen t he curse of dimensionality. arXiv preprint\narXiv:1712.08688 , 2017.\nTomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brand o Miranda, and Qianli Liao. Why and when\ncan deep-but not shallow-networks avoid the curse of dimens ionality: A review. International Journal of\nAutomation and Computing , 14(5):503–519, 2017.\nMatus Telgarsky. 
Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485 , 2016.\nDmitry Yarotsky. Error bounds for approximations with deep relu networks. Neural Networks , 94:103–114,\n2017.\n7",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
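Lemma 7 of the record above approximates x^2 on [−1, 1] by iterating the hat function g(y) = 2y on [0, 1/2] and 2(1 − y) on [1/2, 1], with error at most 2^(−2L) for a width-2, depth-L construction. Below is a small numerical sanity check of that bound, a sketch rather than the ReLU network itself (we evaluate the piecewise-linear functions directly; the names g and square_approx are ours).

```python
import numpy as np

def g(y):
    """Hat function from Lemma 7: 2y on [0, 1/2], 2(1 - y) on [1/2, 1]."""
    return np.where(y < 0.5, 2.0 * y, 2.0 * (1.0 - y))

def square_approx(x, L):
    """f(x) = |x| - sum_{l=1}^{L-1} g^l(|x|) / 2^(2l), the depth-L approximation of x^2."""
    t = np.abs(x)
    out = t.copy()
    gt = t.copy()
    for l in range(1, L):       # L - 1 sawtooth correction terms
        gt = g(gt)              # g composed with itself l times
        out -= gt / 4.0 ** l
    return out

L = 6
x = np.linspace(-1.0, 1.0, 10001)
err = np.max(np.abs(square_approx(x, L) - x ** 2))
print(err, 2.0 ** (-2 * L))     # observed error vs. the 2^(-2L) bound of Lemma 7
```

With L = 6 the printed error should come out at roughly 2.4e-4, essentially saturating the 2^(−12) bound, since the construction coincides with piecewise-linear interpolation of t^2 on a dyadic grid.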
{
"id": "o4Na7Bxf3tw",
"year": null,
"venue": "CoRR 2019",
"pdf_link": "http://arxiv.org/pdf/1903.02154v2",
"forum_link": "https://openreview.net/forum?id=o4Na7Bxf3tw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Priori Estimates of the Population Risk for Residual Networks",
"authors": [
"Weinan E",
"Chao Ma",
"Qingcan Wang"
],
"abstract": "Optimal a priori estimates are derived for the population risk, also known as the generalization error, of a regularized residual network model. An important part of the regularized model is the usage of a new path norm, called the weighted path norm, as the regularization term. The weighted path norm treats the skip connections and the nonlinearities differently so that paths with more nonlinearities are regularized by larger weights. The error estimates are a priori in the sense that the estimates depend only on the target function, not on the parameters obtained in the training process. The estimates are optimal, in a high dimensional setting, in the sense that both the bound for the approximation and estimation errors are comparable to the Monte Carlo error rates. A crucial step in the proof is to establish an optimal bound for the Rademacher complexity of the residual networks. Comparisons are made with existing norm-based generalization error bounds.",
"keywords": [],
"raw_extracted_content": "arXiv:1903.02154v2 [cs.LG] 31 May 2019A Priori Estimates of the Population Risk for Residual Netwo rks\nWeinan E1,2,3, Chao Ma2, and Qingcan Wang2\n1Department of Mathematics, Princeton University\n2Program in Applied and Computational Mathematics, Princeton Unive rsity\n3Beijing Institute of Big Data Research\[email protected], {chaom,qingcanw }@princeton.edu\nAbstract\nOptimal a priori estimates are derived for the population ri sk, also known as the generalization\nerror, of a regularized residual network model. An importan t part of the regularized model is the usage\nof anewpathnorm, called the weighted path norm , as theregularization term. The weighted pathnorm\ntreats the skip connections and the nonlinearities differen tly so that paths with more nonlinearities are\nregularized by larger weights. The error estimates are a pri ori in the sense that the estimates depend\nonly on the target function, not on the parameters obtained i n the training process. The estimates\nare optimal, in a high dimensional setting, in the sense that both the bound for the approximation\nand estimation errors are comparable to the Monte Carlo erro r rates. A crucial step in the proof is to\nestablish an optimal bound for the Rademacher complexity of the residual networks. Comparisons are\nmade with existing norm-based generalization error bounds .\nKey words a priori estimate, residual network, weighted path norm\n1 Introduction\nOne of the major theoretical challenges in machine learning is to unde rstand, in a high dimensional setting,\nthe generalization error for deep neural networks, especially res idual networks [12] which have become one\nof the default choices for many machine learning tasks. Since the ne tworks used in practice are usually\nover-parameterized, many recent attempts have been made to d erive bounds that do not deteriorate as\nthe number of parameters grows. In this regard, the norm-base d bounds use some appropriate norms\nof the parameters to control the generalization error [17, 6, 11 , 5]. Other bounds based on the idea\nof compressing the networks [3] or the use of the Fisher-Rao info rmation [15] have also been proposed.\nWhile these generalization bounds differ in many ways, they have one t hing in common: they depend on\ninformation about the final parameters obtained in the training pro cess. Following [10], we call them a\nposteriori bounds. In this paper, we derive a priori estimates of the population risk for deep residual\nnetworks. Compared to the a posteriori estimates mentioned abo ve, our bounds depend only on the target\nfunction and the network structure (e.g. the depth and the width ). In addition, our bounds scale optimally\nwith the network depths and the size of the training data: the appr oximation error term scales as O(1/L)\nwith the depth L, while the estimation error term scales as O(1/√n) with the size of the training data n\n(independent of the depth), both are comparable to the Monte Ca rlo rate.\nOurinterestinderivingaprioriestimatescomesfromananalogywith finite elementmethods(FEM) [8,\n1]. Both a priori and a posteriorierror estimates are common in the theoretical analysisof FEM. In fact, in\nFEM a priori estimates appeared much earlier and are still more comm on than a posteriori estimates [8],\n1\ncontrary to the situation in machine learning. 
Although a priori boun ds can not be readily evaluated due\nto the fact that the required information about the target funct ion is not available to us, they provide\nmuch insight about the qualitative behavior of different methods. In the context of machine learning, they\nalso provide a qualitative comparison between different norms, as we show later. Most importantly, one\ncan only expect the generalization error to be small for certain clas s of target functions, a priori estimates\nis the most natural way to encode such information in the error ana lysis.\nThe second important point of our approach is to regularize the mod el. Even though regularization\nis quite common in machine learning, neural network models seem to pe rform quite well without explicit\nregularization, as long as one is good at tuning the hyper-paramete rs in the training process. For this\nreason, there has been some special interest in studying the so-c alled “implicit regularization” effect.\nNevertheless, we feel that the study of properly regularized mod els is still of interest, particularly in the\nover-parametrized regime, for several reasons:\n1. These regularized models are much more robust. In other words , one does not have to search for the\nbetter ones among all the global minimizers using excessive tuning.\n2. They allow us to get an idea about how small the test accuracy can be among all the global mini-\nmizers.\n3. Theycanpotentiallyhelpustofindgoodminimizers(intermsoftest accuracy)fortheun-regularized\nmodel.\nFor the case of two-layer neural network models, the analytical a nd practical advantages of a priori\nanalysis have already been demonstrated in [10]. It was shown there that optimal error rates can be\nestablished for appropriately regularized two-layer neural netwo rks models, and the accuracy of these\nmodels behaves in a much more robust fashion than the vanilla models w ithout regularization. In this\npaper, we set out to extend the work in [10] for shallow neural netw ork models to deep ones. We choose\nresidual network as a starting point.\nToderiveouraprioriestimate, wedesignanew parameter-basedn ormfordeep residualnetworkscalled\ntheweighted path norm , and use this norm as a regularization term to formulate a regularize d problem.\nUnlike traditional path norms, our weighted path norm puts more we ight on paths that go through more\nnonlinearities. In this way, we penalize paths with many nonlinearities a nd hence control the complexity\nof the functions represented by networks with a bounded norm. B y using the weighted path norm as the\nregularization term, we can strike a balance between the empirical r isk and the complexity of the model,\nand thus a balance between the approximation error and the estima tion error. This allows us to prove\nthat the minimizer of the regularized model has the optimal error ra te in terms of the population risk. A\ncomparison with existing parameter-based norms shows that it is no ntrivial to find such balance.\nThe rest of the paper is organized as follows. In Section 2, we set up the problem and state our main\ntheorem. We also sketch the main ideas in the proof. In Section 3 we g ive the full proof of the theorems.\nIn Section 4, we compare our result with related works and put thing s into perspective. Conclusions are\ndrawn in Section 5.\nNotations In this paper, we let Ω = [0 ,1]dbe the unit hypercube, and consider target functions with\ndomain Ω. 
Let πbe a probability measure on Ω, for any function f: Ω→R, let/bar⌈blf/bar⌈blbe thel2norm off\nbased onπ,\n/bar⌈blf/bar⌈bl2=/integraldisplay\nΩf2(x)π(dx). (1.1)\nLetσbe the ReLU activation function used in the neural network models: σ(x) = max{x,0}. For a vector\nx,σ(x) is a vector of the same size obtained by applying ReLU component-w ise.\n2\n2 Setup of the problem and the main theorem\n2.1 Setup\nWe consider the regression problem and residual networks with ReL U activation σ(·). Assume that the\ntarget function f∗: Ω→[0,1]. Let the training set be {(xi,yi)}n\ni=1, where the xi’s are independently\nsampled from an underlying distribution πandyi=f∗(xi). Later we will consider problems with noise.\nConsider the following residual network architecture with skip conn ection in each layer1\nh0=V x,\ngl=σ(Wlhl−1),\nhl=hl−1+Ulgl, l= 1,...,L,\nf(x;θ) =u⊺hL. (2.1)\nHere the set of parameters θ={V,Wl,Ul,u},V∈RD×d,Wl∈Rm×D,Ul∈RD×m,u∈RD,Lis the\nnumber of layers, mis the width of the residual blocks and Dis the width of skip connections. Note that\nwe omit the bias term in the network by assuming that the first elemen t of the input xis always 1.\nTo simplify the proof we will consider the truncated square loss\nℓ(x;θ) =/vextendsingle/vextendsingleT[0,1]f(x;θ)−f∗(x)/vextendsingle/vextendsingle2, (2.2)\nwhereT[0,1]is the truncation operator: for any function h(·)\nT[0,1]h(x) = min{max{h(x),0},1}. (2.3)\nThe truncated population risk and empirical risk functions are\nL(θ) =Ex∼πℓ(x;θ),ˆL(θ) =1\nnn/summationdisplay\ni=1ℓ(xi;θ), (2.4)\nRemark. The truncation is used in order to simplify the proof for the complexit y control (Theorem 2.10).\nOther truncation methods can also be used. For example, we can tr uncate the loss function ℓ, instead of\nf.\n2.2 Function space and norms\nIn this paper, we consider target functions belonging to the Barro n spaceB. The following definitions of\nthe Barron space and the corresponding norm are adopted from [1 0].\nDefinition 2.1 (Barron space) .LetSd−1be the unit sphere in Rd, andFbe the Borel σ-algebra on Sd−1.\nFor any function f: Ω→R, define the Barron norm offas\n/bar⌈blf/bar⌈blB= inf/bracketleftbigg/integraldisplay\nSd−1|a(ω)|2π(dω)/bracketrightbigg1/2\n, (2.5)\nwhere the infimum is taken over all measurable function a(ω) and probability distribution πon (Sd−1,F)\nthat satisfies\nf(x) =/integraldisplay\nSd−1a(ω)σ(ω⊺x)π(dω), (2.6)\n1In practice, residual networks may use skip connections eve ry several layers. We consider skip connections every layer\nfor the sake of simplicity. It is easy to extend the analysis t o the more general cases.\n3\nfor anyx∈Ω.\nTheBarron space Bis the set of continuous functions with finite Barron norm,\nB={f: Ω→R| /bar⌈blf/bar⌈blB<∞}. (2.7)\nThe Barron space is large enough to contain many functions of inter est. For example, it was shown\nin [13] that if a function has finite spectral norm, then it belongs to t he Barron space.\nDefinition 2.2 (Spectral norm) .Letf∈L2(Ω), and let F∈L2(Rd) be an extension of ftoRd, andˆF\nbe the Fourier transform of F. Define the spectral norm offas\nγ(f) = inf/integraldisplay\nRd/bar⌈blω/bar⌈bl2\n1|ˆF(ω)|dω, (2.8)\nwhere the infimum is taken over all possible extensions F.\nCorollary 2.3. Letf: Ω→Rbe a function that satisfies γ(f)<∞, then\n/bar⌈blf/bar⌈blB≤γ(f)<∞. (2.9)\nOn the other hand, for residual networks, we define the following p arameter-based norm to control the\nestimation error. 
We call this norm the weighted path norm since it is a weighted version of the l1path\nnorm studied in [16, 20].\nDefinition 2.4 (Weighted path norm) .Given a residual network f(·;θ) with architecture (2.1), define\ntheweighted path norm offas\n/bar⌈blf/bar⌈blP=/bar⌈blθ/bar⌈blP=/vextenddouble/vextenddouble|u|⊺(I+3|UL||WL|)···(I+3|U1||W1|)|V|/vextenddouble/vextenddouble\n1, (2.10)\nwhere|A|withAbeing a vector or matrix means taking the absolute values of all the e ntries of the vector\nor matrix.\nOur weighted path norm is a weighted sum over all paths in the neural network flowing from the input\nto the output, and gives larger weight to the paths that go throug h more nonlinearities. More precisely,\ngiven a path P, letwP\n1,wP\n2,...,wP\nLbe the weights on this path, let pbe the number of non-linearities that\nPgoes through. Then, it is straightforward to see that our weighte d path norm can also be expressed as\n/bar⌈blf/bar⌈blP=/summationdisplay\nPis activated3pL/productdisplay\nl=1|wP\nl|. (2.11)\nRemark. The advantage of our weighted path norm can be seen from an “effe ctive depth” viewpoint. It\nhas been observed that although residual networks can be veryd eep, most information is processedby only\na small number of nonlinearities. This has been explored for example in [19], where the authors observed\nnumerically that residual networks behave like ensembles of networ ks with fewer layers. Our weighted\npath norm naturally takes this into account.\n2.3 Main theorem\nTheorem 2.5 (A priori estimate) .Letf∗: Ω→[0,1]and assume that the residual network f(·;θ)has\narchitecture (2.1). Let nbe the number of training samples, Lbe the number of layers and mbe the width\nof the residual blocks. Let L(θ)andˆL(θ)be the truncated population risk and empirical risk defined i n\n(2.4) respectively; let /bar⌈blf/bar⌈blBbe the Barron norm of f∗and/bar⌈blθ/bar⌈blPbe the weighted path norm of f(·;θ)in\n4\nDefinition 2.1 and 2.4. For any λ≥4 + 2/[3/radicalbig\n2log(2d)], assume that ˆθis an optimal solution of the\nregularized model\nmin\nθJ(θ) :=ˆL(θ)+3λ/bar⌈blθ/bar⌈blP/radicalbigg\n2log(2d)\nn. (2.12)\nThen for any δ∈(0,1), with probability at least 1−δover the random training samples, the population\nrisk satisfies\nL(ˆθ)≤16/bar⌈blf/bar⌈bl2\nB\nLm+(12/bar⌈blf/bar⌈blB+1)3(4+λ)/radicalbig\n2log(2d)+2√n+4/radicalbigg\n2log(14/δ)\nn. (2.13)\nRemark. 1. The estimate is a priori in nature since the right hand side of (2.13) depends only on the\nBarron norm of the target function without knowing the norm of ˆθ.\n2. We want to emphasize that our estimate is nearly optimal. The first term in (2.13) shows that the\nconvergencerate with respect to the size of the neural network isO(1/(Lm)), which matches the rate\nin universal approximation theory for shallow networks [4]. The last t wo terms show that the rate\nwith respect to the number of training samples is O(1/√n), which matches the classical estimates\nof the generalization gap.\n3. The second term depends only on /bar⌈blf/bar⌈blBinstead of the network architecture, thus there is no need to\nincrease the sample size nwith respect to the network size parameters Landmto ensure that the\nmodel generalizes well. This is not the case for existing error bounds (see Section 4).\n2.4 Extension to the case with noise\nOuraprioriestimates can be extended to problemswith sub-gauss iannoise. Assume that yiin the training\ndata are given by yi=f∗(xi)+εiwhere{εi}are i.i.d. 
random variables such that Eεi= 0 and\nPr{|εi|>t} ≤ce−t2\n2σ2,∀t≥τ, (2.14)\nfor some constants c,σandτ. LetℓB(x;θ) =ℓ(x;θ)∧B2be the square loss truncated by B2, and define\nLB(θ) =Ex∼πℓB(x;θ),ˆLB(θ) =1\nnn/summationdisplay\ni=1ℓB(xi;θ). (2.15)\nThen, we have\nTheorem 2.6 (Aprioriestimatefornoisyproblems) .In addition to the conditions in Theorem 2.5, assume\nthat the noise satisfies (2.14). Let LB(θ)andˆLB(θ)be the truncated population risk and empirical risk\ndefined in (2.15). For B≥1 + max{τ,σ√logn}andλ≥4 + 2B/[3/radicalbig\n2log(2d)], assume that ˆθis an\noptimal solution of the regularized model\nmin\nθJ(θ) :=ˆL(θ)+λB/bar⌈blθ/bar⌈blP·3/radicalbigg\n2log(2d)\nn. (2.16)\nThen for any δ∈(0,1), with probability at least 1−δover the random training sample, the population risk\nsatisfies\nL(ˆθ)≤16/bar⌈blf/bar⌈bl2\nB\nLm+(12/bar⌈blf/bar⌈blB+1)3(4+λ)B/radicalbig\n2log(2d)+2B2\n√n+4B2/radicalbigg\n2log(14/δ)\nn+2c(4σ2+1)√n.(2.17)\nWe see that the a priori estimates for problems with noise only differ f rom that for problems without\nnoise by a logarithmic term. In particular, the estimates of the gene ralization errorare still nearly optimal.\n5\n2.5 Proof sketch\nWe prove the main theorem in 3 steps. We list the main intermediate res ults in this section, and leave the\nfull proof to Section 3.\nFirst, we show that any function fin the Barron space can be approximated by residual networks with\nincreasing depth or width, and with weighted path norm uniformly bou nded.\nTheorem 2.7. For any target function f∗∈ B, and anyL,m≥1, there exists a residual network f(·;˜θ)\nwith depthLand widthm, such that\n/bar⌈blf(x;˜θ)−f∗/bar⌈bl2≤16/bar⌈blf∗/bar⌈bl2\nB\nLm(2.18)\nand\n/bar⌈bl˜θ/bar⌈blP≤12/bar⌈blf∗/bar⌈blB.\nSecondly, we show that the weighted path norm helps to bound the R ademacher complexity. Since\nthe Rademacher complexity can bound the generalization gap, this g ives an a posteriori bound on the\ngeneralization error.\nRecall the definition of Rademacher complexity:\nDefinition 2.8 (Rademacher complexity) .Given a function class Hand sample set S={xi}n\ni=1, the\n(empirical) Rademacher complexity ofHwith respect to Sis defined as\nˆR(H) =1\nnEξ/bracketleftBigg\nsup\nh∈Hn/summationdisplay\ni=1ξih(xi)/bracketrightBigg\n, (2.19)\nwhere theξi’s are independent random variables with Pr {ξi= 1}= Pr{ξi=−1}= 1/2.\nIt is well-known that the Rademacher complexity can be used to cont rol the generalization gap [18].\nTheorem 2.9. Given a function class H, for anyδ∈(0,1), with probability at least 1−δover the random\nsamples{xi}n\ni=1,\nsup\nh∈H/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleEx[h(x)]−1\nnn/summationdisplay\ni=1h(xi)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle≤2ˆR(H)+2 sup\nh,h′∈H/bar⌈blh−h′/bar⌈bl∞/radicalbigg\n2log(4/δ)\nn. (2.20)\nThe following theorem is a crucial step in our analysis. It shows that t he Rademacher complexity of\nresidual networks can be controlled by the weighted path norm.\nTheorem 2.10. LetFQ={f(·;θ) :/bar⌈blθ/bar⌈blP≤Q}where thef(·,θ)’s are residual networks defined by (2.1).\nAssume that the samples {xi}n\ni=1⊂Ω, then we have\nˆR(FQ)≤3Q/radicalbigg\n2log(2d)\nn. (2.21)\nNote that the definition of FQdoes not specify the depth or width of the network. Consequently\nour Rademacher complexity bound does not depend on the depth an d width of the network. 
Hence, the\nresulted a-posteriori estimate has no dependence on Landmeither.\nTheorem 2.11 (A posteriori estimates) .Let/bar⌈blθ/bar⌈blPbe the weighted path norm of residual network f(·;θ).\nLetnbe the number of training samples. Let L(θ)andˆL(θ)be the truncated population risk and empirical\nrisk defined in (2.4). Then for any δ∈(0,1), with probability at least 1−δover the random training\nsamples, we have\n/vextendsingle/vextendsingle/vextendsingleL(θ)−ˆL(θ)/vextendsingle/vextendsingle/vextendsingle≤2(/bar⌈blθ/bar⌈blP+1)6/radicalbig\n2log(2d)+1√n+2/radicalbigg\n2log(7/δ)\nn. (2.22)\n6\nConsider the decomposition\nL(ˆθ)−L(˜θ) =/bracketleftBig\nL(ˆθ)−J(ˆθ)/bracketrightBig\n+/bracketleftBig\nJ(ˆθ)−J(˜θ)/bracketrightBig\n+/bracketleftBig\nJ(˜θ)−L(˜θ)/bracketrightBig\n. (2.23)\nRecall that ˆθis the optimal solution of the minimization problem (2.12), and ˜θcorresponds to the ap-\nproximation in Theorem 2.7.\nBy the definition of J(2.12),\nL(ˆθ)−J(ˆθ)≤/vextendsingle/vextendsingle/vextendsingleL(ˆθ)−ˆL(ˆθ)/vextendsingle/vextendsingle/vextendsingle−3λ/bar⌈blˆθ/bar⌈blP/radicalbigg\n2log(2d)\nn,\nJ(˜θ)−L(˜θ)≤/vextendsingle/vextendsingle/vextendsingleL(˜θ)−ˆL(˜θ)/vextendsingle/vextendsingle/vextendsingle+3λ/bar⌈bl˜θ/bar⌈blP/radicalbigg\n2log(2d)\nn.\nFrom the a posteriori estimate (2.22), both |L(ˆθ)−ˆL(ˆθ)|and|L(˜θ)−ˆL(˜θ)|are bounded with high\nprobability, thus both L(ˆθ)− J(ˆθ) andJ(˜θ)− L(˜θ) are bounded with high probability. In addition,\nJ(ˆθ)−J(˜θ)≤0, and the approximation result (2.18) bounds L(˜θ). Plugging all of the above into (2.23)\nwill give us the a priori estimates in Theorem 2.5.\nFor problems with noise, we can similarly bound LB(θ)− J(θ) instead of L(θ)− J(θ). Hence, to\nformulate an a priori estimate, we also need to control L(θ)−LB(θ). This is given by the following lemma:\nLemma 2.12. Assume that the noise εhas zero mean and satisfies (2.14), and B≥1+max/braceleftbig\nτ,σ√logn/bracerightbig\n.\nFor any θwe have\n|L(θ)−LB(θ)| ≤c(4σ2+1)√n. (2.24)\n3 Proof\n3.1 Approximation error\nFor the approximation error, [10] proved the following result for s hallow networks.\nTheorem 3.1. For any target function f∗∈ Band anyM≥1, there exists a two-layer network with\nwidthM, such that/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoubleM/summationdisplay\nj=1ajσ(b⊺\njx)−f∗(x)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble2\n≤16/bar⌈blf∗/bar⌈bl2\nB\nM(3.1)\nand\nM/summationdisplay\nj=1|aj|/bar⌈blbj/bar⌈bl1≤4/bar⌈blf∗/bar⌈blB. (3.2)\nWe have omitted writing out the bias term. This can be accommodated by assuming that the first\nelement of input xis always 1. For residual networks, we prove the approximation res ult (Theorem 2.7)\nby splitting the shallow network into several parts and stack them v ertically [9]. This is allowed by the\nspecial structure of residual networks.\nProof of Theorem 2.7. We construct a residual network f(·;˜θ) with input dimension d, depthL, widthm,\n7\nandD=d+1 using\nV=/bracketleftbigId0/bracketrightbig⊺,u=/bracketleftbig0 0···0 1/bracketrightbig⊺,\nWl=\nb⊺\n(l−1)m+10\nb⊺\n(l−1)m+20\n......\nb⊺\nlm0\n,Ul=\n0 0 ···0\n............\n0 0 ···0\na(l−1)m+1a(l−1)m+2···alm\n\nforl= 1,...,L. Then it is easy to verify that f(x;˜θ) =/summationtextLm\nj=1ajσ(b⊺\njx), and\n/bar⌈bl˜θ/bar⌈blP= 3Lm/summationdisplay\nj=1|aj|/bar⌈blbj/bar⌈bl1≤12/bar⌈blf∗/bar⌈blB.\n3.2 Rademacher complexity\nWe use the method of induction to bound the Rademacher complexity of residual networks. 
We first\nextend the definition of weighted path norm to hidden neurons in the residual network.\nDefinition 3.2. Given a residual network defined by (2.1), recall the definition of gl,\ngl(x) =σ(Wlhl−1), l= 1,...,L. (3.3)\nLetgi\nlbe thei-th element of gl, define the weighted path norm\n/bar⌈blgi\nl/bar⌈blP=/vextenddouble/vextenddouble/vextenddouble3|Wi,:\nl|(I+3|Ul−1||Wl−1|)···(I+3|U1||W1|)|V|/vextenddouble/vextenddouble/vextenddouble\n1, (3.4)\nwhereWi,:\nlis thei-th row of Wl.\nThe following lemma establishes the relationship between /bar⌈blf/bar⌈blPand/bar⌈blgi\nl/bar⌈blP. Lemma 3.4 gives properties\nof the corresponding function class.\nLemma 3.3. For the weighted path norm defined in (2.10) and (3.4), we have\n/bar⌈blf/bar⌈blP=L/summationdisplay\nl=1m/summationdisplay\nj=1/parenleftBig\n|u|⊺|U:,j\nl|/parenrightBig\n/bar⌈blgj\nl/bar⌈blP+/vextenddouble/vextenddouble|u|⊺|V|/vextenddouble/vextenddouble\n1, (3.5)\nand\n/bar⌈blgi\nl/bar⌈blP=l/summationdisplay\nk=1m/summationdisplay\nj=13/parenleftBig\n|Wi,:\nl||U:,j\nk|/parenrightBig\n/bar⌈blgj\nk/bar⌈blP+3/vextenddouble/vextenddouble|Wi,:\nl||V|/vextenddouble/vextenddouble\n1, (3.6)\nwhereU:,j\nlis thej-th column of Ul.\nProof.Recall the definition of /bar⌈blf/bar⌈blP, we have\n/bar⌈blf/bar⌈blP=/vextenddouble/vextenddouble|u|⊺(I+3|UL||WL|)···(I+3|U1||W1|)|V|/vextenddouble/vextenddouble\n1\n=/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoubleL/summationdisplay\nl=1|u|⊺|Ul|·3|Wl|l−1/productdisplay\nj=1(I+3|Ul−j||Wl−j|)|V|+|u|⊺|V|/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\n1\n=L/summationdisplay\nl=1m/summationdisplay\nj=1/parenleftBig\n|u|⊺|U:,j\nl|/parenrightBig\n/bar⌈blgj\nl/bar⌈blP+/vextenddouble/vextenddouble|u|⊺|V|/vextenddouble/vextenddouble\n1,\n8\nwhich gives (3.5). Similarly we obtain (3.6).\nLemma 3.4. LetGQ\nl={gi\nl:/bar⌈blgi\nl/bar⌈blP≤Q}, then\n1.GQ\nk⊆ GQ\nlfork≤l;\n2.Gq\nl⊆ GQ\nlandGq\nl=q\nQGQ\nlforq≤Q.\nProof.For anygk∈ GQ\nk, letV,{Uj,Wj}k\nj=1andwbe the parameters of gk, wherewis the vector of\nthe parameters in the output layer (the Wi,:\nkin the definition of gi\nl). Then, for any l≥k, considergl\ngenerated by parameters V,{Uj,Wj}l\nj=1andw, withUj= 0 and Wj= 0 for any k <j≤l. Now it is\neasy to verify that gl=gkand/bar⌈blgl/bar⌈blP=/bar⌈blgk/bar⌈blP≤Q. Hence, we have GQ\nk⊆ GQ\nl.\nOn the other hand, obviously we have Gq\nl⊆ GQ\nlfor anyq≤Q. For anygl∈ Gq\nl, define ˜glby replacing\nthe output parameters wbyQ\nqw, then we have /bar⌈bl˜gl/bar⌈blP=Q\nq/bar⌈blgl/bar⌈blP≤Q, and hence ˜ gl∈ GQ\nl. Therefore, we\nhaveQ\nqGq\nl⊆ GQ. Similarly we can obtainq\nQGQ\nl⊆ Gq. Consequently, we have Gq\nl=q\nQGQ\nl.\nWe will also use the following two lemmas about Rademacher complexity [1 8]. Lemma 3.5 bounds\nthe Rademacher complexity of linear functions, and Lemma 3.6 gives t he contraction property of the\nRademacher complexity.\nLemma 3.5. LetH={h(x) =u⊺x:/bar⌈blu/bar⌈bl1≤1}. Assume that the samples {xi}n\ni=1⊂Rd, then\nˆR(H)≤max\ni/bar⌈blxi/bar⌈bl∞/radicalbigg\n2log(2d)\nn. (3.7)\nLemma 3.6. Assume that φi,i= 1,...,nare Lipschitz continuous functions with uniform Lipschitz\nconstantLφ, i.e.,|φi(x)−φi(x′)| ≤Lφ|x−x′|fori= 1,...,n, then\nEξ/bracketleftBigg\nsup\nh∈Hn/summationdisplay\ni=1ξiφi(h(xi))/bracketrightBigg\n≤LφEξ/bracketleftBigg\nsup\nh∈Hn/summationdisplay\ni=1ξih(xi)/bracketrightBigg\n. (3.8)\nWith Lemma 3.3–3.6, we can come to prove Theorem 2.10.\nProof of Theorem 2.10. 
We first estimate the Rademacher complexity of $\mathcal G^Q_l$:
\[
\hat{\mathcal R}(\mathcal G^Q_l) \;\le\; Q\sqrt{\frac{2\log(2d)}{n}}. \tag{3.9}
\]
This is done by induction. By definition, $g^i_1(x) = \sigma(W_1^{i,:} V x)$. Hence, using Lemmas 3.5 and 3.6, we conclude that the statement (3.9) holds for $l=1$. Now, assume that the result holds for $1,2,\dots,l$. Then, for $l+1$ we have
\[
\begin{aligned}
n\hat{\mathcal R}(\mathcal G^Q_{l+1})
&= \mathbb{E}_\xi \sup_{g_{l+1}\in\mathcal G^Q_{l+1}} \sum_{i=1}^n \xi_i\, g_{l+1}(x_i) \\
&= \mathbb{E}_\xi \sup_{(1)} \sum_{i=1}^n \xi_i\, \sigma\!\big(w_{l+1}^\top (U_l g_l + U_{l-1} g_{l-1} + \cdots + U_1 g_1 + h_0)\big) \\
&\le \mathbb{E}_\xi \sup_{(1)} \sum_{i=1}^n \xi_i\, w_{l+1}^\top (U_l g_l + U_{l-1} g_{l-1} + \cdots + U_1 g_1 + h_0) \\
&\le \mathbb{E}_\xi \sup_{(2)} \Bigg\{ \sum_{k=1}^{l} a_k \sup_{g\in\mathcal G^1_k} \Big|\sum_{i=1}^n \xi_i g(x_i)\Big| + b \sup_{\|u\|_1\le 1} \Big|\sum_{i=1}^n \xi_i u^\top x_i\Big| \Bigg\} \\
&\le \mathbb{E}_\xi \sup_{\substack{a+b\le Q/3 \\ a,b\ge 0}} \Bigg\{ a \sup_{g\in\mathcal G^1_l} \Big|\sum_{i=1}^n \xi_i g(x_i)\Big| + b \sup_{\|u\|_1\le 1} \Big|\sum_{i=1}^n \xi_i u^\top x_i\Big| \Bigg\} \\
&\le \frac{Q}{3} \Bigg[ \mathbb{E}_\xi \sup_{g\in\mathcal G^1_l} \Big|\sum_{i=1}^n \xi_i g(x_i)\Big| + \mathbb{E}_\xi \sup_{\|u\|_1\le 1} \Big|\sum_{i=1}^n \xi_i u^\top x_i\Big| \Bigg],
\end{aligned}
\]
where condition (1) is $\sum_{k=1}^{l}\sum_{j=1}^{m} 3\big(|w_{l+1}|^\top |U_k^{:,j}|\big)\,\|g^j_k\|_P + 3\big\||w_{l+1}|^\top|V|\big\|_1 \le Q$, and condition (2) is $3\sum_{k=1}^{l} a_k + 3b \le Q$. 
The first inequality is due to the contraction lemma, while the third inequality is due to Lemma 3.4. On the one hand, we have
\[
\mathbb{E}_\xi \sup_{\|u\|_1\le 1}\Big|\sum_{i=1}^n \xi_i u^\top x_i\Big| = \mathbb{E}_\xi \sup_{\|u\|_1\le 1}\sum_{i=1}^n \xi_i u^\top x_i \;\le\; n\sqrt{\frac{2\log(2d)}{n}}.
\]
On the other hand, since $0\in\mathcal G^1_l$, for any $\{\xi_1,\dots,\xi_n\}$ we have
\[
\sup_{g\in\mathcal G^1_l}\sum_{i=1}^n \xi_i g(x_i) \;\ge\; 0.
\]
Hence, we have
\[
\sup_{g\in\mathcal G^1_l}\Big|\sum_{i=1}^n \xi_i g(x_i)\Big|
\le \max\Bigg\{\sup_{g\in\mathcal G^1_l}\sum_{i=1}^n \xi_i g(x_i),\; \sup_{g\in\mathcal G^1_l}\sum_{i=1}^n -\xi_i g(x_i)\Bigg\}
\le \sup_{g\in\mathcal G^1_l}\sum_{i=1}^n \xi_i g(x_i) + \sup_{g\in\mathcal G^1_l}\sum_{i=1}^n -\xi_i g(x_i),
\]
which gives
\[
\mathbb{E}_\xi \sup_{g\in\mathcal G^1_l}\Big|\sum_{i=1}^n \xi_i g(x_i)\Big| \;\le\; 2\,\mathbb{E}_\xi \sup_{g\in\mathcal G^1_l}\sum_{i=1}^n \xi_i g(x_i) \;=\; 2n\hat{\mathcal R}(\mathcal G^1_l).
\]
Therefore, we have
\[
\hat{\mathcal R}(\mathcal G^Q_{l+1}) \;\le\; \frac{Q}{3}\Bigg[2\sqrt{\frac{2\log(2d)}{n}} + \sqrt{\frac{2\log(2d)}{n}}\Bigg] \;\le\; Q\sqrt{\frac{2\log(2d)}{n}}.
\]
Similarly, based on the control of the Rademacher complexities of $\mathcal G^Q_1,\dots,\mathcal G^Q_L$, we get
\[
\hat{\mathcal R}(\mathcal F_Q) \;\le\; 3Q\sqrt{\frac{2\log(2d)}{n}}.
\]

3.3 A posteriori estimates

Proof of Theorem 2.11. Let $\mathcal H = \{\ell(\cdot;\theta) : \|\theta\|_P \le Q\}$. Notice that for all $x$,
\[
|\ell(x;\theta) - \ell(x;\theta')| \;\le\; 2\,|f(x;\theta) - f(x;\theta')|.
\]
By Lemma 3.6,
\[
\hat{\mathcal R}(\mathcal H) = \frac{1}{n}\mathbb{E}_\xi\Bigg[\sup_{\|\theta\|_P\le Q}\sum_{i=1}^n \xi_i \ell(x_i;\theta)\Bigg]
\le \frac{2}{n}\mathbb{E}_\xi\Bigg[\sup_{\|\theta\|_P\le Q}\sum_{i=1}^n \xi_i f(x_i;\theta)\Bigg]
= 2\hat{\mathcal R}(\mathcal F_Q).
\]
From Theorem 2.9, with probability at least $1-\delta$,
\[
\sup_{\|\theta\|_P\le Q}\big|L(\theta) - \hat L(\theta)\big|
\le 2\hat{\mathcal R}(\mathcal H) + 2\sup_{h,h'\in\mathcal H}\|h-h'\|_\infty\sqrt{\frac{2\log(4/\delta)}{n}}
\le 12Q\sqrt{\frac{2\log(2d)}{n}} + 2\sqrt{\frac{2\log(4/\delta)}{n}}. \tag{3.10}
\]
Now take $Q = 1,2,3,\dots$ and $\delta_Q = \frac{6\delta}{(\pi Q)^2}$. Then with probability at least $1-\sum_{Q=1}^\infty \delta_Q = 1-\delta$, the bound
\[
\sup_{\|\theta\|_P\le Q}\big|L(\theta) - \hat L(\theta)\big| \le 12Q\sqrt{\frac{2\log(2d)}{n}} + 2\sqrt{\frac{2}{n}\log\frac{2(\pi Q)^2}{3\delta}}
\]
holds for all $Q\in\mathbb{N}^*$. In particular, for a given $\theta$ the inequality holds with $Q = \lceil\|\theta\|_P\rceil < \|\theta\|_P + 1$, thus
\[
\begin{aligned}
\big|L(\theta) - \hat L(\theta)\big|
&\le 12(\|\theta\|_P+1)\sqrt{\frac{2\log(2d)}{n}} + 2\sqrt{\frac{2}{n}\log\frac{7(\|\theta\|_P+1)^2}{\delta}} \\
&\le 12(\|\theta\|_P+1)\sqrt{\frac{2\log(2d)}{n}} + 2\Bigg[\frac{\|\theta\|_P+1}{\sqrt{n}} + \sqrt{\frac{2\log(7/\delta)}{n}}\Bigg] \\
&= 2(\|\theta\|_P+1)\,\frac{6\sqrt{2\log(2d)}+1}{\sqrt{n}} + 2\sqrt{\frac{2\log(7/\delta)}{n}}.
\end{aligned}
\]

3.4 A priori estimates

Now we are ready to prove the main result, Theorem 2.5.

Proof of Theorem 2.5. Let $\hat\theta$ be the optimal solution of the regularized model (2.12), and let $\tilde\theta$ be the approximation in Theorem 2.7. Consider
\[
L(\hat\theta) = L(\tilde\theta) + \big[L(\hat\theta) - J(\hat\theta)\big] + \big[J(\hat\theta) - J(\tilde\theta)\big] + \big[J(\tilde\theta) - L(\tilde\theta)\big]. \tag{3.11}
\]
From (2.18) in Theorem 2.7, we have
\[
L(\tilde\theta) \;\le\; \frac{16\|f^*\|_{\mathcal B}^2}{Lm}. 
\]
(3.12)\nCompare the definition of Jin (2.12) and the gap L−ˆLin (2.22), with probability at least 1 −δ/2,\nL(ˆθ)−J(ˆθ)≤/parenleftBig\n/bar⌈blˆθ/bar⌈blP+1/parenrightBig3(4−λ)/radicalbig\n2log(2d)+2√n+3λ/radicalbigg\n2log(2d)\nn+2/radicalbigg\n2log(14/δ)\nn\n≤3λ/radicalbigg\n2log(2d)\nn+2/radicalbigg\n2log(14/δ)\nn(3.13)\nsinceλ≥4+2/[3/radicalbig\n2log(2d)]; with probability at least 1 −δ/2, we have\nJ(˜θ)−L(˜θ)≤/parenleftBig\n/bar⌈bl˜θ/bar⌈blP+1/parenrightBig3(4+λ)/radicalbig\n2log(2d)+2√n−3λ/radicalbigg\n2log(2d)\nn+2/radicalbigg\n2log(14/δ)\nn(3.14)\nThus with probability at least 1 −δ, (3.13) and (3.14) hold simultaneously. In addition, we have\nJ(ˆθ)−J(˜θ)≤0 (3.15)\nsinceˆθ= argminθJ(θ).\nNow plugging (3.12–3.15) into (3.11), and noticing that /bar⌈bl˜θ/bar⌈blP≤12/bar⌈blf∗/bar⌈blBfrom Theorem 2.7, we see\nthat the main theorem (2.13) holds with probability at least 1 −δ.\nFinally, wedealwiththecasewithnoiseandproveTheorem2.6. Forpr oblemswithnoise,wedecompose\nL(ˆθ)−L(˜θ) as\nL(ˆθ)−L(˜θ) =/bracketleftBig\nL(ˆθ)−LB(ˆθ)/bracketrightBig\n+/bracketleftBig\nLB(ˆθ)−JB(ˆθ)/bracketrightBig\n+/bracketleftBig\nJB(ˆθ)−JB(˜θ)/bracketrightBig\n+/bracketleftBig\nJB(˜θ)−LB(˜θ)/bracketrightBig\n+/bracketleftBig\nLB(˜θ)−L(˜θ)/bracketrightBig\n. (3.16)\nBased on the results we had for the case without noise, in (3.16) we o nly have to estimate the first and\nthe last terms. This is given by Lemma 2.12. Finally, we prove Lemma 2.12 .\nProof of Lemma 2.12. LetZ=f(x;θ)−f∗(x)−ε, then we have\n|L(θ)−LB(θ)|=E/bracketleftbig\n(Z2−B2)1|Z|≥B/bracketrightbig\n=/integraldisplay∞\n0Pr/braceleftbig\nZ2−B2≥t2/bracerightbig\ndt2\n=/integraldisplay∞\n0Pr/braceleftBig\n|Z| ≥/radicalbig\nB2+t2/bracerightBig\ndt2.\nAs 0≤f(x;θ)≤1 and 0≤f∗(x;θ)≤1, we have\n/integraldisplay∞\n0Pr/braceleftBig\n|Z| ≥/radicalbig\nB2+t2/bracerightBig\ndt2≤/integraldisplay∞\n0Pr/braceleftBig\n|ε| ≥/radicalbig\nB2+t2−1/bracerightBig\ndt2.\n12\nLets=√\nB2+t2, then\n/integraldisplay∞\n0Pr/braceleftBig\n|ε| ≥/radicalbig\nB2+t2−1/bracerightBig\ndt2≤/integraldisplay∞\nBce−(s−1)2\n2σ2ds2\n=/integraldisplay∞\nB−12ce−s2\n2σ2ds2+/integraldisplay∞\nB−14ce−s2\n2σ2ds\n≤4cσ2e−(B−1)2\n2σ2+/radicalbigg\n2\nπce−(B−1)2\n2σ2\n≤c(4σ2+1)√n.\n4 Comparison with norm-based a posteriori estimates\nDifferent norms have been used as a vehicle to bound the generalizat ion error of deep neural networks,\nincluding the group norm and path norm given in [17], the spectral nor m in [6], and the variational norm\nin [5]. In these works, the bounds for the generalization gap L(θ)−ˆL(θ) is derived from a Rademacher\ncomplexity bound of the set FQ={f(x;θ) :/bar⌈blθ/bar⌈blN≤Q}, as in Theorem 2.10, where /bar⌈blθ/bar⌈blNis some norm\nor value computed from the parameter θ. These estimates are a posteriori estimates. They are shown to\nbe valid once the complexity of FQis controlled.\nHowever, finding a set of functions with small complexity is not enoug h to explain the generalization\nof neural networks. The population risk contains two parts—the a pproximation error and the estimation\nerror. In general, the approximation error bounds require the hy pothesis space to be large enough and the\nestimation error bounds require the hypothesis space to be small e nough. A posteriori estimates only deal\nwith the estimation error. In a priori estimates, both effects are p resent and we have to strike a balance\nbetween approximation and estimation. In this sense, a priori estim ates can better reflect the quality of\nthe norm or the hypothesis space selected. 
Therefore in order to compare our estimates with previous\nresults, we turn the previous a posteriori estimates into a priori e stimates by building approximation error\nbounds for the other approaches that have been proposed in the same way as we did for ours. These\napproximation error bounds allow us to translate existing a posterio ri estimates to a priori estimates and\nthereby put previous results on the same footing as ours.\nTo start with, based on the analysis in Section 2 and 3, we provide a ge neral framework for establishing\na priori estimates from norm-based a posteriori estimates. It ho lds for both residual networks and deep\nfully-connected networks:\nf(x;θ) =WLσ(WL−1σ(···σ(W1x))) (4.1)\nwhereW1∈Rm×d,Wl∈Rm×m,l= 2,...,L−1 andWL∈R1×m, andmis the width of the network.\nLet/bar⌈blθ/bar⌈blNbe a general norm of the parameters θ, we make the following assumptions about /bar⌈blθ/bar⌈blN.\nAssumption 4.1. For any set of parameters θ, letf(·;θ)be a neural network associated with θ. Then,\nthere exists a function ψ(d,L,m), such that the Rademacher complexity of the set FQ\nL,m={f(·;θ) :/bar⌈blθ/bar⌈blN≤\nQ}can be bounded by\nˆR(FQ\nL,m)≤Q·ψ(d,L,m)√n, (4.2)\nwheredis the dimension of x,Landmare the neural network depth and width respectively.\nThe above Rademacher complexity bound implies the following a poster iori estimate.\n13\nTheorem 4.2 (A posteriori estimate) .Letnbe the number of training samples. Consider parameters θ\nof a network with depth Land widthm. LetL(θ)andˆL(θ)be the truncated population risk and empirical\nrisk defined in (2.4). Then for any δ∈(0,1), with probability at least 1−δover the random choice of\ntraining samples, we have\n/vextendsingle/vextendsingle/vextendsingleL(θ)−ˆL(θ)/vextendsingle/vextendsingle/vextendsingle≤2(/bar⌈blθ/bar⌈blN+1)2ψ(d,L,m)+1√n+2/radicalbigg\n2log(7/δ)\nn. (4.3)\nThe proof of Theorem 4.2 follows the same way as for the proof of Th eorem 2.11. With the a posteriori\nestimate, we obtain an a priori estimate by formulating a regularized problem, and comparing the solution\nof the regularized problem to a reference solution with good approx imation property.\nTheorem 4.3 (Aprioriestimate) .Under the same conditions as in Theorem 4.2, for λ≥4+2/ψ(d,L,m),\nassume that ˆθis an minimizer of the regularized model\nmin\nθJ(θ) :=ˆL(θ)+λ/bar⌈blθ/bar⌈blN·ψ(d,L,m)√n, (4.4)\nThen, for any δ∈(0,1), with probability at least 1−δover the random training samples,\nL(ˆθ)≤ L(˜θ)+/parenleftBig\n/bar⌈bl˜θ/bar⌈blN+1/parenrightBig(4+λ)ψ(d,L,m)+2√n+4/radicalbigg\n2log(14/δ)\nn. (4.5)\nwhere˜θis an arbitrary set of parameters for the same hypothesis spa ce.\nNext, we apply this general framework to the l1path norm [17], spectral complexity norm [6] and\nvariational norm [5]. 
The definitions of the norms are given below.\nl1path norm For a residual network defined by (2.1), the l1path norm [17] is defined as\n/bar⌈blθ/bar⌈bl=/vextenddouble/vextenddouble|u|⊺(I+|UL||WL|)···(I+|U1||W1|)|V|/vextenddouble/vextenddouble\n1, (4.6)\nSpectral complexity norm For a fully-connected network(4.1), the spectral complexity nor m proposed\nin [6] is given by\n/bar⌈blθ/bar⌈blN=/bracketleftBiggL/productdisplay\nl=1/bar⌈blWl/bar⌈blσ/bracketrightBigg/bracketleftBiggL/summationdisplay\nl=1/bar⌈blW⊺\nl/bar⌈bl2/3\n2,1\n/bar⌈blWl/bar⌈bl2/3\nσ/bracketrightBigg3/2\n, (4.7)\nwhere/bar⌈bl·/bar⌈blσdenotes the matrix spectral norm and /bar⌈bl·/bar⌈blp,qdenotes the ( p,q) matrix norm /bar⌈blW/bar⌈blp,q=\n/bar⌈bl(/bar⌈blW:,1/bar⌈blp,...,/bar⌈blW:,m/bar⌈blp)/bar⌈blq.\nVariational norm For a fully-connected network (4.1), the variational norm propos ed in [5] is\n/bar⌈blθ/bar⌈blN=1\nL√\nVL/summationdisplay\nl=1/summationdisplay\njl/radicalBig\nVin\njlVout\njl, (4.8)\nwhere\nV=/vextenddouble/vextenddouble|WL|···|W1|/vextenddouble/vextenddouble\n1,\nVin\njl=/vextenddouble/vextenddouble|Wjl,:\nl||Wl−1|···|W1|/vextenddouble/vextenddouble\n1,\nVout\njl=/vextenddouble/vextenddouble|WL|···|Wl+1||W:,jl\nl|/vextenddouble/vextenddouble\n1.\n14\nTable 1: Comparison of the a posteriori and a priori estimates for d ifferent norms\nNorm Weighted path norm l1path norm Spectral norm Variational norm\nA posteriori O/parenleftBig\n1√n/parenrightBig\nO/parenleftBig\n2L\n√n/parenrightBig\nO/parenleftBig\n1√n/parenrightBig\nO/parenleftBig\nL3/2\n√n/parenrightBig\nA priori O/parenleftBig\n1\nLm+1√n/parenrightBig\nO/parenleftBig\n1\nLm+2L\n√n/parenrightBig\nO/parenleftBig\n1\nLm+(Lm)3/2\n√n/parenrightBig\nO/parenleftBig\n1\nLm+L3/2√m√n/parenrightBig\nWhenapplyingTheorem4.3, forresidualnetworks,wechoose ˜θtobethesolutiongivenbyTheorem2.7,\nwhich is the same solution used in ourmain theorem in Section 2. For fully -connected networks, we slightly\nmodify the construction of ˜θ(see the appendix for details), such that the a priori estimates we obtain\nfor different norms all have the same approximation error. But as /bar⌈bl˜θ/bar⌈blNandψvary for different norms,\nthe estimation error comes out differently. To this end, let us recall the expressions of ψfor the norms\nmentioned above\nl1path norm : ψ(d,L,m) = 2L/radicalbig\n2log2m,\nSpectral norm : ψ(d,L,m) = 12logn/radicalbig\n2log2m,\nVariational norm : ψ(d,L,m) =Llogn/radicalbig\n(L−2)logm+log(8ed).\nOn the other hand, one can derive following bounds for /bar⌈bl˜θ/bar⌈blN(see the appendix for details):\nl1path norm : /bar⌈bl˜θ/bar⌈blN≤4/bar⌈blf∗/bar⌈blB,\nSpectral norm : /bar⌈bl˜θ/bar⌈blN≤16(Lm)3/2/bar⌈blf∗/bar⌈blB,\nVariational norm : /bar⌈bl˜θ/bar⌈blN≤4√m/bar⌈blf∗/bar⌈blB.\nPlugging the results above into Theorem 4.3, we get a priori estimate s of the regularized model using\ndifferent norms. The results are summarized in Table 1. They are sho wn in the order of L,mandn, the\nlogarithmic terms are ignored. The notation O(·) hides constants that depend only on the target function.\nWe see that the weighted path norm is the only one in which the second term in the a priori error bound\nscales cleanly as O(1/√n), i.e., it is independent of the depth L.\nNote that in Table 1 the standard l1path norm gives an a priori estimate with an exponential depen-\ndence onL, different from the case for the weighted path norm. To see why, c onsider a network f(·;θ)\nwithθ={V,Wl,Ul,u}. 
By the Rademacher complexity bound associated with the weighted path norm\n(2.21), this function is contained in a set with Rademacher complexity smaller than\nC1√n/vextenddouble/vextenddouble|u|⊺(I+3|UL||WL|)···(I+3|U1||W1|)|V|/vextenddouble/vextenddouble\n1. (4.9)\nOn the other hand, if we use the l1path norm, this function is contained in a set with Rademacher\ncomplexity smaller than\nC2√n/vextenddouble/vextenddouble|u|⊺(2I+2|UL||WL|)···(2I+2|U1||W1|)|V|/vextenddouble/vextenddouble\n1, (4.10)\nwhereC1andC2are constants. This gives rise to the exponential dependence. Th is is not the case in\n(4.9) as long as the weighted path norm is controlled.\nThe use of the variational norm eliminates the exponential depende nce for the complexity bound, but\nstill retains an algebraic dependence.\nThe story for the spectral norm is different. It was shown in [6] tha t the Rademacher complexity of the\nhypothesis space with bounded spectral norm has an optimal scalin g (1/√n). However, as the depth of\n15\nthe network goes to infinity, this hypothesis space shrinks to 0 if th e bound on the spectral norm is fixed.\nTherefore, in order to get the desired bound on the approximation error, one has to increase the bound\non the spectral norm (the value of Q). This again results in the Ldependence in the estimation error.\nWhen deriving the results in Table 1, we used a specific construction ˜θto control the approximation\nerror. Other constructions may exist. However, they will not cha nge the qualitative dependence of the\nestimation error, specifically the dependence (or the lack thereof ) onL,min the second term of these\nbounds, the term that controls the estimation error.\n5 Conclusion\nWe have shown that by designing proper regularized model, one can g uarantee optimal rate of the popula-\ntion risk for deep residual networks. This result generalizes the re sult in [10] for shallow neural networks.\nHowever, for deep residual networks, the norm used in the regula rized model is much less obvious.\nFrom a practical viewpoint, it was demonstrated numerically in [10] th at regularization improves the\nrobustness of the performance of the model. Specifically, the num erical results in [10] suggest that the\nperformance of the regularized model is much less sensitive to the d etails of the optimization algorithm,\nsuch as the choice of the hyper parameters for the algorithm, the initialization, etc. We expect the same\nto be true in the present case for regularized deep residual netwo rks. In this sense, the regularized models\nbehavemuchmorenicelythanun-regularizedones. Oneshouldalson otethatthe additionalcomputational\ncost for the regularized model is really negligible.\nThe present work still does not explain why vanilla deep residual netw orks, without regularization,\ncan still perform quite well. This issue of “implicit regularization” still re mains quite mysterious, though\nthere has been some recent progress for understanding this issu e for shallow networks [7, 14, 2]. Regarding\nwhether one should add regularization or not, we might be able to lear n something from the example of\nlinear regression. There it is a standard practice to add regularizat ion in the over-parametrized regime,\nthe issue is what kind of regularized terms one should add. It has bee n proven, both in theory and in\npractice, that proper regularization does help to extract the app ropriate solutions that are of particular\ninterest, such as the ones that are sparse. 
For neural network s, even though regularization techniques\nsuch as dropout have been used sometimes in practice, finding the a ppropriate regularized models and\nunderstanding their effects has not been the most popular resear ch theme until now. We hope that the\ncurrent paper will serve to stimulate much more work in this direction .\nReferences\n[1] Mark Ainsworth and J Tinsley Oden. A posteriori error estimation in finite element analysis , vol-\nume 37. John Wiley & Sons, 2011.\n[2] Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and gen eralization in overparameterized\nneural networks, going beyond two layers. arXiv preprint arXiv:1811.04918 , 2018.\n[3] Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. St ronger generalization bounds for deep\nnets via a compression approach. arXiv preprint arXiv:1802.05296 , 2018.\n[4] Andrew R Barron. Universal approximation bounds for superpo sitions of a sigmoidal function. IEEE\nTransactions on Information theory , 39(3):930–945, 1993.\n[5] Andrew R Barron and Jason M Klusowski. Approximation and estima tion for high-dimensional deep\nlearning networks. arXiv preprint arXiv:1809.03090 , 2018.\n[6] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spect rally-normalized margin bounds for\nneural networks. In Advances in Neural Information Processing Systems , pages 6240–6249, 2017.\n16\n[7] Alon Brutzkus, Amir Globerson, Eran Malach, and Shai Shalev-Sh wartz. Sgd learns over-\nparameterized networks that provably generalize on linearly separ able data. arXiv preprint\narXiv:1710.10174 , 2017.\n[8] Philippe G Ciarlet. The finite element method for elliptic problems. Classics in applied mathematics ,\n40:1–511, 2002.\n[9] Weinan E and Qingcan Wang. Exponential convergence of the dee p neural network approximationfor\nanalytic functions. Sci China Math , 61(10):1733, 2018.\n[10] Weinan E, Chao Ma, and Lei Wu. A priori estimates of the genera lization error for two-layer neural\nnetworks. arXiv preprint arXiv:1810.06397 , 2018.\n[11] Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-indepen dent sample complexity of neural\nnetworks. arXiv preprint arXiv:1712.06541 , 2017.\n[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep res idual learning for image recog-\nnition. In Proceedings of the IEEE conference on computer vision and pa ttern recognition , pages\n770–778, 2016.\n[13] Jason M Klusowski and Andrew R Barron. Risk bounds for high-d imensional ridge function combi-\nnations including neural networks. arXiv preprint arXiv:1607.01434 , 2016.\n[14] Yuanzhi Li and Yingyu Liang. Learning overparameterized neu ral networks via stochastic gradient\ndescent on structured data. In Advances in Neural Information Processing Systems , pages 8168–8177,\n2018.\n[15] Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, and James S tokes. Fisher-rao metric, geometry,\nand complexity of neural networks. arXiv preprint arXiv:1711.01530 , 2017.\n[16] Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-sgd: Path-normalized opti-\nmization in deep neural networks. In Advances in Neural Information Processing Systems , pages\n2422–2430, 2015.\n[17] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm -based capacity control in neural\nnetworks. In Conference on Learning Theory , pages 1376–1401, 2015.\n[18] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algo-\nrithms. Cambridge university press, 2014.\n[19] Andreas Veit, Michael J Wilber, and Serge Belongie. 
Residual net works behave like ensembles of\nrelatively shallow networks. In Advances in Neural Information Processing Systems , pages 550–558,\n2016.\n[20] Shuxin Zheng, Qi Meng, Huishuai Zhang, Wei Chen, Nenghai Yu , and Tie-Yan Liu. Capacity control\nof relu neural networks by basis-path norm. arXiv preprint arXiv:1809.07122 , 2018.\nA The missing details in Section 4\nA.1 Approximation properties of deep fully-connected netw orks\nConsider a deep fully-connected network with depth Land widthm(4.1) in the form:\nf(x;θ) =WLσ(WL−1σ(···σ(W1x)))\n17\nwhereW1∈Rm×d,Wl∈Rm×m,l= 2,...,L−1 andWL∈R1×m. Taking the same approach as\nin Theorem 2.7 and [9], we construct the deep fully-connected netwo rk from a two-layer network. From\nTheorem 3.1, there exists a two-layer network with width M, such that\n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoubleM/summationdisplay\nj=1ajσ(b⊺\njx)−f∗(x)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble2\n≤16/bar⌈blf∗/bar⌈bl2\nB\nM\nand\nM/summationdisplay\nj=1|aj|/bar⌈blbj/bar⌈bl1≤4/bar⌈blf∗/bar⌈blB.\nSince the ReLU activation σ(·) is positively homogeneous, we can assume without loss of generality that\na1=a2=···=aM=a≤4/bar⌈blf∗/bar⌈blBand/bar⌈blb1/bar⌈bl1+/bar⌈blb2/bar⌈bl1+···+/bar⌈blbM/bar⌈bl1= 1. Now let M= (m−d)(L−1), and\nrewrite the subscripts as bl,j=b(m−d)(l−1)+j,l= 1,...,L−1,j= 1,...,m−d. Define a fully-connected\nnetworkf(·;˜θ) by\nW1=\nId\nb⊺\n1,1\n...\nb⊺\n1,m−d\n,Wl=\nId0\nb⊺\nl,1\n...Im−d\nb⊺\nl,m−d\n, l= 2,...,L−1,\nWL=/bracketleftbig\n0 0···0a a···a/bracketrightbig\n,\nthen it is easy to verify that f(x;˜θ) =a/summationtextM\nj=1σ(b⊺\njx). This ensures that the approximation property of\nfully-connected multi-layer neural network is at least as good as th e two-layer network.\nA.2 Calculation of the spectral complexity norm\nRecall the spectral complexity norm (4.7) proposed in [6]\n/bar⌈blθ/bar⌈blN=/bracketleftBiggL/productdisplay\nl=1/bar⌈blWl/bar⌈blσ/bracketrightBigg/bracketleftBiggL/summationdisplay\nl=1/bar⌈blW⊺\nl/bar⌈bl2/3\n2,1\n/bar⌈blWl/bar⌈bl2/3\nσ/bracketrightBigg3/2\n.\nForl= 1,...,L−1, the matrix spectral norm satisfies /bar⌈blWl/bar⌈blσ≥1, and\n/bar⌈blWl/bar⌈blσ−1≤ /bar⌈blWl−I/bar⌈blσ≤ /bar⌈blWl−I/bar⌈blF=\nm−d/summationdisplay\nj=1/bar⌈blbl,j/bar⌈bl2\n2\n1/2\n≤m−d/summationdisplay\nj=1/bar⌈blbl,j/bar⌈bl1,\nthus\nL−1/productdisplay\nl=1/bar⌈blWl/bar⌈blσ≤L−1/productdisplay\nl=1\n1+m−d/summationdisplay\nj=1/bar⌈blbl,j/bar⌈bl1\n<e\nsince/summationtextL−1\nl=1/summationtextm−d\nj=1/bar⌈blbl,j/bar⌈bl1= 1. 
The (p,q) = (2,1) matrix norm satisfies\n/bar⌈blW⊺\nl/bar⌈bl2,1=/vextenddouble/vextenddouble(/bar⌈blW1,:\nl/bar⌈bl2,...,/bar⌈blW:,m\nl/bar⌈bl2)/vextenddouble/vextenddouble\n1=d+m−d/summationdisplay\nj=1/radicalBig\n1+/bar⌈blbl,j/bar⌈bl2\n2<√\n2m.\n18\nIn addition,\n/bar⌈blWL/bar⌈blσ=/bar⌈blWL/bar⌈bl2,1=/bar⌈blWL/bar⌈bl2=a√\nm−d≤4/bar⌈blf∗/bar⌈blB√\nm−d.\nTherefore, the spectral complexity norm satifies\n/bar⌈bl˜θ/bar⌈blN≤e·4/bar⌈blf∗/bar⌈blB√\nm−d·L3/2·√\n2m≤16(Lm)3/2/bar⌈blf∗/bar⌈blB.\nA.3 Calculation of the the variational norm\nRecall the variational norm (4.8) proposed in [5]\n/bar⌈blθ/bar⌈blN=1\nL√\nVL/summationdisplay\nl=1/summationdisplay\njl/radicalBig\nVin\njlVout\njl,\nwhere\nV=/vextenddouble/vextenddouble|WL|···|W1|/vextenddouble/vextenddouble\n1,\nVin\njl=/vextenddouble/vextenddouble|Wjl,:\nl||Wl−1|···|W1|/vextenddouble/vextenddouble\n1,\nVout\njl=/vextenddouble/vextenddouble|WL|···|Wl+1||W:,jl\nl|/vextenddouble/vextenddouble\n1.\nNotice that for any l,\nm/summationdisplay\njl=1Vin\njlVout\njl=V.\nTherefore\n/bar⌈blθ/bar⌈blN≤1\nL√\nV·L·√\nmV=√mV.\nNow it is easy to verify that\nV=aL−1/summationdisplay\nl=1m−d/summationdisplay\nj=1/bar⌈blbl,j/bar⌈bl1=a≤4/bar⌈blf∗/bar⌈blB.\n19",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "T5Ni5MWISZ5",
"year": null,
"venue": "J. Comput. Phys. 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=T5Ni5MWISZ5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A general strategy for designing seamless multiscale methods",
"authors": [
"Weinan E",
"Weiqing Ren",
"Eric Vanden-Eijnden"
],
"abstract": "We present a new general framework for designing multiscale methods. Compared with previous work such as Brandt’s systematic up-scaling, the heterogeneous multiscale method (HMM) and the “equation-free” approach, this new framework has the distinct feature that it does not require reinitializing the microscale model at each macro time step or each macro iteration step. In the new strategy, the macro- and micro-models evolve simultaneously using different time steps (and therefore different clocks), and they exchange data at every step. The micro-model uses its own appropriate time step. The macro-model runs at a slower pace than required by accuracy and stability considerations for the macroscale dynamics, in order for the micro-model to relax. Examples are discussed and application to modeling complex fluids is presented.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HJvg_VLVXhg",
"year": null,
"venue": "J. Comput. Phys. 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=HJvg_VLVXhg",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Nested stochastic simulation algorithms for chemical kinetic systems with multiple time scales",
"authors": [
"Weinan E",
"Di Liu",
"Eric Vanden-Eijnden"
],
"abstract": "We present an efficient numerical algorithm for simulating chemical kinetic systems with multiple time scales. This algorithm is an improvement of the traditional stochastic simulation algorithm (SSA), also known as Gillespie’s algorithm. It is in the form of a nested SSA and uses an outer SSA to simulate the slow reactions with rates computed from realizations of inner SSAs that simulate the fast reactions. The algorithm itself is quite general and seamless, and it amounts to a small modification of the original SSA. Our analysis of such multi-scale chemical kinetic systems allows us to identify the slow variables in the system, derive effective dynamics on the slow time scale, and provide error estimates for the nested SSA. Efficiency of the nested SSA is discussed using these error estimates, and illustrated through several numerical examples.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SUaGkQvUvpD",
"year": null,
"venue": null,
"pdf_link": "https://arxiv.org/pdf/1810.06397.pdf",
"forum_link": "https://openreview.net/forum?id=SUaGkQvUvpD",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A priori estimates of the population risk for two-layer neural networks",
"authors": [
"Weinan E",
"Chao Ma",
"Lei Wu"
],
"abstract": "New estimates for the population risk are established for two-layer neural networks.\nThese estimates are nearly optimal in the sense that the error rates scale in the same way as the Monte\nCarlo error rates. They are equally effective in the over-parametrized regime when the network size is\nmuch larger than the size of the dataset. These new estimates are a priori in nature in the sense that\nthe bounds depend only on some norms of the underlying functions to be fitted, not the parameters in\nthe model, in contrast with most existing results which are a posteriori in nature. Using these a priori\nestimates, we provide a perspective for understanding why two-layer neural networks perform better\nthan the related kernel methods.",
"keywords": [],
"raw_extracted_content": "A PRIORI ESTIMATES OF THE POPULATION RISK\nFOR TWO-LAYER NEURAL NETWORKS\nWEINAN E\u0003, CHAO MAy,AND LEI WUz\nIn memory of Professor David Shenou Cai\nAbstract. New estimates for the population risk are established for two-layer neural networks.\nThese estimates are nearly optimal in the sense that the error rates scale in the same way as the Monte\nCarlo error rates. They are equally e\u000bective in the over-parametrized regime when the network size is\nmuch larger than the size of the dataset. These new estimates are a priori in nature in the sense that\nthe bounds depend only on some norms of the underlying functions to be \ftted, not the parameters in\nthe model, in contrast with most existing results which are a posteriori in nature. Using these a priori\nestimates, we provide a perspective for understanding why two-layer neural networks perform better\nthan the related kernel methods.\nKeywords. Two-layer neural network; Barron space; Population risk; A priori estimate;\nRademacher complexity\nAMS subject classi\fcations. 41A46; 41A63; 62J02; 65D05\n1. Introduction\nOne of the main challenges in theoretical machine learning is to understand the\nerrors in neural network models [43]. To this end, it is useful to draw an analogy with\nclassical approximation theory and \fnite element analysis [13]. There are two kinds of\nerror bounds in \fnite element analysis depending on whether the target solution (the\nground truth) or the numerical solution enters into the bounds. Let f\u0003and ^fnbe the\ntrue solution and the \\numerical solution\", respectively. \\A priori\" error estimates\nusually take the form\nk^fn\u0000f\u0003k1\u0014Cn\u0000\u000bkf\u0003k2:\nwhere only norms of the true solution enter into the bounds. In \\a posteriori\" error\nestimates, the norms of the numerical solution enter into the bounds:\nk^fn\u0000f\u0003k1\u0014Cn\u0000\fk^fnk3:\nHerek\u0001k1;k\u0001k2;k\u0001k3denote various norms. In this language, most recent theoretical\nresults [7,24,32{35] on estimating the generalization error of neural networks should be\nviewed as \\a posteriori\" analysis, since the bounds depend on various norms of the neural\nnetwork model obtained after the training process. As was observed in [4, 18, 34], the\nnumerical values of these norms are very large, yielding vacuous bounds. For example,\n[34] calculated the values of various a posteriori bounds for some real two-layer neural\nnetworks and it is found that the best bounds are still on the order of O(105).\nIn this paper, we pursue a di\u000berent line of attack by providing \\a priori\" analy-\nsis. Speci\fcally, we focus on two-layer networks, and we consider models with explicit\n\u0003Department of Mathematics and Program in Applied and Computational Mathematics, Princeton\nUniversity, Princeton NJ 08540, USA; Beijing Institute of Big Data Research, Beijing 100871, China\n([email protected]).\nyProgram in Applied and Computational Mathematics, Princeton University, Princeton NJ 08544,\nUSA ([email protected]).\nzProgram in Applied and Computational Mathematics, Princeton University, Princeton NJ 08544,\nUSA ([email protected])\n1arXiv:1810.06397v3 [stat.ML] 20 Feb 2020\n2 A PRIORI ESTIMATES FOR TWO-LAYER NEURAL NETWORKS\nregularization. We establish estimates for the population risk which are asymptotically\nsharp with constants depending only on the properties of the target function. 
Our\nnumerical results suggest that such regularization terms are necessary in order for the\nmodel to be \\well-posed\" (see Section 7 for the precise meaning).\nSpeci\fcally, our main contributions are:\n\u000fWe establish a priori estimates of the population risk for learning two-layer\nneural networks with an explicit regularization. These a priori estimates depend\non the Barron norm of the target function. The rates with respect to the number\nof parameters and number of samples are comparable to the Monte Carlo rate.\nIn addition, our estimates hold for high dimensional and over-parametrized\nregime.\n\u000fWe make a comparison between the neural network and kernel methods using\nthese a priori estimates. We show that two-layer neural networks can be un-\nderstood as kernel methods with the kernel adaptively selected from the data.\nThis understanding partially explains why neural networks perform better than\nkernel methods in practice.\nThe present paper is the \frst in a series of papers in which we analyze neural net-\nwork models using a classical numerical analysis perspective. Subsequent papers will\nconsider deep neural network models [19,20], the optimization and implicit regulariza-\ntion problem using gradient descent dynamics [20, 22] and the general function spaces\nand approximation theory in high dimensions [21].\n2. Related work There are two key problems in learning two-layer neural\nnetworks: optimization and generalization. Recent progresses on optimization sug-\ngest that over-parametrization is the key factor leading to a nice empirical landscape\n^Ln[23, 36, 38], thus facilitating convergence towards global minima of ^Lnfor gradient-\nbased optimizers [12,17,31]. This leaves the generalization property of learning two-layer\nneural networks more puzzling, since naive arguments would suggest that more param-\neters implies worse generalization ability. This contradicts what is observed in practice.\nIn what follows, we survey previous attempts in analyzing the generalization properties\nof two-layer neural network models.\n2.1. Explicit regularization This line of works studies the generalization\nproperty of two-layer neural networks with explicit regularization and our work lies\nin this category. Let n;m denote the number of samples and number of param-\neters, respectively. For two-layer sigmoidal networks, [6] established a risk bound\nO(1=m+mdln(n)=n). By considering smoother activation functions, [27] proved an-\nother bound O((lnd=n)1=3) for the case when m\u0019pn. Both of these results are proved\nfor a regularized estimator. In comparison, the error rate established in this paper,\nO(1=m+lnnp\nlnd=n) is sharper and in fact nearly optimal, and it is also applicable for\nthe over-parametrized regime. For a better comparison, please refer to Table 2.1.\nrate over-parametrization\nrate of [6]1\nm+mdln(n)\nnNo\nrate of [27]\u0000lnd\nn\u00011=3No\nour rate1\nm+ln(n)(lnd\nn)1=2Yes\nTable 2.1 .Comparison of the theoretical bounds. The second column are the bounds and the\nthird column indicates whether the bounds are relevant in the over-parametrized regime, i.e. m\u0015n.\n3\nMore recently, [41] considered explicit regularization for classi\fcation problems.\nThey proved that for the speci\fc cross-entropy loss, the regularization path converges\nto the maximum margin solutions. They also proved an a priori bound on how the\nnetwork size a\u000bects the margin. However, their analysis is restricted to the case where\nthe data is well-separated. 
Our result does not have this restriction.\n2.2. Implicit regularization Another line of works study how gradient de-\nscent (GD) and stochastic gradient descent (SGD) \fnds the generalizable solutions. [9]\nproved that SGD learns over-parametrized networks that provably generalize for binary\nclassi\fcation problem. However, it is not clear how the population risk depends on\nthe number of samples for their compression-based generalization bound. Moreover,\ntheir proof highly relies on the strong assumption that the data is linearly separable.\nThe experiments in [34] suggest that increasing the network width can improve the\ntest accuracy of solutions found by SGD. They tried to explain this phenomena by an\ninitialization-dependent (a posterior) generalization bound. However, in their experi-\nments, the largest width m\u0019n, rather than m\u001dn. Furthermore their generalization\nbounds are arbitrarily loose in practice. So their result cannot tell us whether GD can\n\fnd generalizable solutions for arbitrarily wide networks.\nIn [15] and [1], it is proved that GD with a particularly chosen initialization, learning\nrate and early stopping can \fnd generalizable solutions \u0012Tsuch thatL(\u0012T)\u0014min\u0012L(\u0012)+\n\", as long as m\u0015poly(n;1\n\"). These results di\u000ber from ours in several aspects. First, both\nof them assume that the target function f\u00032H\u00190, where\u00190is the uniform distribution\noverSd. Recall thatH\u00190is the reproducing kernel Hilbert space (RKHS) induced by\nk\u00190(x;x0) =Ew\u0018\u00190[\u001b(hw;xi)\u001b(hw;x0i)], which is much smaller than B2(X), the space we\nconsider. Secondly, through carefully analyzing the polynomial order in two papers, we\ncan see that the sample complexities they provided scales as O(1=n1=4), which is worse\nthanO(1=pn) proved here. See also [3,10] for some even more recent results.\nRecent work in [20,22] has shown clearly that for the kind of initialization schemes\nconsidered in these previous works or in the over-parametrized regime, the neural net-\nwork models do not perform better than the corresponding kernel method with a kernel\nde\fned by the initialization. These results do not rule out the possibility that neural\nnetwork models can still outperform kernel methods in some regimes, but they do show\nthat \fnding these regimes is quite non-trivial.\n3. Preliminaries We begin by recalling the basics of two-layer neural networks\nand their approximation properties.\nThe problem of interest is to learn a function from a training set of nexamplesS=\nf(xi;yi)gn\ni=1, i.i.d. samples drawn from an underlying distribution \u001ax;y, which is assumed\n\fxed but known only through the samples. Our target function is f\u0003(x) =E[yjx]. We\nassume that the values of yiare given through the decomposition y=f\u0003(x)+\u0018, where\n\u0018denotes the noise. For simplicity, we assume that the data lie in X= [\u00001;1]dand\n0\u0014f\u0003\u00141.\nThe two-layer neural network is de\fned by\nf(x;\u0012) =mX\nk=1ak\u001b(wT\nkx); (3.1)\nwherewk2Rd,\u001b:R7!Ris a nonlinear scale-invariant activation function such as\nReLU [30] and Leaky ReLU [25], both satis\fes the condition \u001b(\u000bt) =\u000b\u001b(t) for any\n\u000b\u00150;t2R. Without loss of generality, we assume \u001bis 1-Lipschitz continuous. In the\nformula (3.1), we omit the bias term for notational simplicity. 
The e\u000bect of bias term\n4 A PRIORI ESTIMATES FOR TWO-LAYER NEURAL NETWORKS\ncan be incorporated if we assume that the \frst component of xis always 1. We say that\na network is over-parametrized if the network width m>n . We de\fne a truncated form\noffthroughTf(x) = maxfminff(x);1g;0g. By an abuse of notation, in the following\nwe still use fto denoteTf. We will use \u0012=f(ak;wk)gm\nk=1to denote all the parameters\nto be learned from the training data,\nThe ultimate goal is to minimize the population risk\nL(\u0012) =Ex;y[`(f(x;\u0012);y)]:\nIn practice, we have to work with the empirical risk\n^Ln(\u0012) =1\nnnX\ni=1`(f(xi;\u0012);yi):\nHere the loss function `(y;y0) =1\n2(y\u0000y0)2, unless it is speci\fed otherwise.\nDe\fne the path norm [35],\nk\u0012kP:=mX\nk=1jakjkwkk1; (3.2)\nWe will consider the regularized model de\fned as follows:\nDefinition 3.1. For a two-layer neural network f(\u0001;\u0012)of widthm, we de\fne the\nregularized risk as\nJ\u0015(\u0012) :=^Ln(\u0012)+\u0015(k\u0012kP+1):\nThe +1term at the right hand side is included only to simplify the proof. Our result\nalso holds if we do not include this term in the regularized risk. The corresponding\nregularized estimator is de\fned as\n^\u0012n;\u0015= argminJ\u0015(\u0012):\nHere\u0015>0 is a tuning parameter that controls the balance between the \ftting error and\nthe model complexity. It is worth noting that the minimizer is not necessarily unique,\nand^\u0012n;\u0015should be understood as any of the minimizers.\nIn the following, we will call Lipschitz continuous functions with Lipschitz constant\nCC-Lipschitz continuous. We will use X.Yto indicate that X\u0014cYfor some universal\nconstantc>0.\n3.1. Barron space\nWe begin by de\fning the natural function space associated with two-layer neural\nnetworks, which we will refer to as the Barron space to honor the pioneering work that\nBarron has done on this subject [5, 27{29]. A more complete discussion can be found\nin [21].\nLetSd:=fwjkwk1= 1g, and letFbe the Borel \u001b-algebra on SdandP(Sd) be the\ncollection of all probability measures on ( Sd;F). LetB(X) be the collection of functions\nthat admit the following integral representation:\nf(x) =Z\nSda(w)\u001b(hw;xi)d\u0019(w)8x2X; (3.3)\n5\nwhere\u00192P(Sd), anda(\u0001) is a measurable function with respect to ( Sd;F). For any\nf2B(X) andp\u00151, we de\fne the following norm\n\rp(f) := inf\n(a;\u0019)2\u0002f\u0012Z\nSdja(w)jpd\u0019(w)\u00131=p\n; (3.4)\nwhere\n\u0002f=\b\n(a;\u0019)jf(x) =Z\nSda(w)\u001b(hw;xi)d\u0019(w)\t\n:\nDefinition 3.2 ( Barron space ).We de\fne Barron space by\nBp(X) :=ff2B(X)j\rp(f)<1g:\nSince\u0019(\u0001) is a probability distribution, by H older's inequality, for any q\u0015p>0 we\nhave\rp(f)\u0014\rq(f):Thus, we haveB1(X)\u001a\u0001\u0001\u0001\u001aB 2(X)\u001aB1(X).\nObviouslyBp(X) is dense in C(X) since all the \fnite two-layer neural networks\nbelong to Barron space with \u0019(w) =1\nmPm\nk=1\u000e(w\u0000^wk) and the universal approximation\ntheorem [14] tells us that continuous functions can be approximated by two-layer neural\nnetworks. 
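To fix ideas, here is a minimal NumPy sketch of the objects introduced so far: the two-layer network (3.1), the path norm (3.2) and the regularized risk of Definition 3.1. The synthetic data, the width, and all function names below are placeholders chosen only for this illustration; the particular value of lambda matches the scale 4 sqrt(2 ln(2d)/n) that is used later in the analysis (Theorem 4.1).

import numpy as np

def two_layer(x, a, W):
    # f(x; theta) = sum_k a_k * relu(<w_k, x>), cf. (3.1); x: (d,), a: (m,), W: (m, d)
    return a @ np.maximum(W @ x, 0.0)

def path_norm(a, W):
    # ||theta||_P = sum_k |a_k| * ||w_k||_1, cf. (3.2)
    return np.sum(np.abs(a) * np.sum(np.abs(W), axis=1))

def regularized_risk(a, W, X, y, lam):
    # J_lambda(theta) = empirical squared loss + lam * (||theta||_P + 1), cf. Definition 3.1
    # (the truncation Tf of the text is omitted here for brevity)
    preds = np.maximum(X @ W.T, 0.0) @ a            # vectorized forward pass over the sample
    emp_risk = 0.5 * np.mean((preds - y) ** 2)
    return emp_risk + lam * (path_norm(a, W) + 1.0)

# toy usage with random data; m > n is allowed (over-parametrized regime)
rng = np.random.default_rng(0)
n, d, m = 50, 10, 200
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = rng.uniform(0.0, 1.0, size=n)
a = rng.standard_normal(m) / m
W = rng.standard_normal((m, d))
lam = 4.0 * np.sqrt(2.0 * np.log(2.0 * d) / n)      # the lambda_n scale suggested by the analysis
print(regularized_risk(a, W, X, y, lam))

Nothing in these formulas changes when m exceeds n; only the value of the path norm enters the penalty.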
Moreover, it is interesting to note that the \r1(\u0001) norm of a two-layer neural\nnetwork is bounded by the path norm of the parameters.\nAn important result proved in [8, 27] states that if a function f:X7!Rsatis\fesR\nRdk!k2\n1j^f(!)jd!<1, where ^fis the Fourier transform of an extension of f, then it\ncan be expressed in the form (3.3) with\n\r1(f) := sup\nw2Sdja(w)j.Z\nRdk!k2\n1j^f(!)jd!:\nThus it lies inB1(X).\nConnection with reproducing kernel Hilbert space. The Barron space has\na natural connection with reproducing kernel Hilbert space (RKHS) [2], and as we will\nshow later, this connection will lead to a precise comparison between two-layer neural\nnetworks and kernel methods. For a \fxed \u0019, we de\fne\nH\u0019(X) :=\u001aZ\nSd\u000b(w)\u001b(hw;xi)d\u0019(w) :kfkH\u0019<1\u001b\n;\nwhere\nkfk2\nH\u0019:=E\u0019[ja(w)j2]:\nRecall that for a symmetric positive de\fnite (PD)1functionk:X\u0002X7!R, the in-\nduced RKHSHkis the completion of fP\niaik(xi;x)gwith respect to the inner product\nhk(xi;\u0001);k(xj;\u0001)iHk=k(xi;xj). It was proved in [37] that H\u0019=Hk\u0019with the kernel k\u0019\nde\fned by\nk\u0019(x;x0) =E\u0019[\u001b(hw;xi)\u001b(hw;x0i)]: (3.5)\n1We saykis PD function, if for any x1;:::;x n, the matrix KnwithKn\ni;j=k(xi;xj) is positive\nsemide\fnite.\n6 A PRIORI ESTIMATES FOR TWO-LAYER NEURAL NETWORKS\nThus Barron space can be viewed as the union of a family of RKHS with kernels de\fned\nby\u0019through Equation (3.5), i.e.\nB2(X) =[\n\u00192P(Sd)H\u0019(X): (3.6)\nNote that the family of kernels is only determined by the activation function \u001b(\u0001).\n3.2. Approximation property\nTheorem 3.1. For anyf2B2(X), there exists a two-layer neural network f(\u0001;~\u0012)of\nwidthm, such that\nEx[(f(x)\u0000f(x;~\u0012))2]\u00143\r2\n2(f)\nm(3.7)\nk~\u0012kP\u00142\r2(f) (3.8)\nThis kind of approximation results have been established in many papers, see for\nexample [5,8]. The di\u000berence is that we provide the explicit control of the norm of the\nconstructed solution in (3.8), and the bound is independent of the network size. This\nobservation will be useful for what follows.\nThe proof of Proposition 3.1 can be found in Appendix A. The basic intuition is that\nthe integral representation of fallows us to approximate fby the Monte-Carlo method:\nf(x)\u00191\nmPm\nk=1a(wk)\u001b(hwk;xi) wherefwkgm\nk=1are sampled from the distribution \u0019.\n4. Main results For simplicity we \frst discuss the case without noise, i.e. \u0018= 0.\nIn the next section, we deal with the noise. We also assume ln(2 d)\u00151, and let ^\rp(f) =\nmaxf1;\rp(f)g;\u0015n= 4p\n2ln(2d)=n. Heredis the dimension of input and the de\fnition\nof\rp(\u0001) is given in Equation (3.4).\nTheorem 4.1 ( Noiseless case ).Assume that the target function f\u00032B2(X)and\u0015\u0015\u0015n.\nThen for any \u000e>0, with probability at least 1\u0000\u000eover the choice of the training set S,\nwe have\nExjf(x;^\u0012n;\u0015)\u0000f\u0003(x)j2.\r2\n2(f\u0003)\nm+\u0015^\r2(f\u0003) (4.1)\n+1pn\u0000\n^\r2(f\u0003)+p\nln(n=\u000e)\u0001\n: (4.2)\nThe above theorem provides an a priori estimate for the population risk. The a priori\nnature is re\rected by dependence of the \r2(\u0001) norm of the target function. The \frst\nterm at the right hand side controls the approximation error. The second term bounds\nthe estimation error. Surprisingly, the bound for the estimation error is independent of\nthe network width m. 
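The Monte Carlo picture behind Theorem 3.1 can also be made concrete with a short sketch: sample w_k from pi, set a_k = a(w_k)/m, and the resulting width-m network approximates f while its path norm stays of order one, independently of m. The specific target (with a(w) linear in w), the sampler standing in for a distribution on the l1 sphere, and all numerical choices below are assumptions made only for this illustration.

import numpy as np

rng = np.random.default_rng(0)
d = 10
relu = lambda z: np.maximum(z, 0.0)

# toy target in the integral form (3.3): f*(x) = E_{w~pi}[a(w) relu(<w,x>)], a(w) = <w, c>
c = rng.standard_normal(d)

def sample_pi(size):
    w = rng.standard_normal((size, d))
    return w / np.sum(np.abs(w), axis=1, keepdims=True)   # normalize onto {||w||_1 = 1}

def sampled_network(m):
    # width-m two-layer network from Monte Carlo sampling: w_k ~ pi, a_k = a(w_k)/m
    W = sample_pi(m)
    a = (W @ c) / m
    return a, W

X_test = rng.uniform(-1.0, 1.0, size=(100, d))
a_ref, W_ref = sampled_network(50000)                     # large-m surrogate for f* itself
f_ref = relu(X_test @ W_ref.T) @ a_ref

for m in (10, 100, 1000):
    a, W = sampled_network(m)
    err = np.mean((relu(X_test @ W.T) @ a - f_ref) ** 2)
    pnorm = np.sum(np.abs(a) * np.sum(np.abs(W), axis=1))  # path norm sum_k |a_k| ||w_k||_1
    print(m, err, pnorm)
    # the squared error decays roughly like 1/m, while the path norm stays O(1)

Increasing m in this construction improves the approximation without inflating the path norm, which is exactly the reason the estimation part of the bound does not deteriorate with the width.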
Hence the bound also makes sense in the over-parametrization\nregime.\nIn particular, if we take \u0015\u0010\u0015nandm\u0015pn, the bound becomes O(1=pn) up to\nsome logarithmic terms. This bound is nearly optimal in a minimax sense [28,42].\n4.1. Comparison with kernel methods Considerf\u00032B2(X), and without\nloss of generality, we assume that ( a\u0003;\u0019\u0003)2\u0002f\u0003is one of the best representations of f\u0003\n(it is easy to prove that such a representation exists), i.e. \r2\n2(f\u0003) =E\u0019\u0003[ja\u0003(w)j2]:For a\n\fxed\u00190, we have,\nf\u0003(x) =Z\nSda\u0003(w)\u001b(hw;xi)d\u0019\u0003(w)\n=Z\nSda\u0003(w)d\u0019\u0003\nd\u00190(w)\u001b(hw;xi)d\u00190(w)(4.3)\n7\nas long as \u0019is absolutely continuous with respect to \u00190. In this sense, we can view\nf\u0003from the perspective of H\u00190. Note thatH\u00190is induced by PD function k\u00190(x;x0) =\nEw\u0018\u00190[\u001b(hw;xi)\u001b(hw;x0i)], and the norm of f\u0003inH\u00190is given by\nkf\u0003k2\nH\u00190=Ew\u0018\u00190[ja\u0003(w)d\u0019\u0003\nd\u00190(w)j2]:\nLet^hn;\u0015be the solution of the kernel ridge regression (KRR) problem de\fned by:\nmin\nh2H\u001901\n2nnX\ni=1(h(xi)\u0000yi)2+\u0015khkH\u00190: (4.4)\nWe are interested in the comparison between the two population risks L(^\u0012n;\u0015) and\nL(^hn;\u0015) =E[`(^hn;\u0015(x);y)].\nIfkf\u0003kH\u00190<1, then we have f\u00032H\u00190and infh2H\u00190L(h) = 0. In this case, it was\nproved in [11] that the optimal learning rate is\nL(^hn;\u0015)\u0018kf\u0003kH\u00190pn: (4.5)\nCompared to Theorem 4.1, we can see that both rates have the same scaling with\nrespect ton, the number of samples. The only di\u000berence appears in the two norms:\n\r2(f\u0003) andkf\u0003kH\u00190. From the de\fnition (3.4), we always have \r2(f\u0003)\u0014kf\u0003kH\u00190, since\n(a\u0003d\u0019\u0003\nd\u00190;\u00190)2\u0002f\u0003. If\u0019\u0003is nearly singular with respect to \u00190, thenkf\u0003kH\u00190\u001d\r2(f\u0003).\nIn this case, the population risk for the kernel methods should be much larger than the\npopulation risk for the neural network model.\nExample . Take\u00190to be the uniform distribution over Sdandf\u0003(x) =\u001b(hw\u0003;xi),\nfor which\u0019\u0003(w) =\u000e(w\u0000w\u0003) anda\u0003(w) = 1. In this case \r2(f\u0003) = 1, butkf\u0003kH\u00190= +1.\nThus the rate (4.5) becomes trivial. Assume that the population risk scales as O(n\u0000\f),\nand it is interesting to see how \fdepends on the dimension d. We numerically estimate\n\f's for two methods, and report the results in Table 4.1. It does show that the higher\nthe dimensionality, the slower the rate of the kernel method. In contrast, the rates for\nthe two-layer neural networks are independent of the dimensionality, which con\frms the\nthe prediction of Theorem 4.1. For this particular target function, the value of \f\u00151 is\nbigger than the lower bound (1 =2) proved in Theorem 4.1. This is not a contradiction\nsince the latter holds for any f2B2(X).\nd 10 100 1000\n\fnn 1:18 1:23 1:02\n\fker 0:50 0:35 0:14\nTable 4.1 .The error rates of learning the one-neuron function in di\u000berent dimensions. The\nsecond and third lines correspond to the two-layer neural network and the kernel ridge regression\nmethod, respectively.\nThe two-layer neural network model as of an adaptive kernel method.\nRecall thatB2(X) =[\u0019H\u0019(X). 
The norm \r2(\u0001) characterizes the complexity of the\ntarget function by selecting the best kernel among a family of kernels fk\u0019(\u0001;\u0001)g\u00192P(Sd).\nThe kernel method works with a speci\fc RKHS with a particular choice of the kernel\nor the probability distribution \u0019. In contrast, the neural network models work with the\n8 A PRIORI ESTIMATES FOR TWO-LAYER NEURAL NETWORKS\nunion of all these RKHS and select the kernel or the probability distribution adapted\nto the data. From this perspective, we can view the two-layer neural network model as\nan adaptive kernel method.\n4.2. Tackling the noise We \frst make the following sub-Gaussian assumption\non the noise.\nAssumption 4.1. We assume that the noise satis\fes\nP[j\u0018j>t]\u0014c0e\u0000t2\n\u001b8t\u0015\u001c0: (4.6)\nHerec0;\u001c0and\u001bare constants.\nIn the presence of noise, the population risk can be decomposed into\nL(\u0012) =Ex(f(x;\u0012)\u0000f\u0003(x))2+E[\u00182]: (4.7)\nThis suggests that, in spite of the noise, we still have argmin\u0012L(\u0012) = argmin\u0012Exjf(x;\u0012)\u0000\nf\u0003(x)j2;and the latter is what we really want to minimize. However due to the noise,\n`(f(xi);yi) might be unbounded. We cannot directly use the generalization bound in\nTheorem 5.2. To address this issue, we consider the truncated risk de\fned as follows,\nLB(\u0012) =Ex;y[`(f(x;\u0012);y)^B2\n2]\n^LB(\u0012) =1\nnnX\ni=1`(xi;\u0012);yi)^B2\n2:\nLetBn= 1+maxf\u001c0;\u001b2lnng. For the noisy case, we consider the following regularized\nrisk:\nJ\u0015(\u0012) :=^LBn(\u0012)+\u0015Bn(k\u0012kP+1): (4.8)\nThe corresponding regularized estimator is given by ^\u0012n;\u0015= argminJ\u0015(\u0012):Here for sim-\nplicity we slightly abused the notation.\nTheorem 4.2 ( Main result, noisy case ).Assume that the target function f\u00032B2(X)\nand\u0015\u0015\u0015n. Then for any \u000e>0, with probability at least 1\u0000\u000eover the choice of the\ntraining set S, we have\nExjf(x;^\u0012n;\u0015)\u0000f\u0003(x)j2.\r2\n2(f\u0003)\nm+\u0015Bn^\r2(f\u0003)\n+B2\nnpn\u0010\n^\r2(f\u0003)+p\nln(n=\u000e)\u0011\n+B2\nnpn\u0000\nc0\u001b2+r\nE[\u00182]\nn1=2\u0015\u0001\n:\nCompared to Theorem 4.1, the noise introduces at most several logarithmic terms. The\ncase with no noise corresponds to the situation with Bn= 1.\n4.3. Extension to classi\fcation problems Let us consider the simplest set-\nting: binary classi\fcation problem, where y2f0;1g. In this case, f\u0003(x) =Pfy= 1jxg\ndenotes the probability of y= 1 givenx. Givenf\u0003(\u0001) andf(\u0001;\u0012n;\u0015), the corresponding\nplug-in classi\fers are de\fned by \u0011\u0003(x) = 1[f\u0003(x)\u00151\n2] and ^\u0011(x) = 1[f(x;^\u0012n;\u0015)\u00151\n2], respec-\ntively.\u0011\u0003is the optimal Bayes classi\fer.\n9\nFor a classi\fer \u0011, we measure its performance by the 0-1 loss de\fned by E(\u0011) =\nPf\u0011(x)6=yg.\nCorollary 4.1. Under the same assumption as in Theorem 4.2 and taking \u0015=\u0015n,\nfor any\u000e2(0;1), with probability at least 1\u0000\u000e, we have\nE(^\u0011).E(\u0011\u0003)+\r2(f\u0003)pm+ ^\r1=2\n2(f\u0003)ln1=4(d)+ln1=4(n=\u000e)\nn1=4:\nProof . According to the Theorem 2.2. of [16], we have\nE(^\u0011)\u0000E(\u0011\u0003)\u00142E[jf(x;^\u0012n;\u0015)\u0000f\u0003(x)j] (4.9)\n\u00142E[jf(x;^\u0012n;\u0015)\u0000f\u0003(x)j2]\nIn this case, \"i=yi\u0000f\u0003(xi) is bounded by 1, thus \u001c0= 1;c=\u001b= 0. 
Applying Theorem 4.2\nyields the result.\nThe above theorem suggests that our a priori estimates also hold for classi\fcation\nproblems, although the error rate only scales as O(n\u00001=4). It is possible to improve\nthe rate with more a delicate analyses. One potential way is to speci\fcally develop a\nbetter estimate for L1loss, as can be seen from inequality (4.9). Another way is to\nmake a stronger assumption on the data. For example, we can assume that there exists\nf\u00032B2(X) such that Px;y(yf\u0003(x)\u00151) = 1, for which the Bayes error E(\u0011\u0003) = 0. We leave\nthese to future work.\n5. Proofs\n5.1. Bounding the generalization gap\nDefinition 5.1 ( Rademacher complexity ).LetFbe a hypothesis space, i.e. a set of\nfunctions. The Rademacher complexity of Fwith respect to samples S= (z1;z2;:::;zn)is\nde\fned as ^Rn(F) =1\nnE\"[supf2FPn\ni=1\"if(zi)];wheref\"ign\ni=1are i.i.d. random variables\nwithP(\"i= +1) = P(\"i=\u00001) =1\n2.The generalization gap can be estimated via the\nRademacher complexity by the following theorem [39] .\nTheorem 5.1. Fix a hypothesis space F. Assume that for any f2F andz,jf(z)j\u0014B.\nThen for any \u000e>0, with probability at least 1\u0000\u000eover the choice of S= (z1;z2;:::;zn),\nwe have,\nj1\nnnX\ni=1f(zi)\u0000Ez[f(z)]j\u00142ES[^Rn(F)]+Br\n2ln(2=\u000e)\nn:\nLetFQ=ff(x;\u0012)jk\u0012kP\u0014Qgdenote all the two-layer networks with path norm bounded\nbyQ. It was proved in [35] that\n^Rn(FQ)\u00142Qr\n2ln(2d)\nn: (5.1)\nBy combining the above result withTheorem 5.1, we obtain the following a posterior\nbound of the generalization gap for two-layer neural networks. The proof is deferred to\nAppendix B.\nTheorem 5.2 ( A posterior generalization bound ).Assume that the loss function `(\u0001;y)\nisA\u0000Lipschitz continuous and bounded by B. Then for any \u000e>0, with probability at\n10 A PRIORI ESTIMATES FOR TWO-LAYER NEURAL NETWORKS\nleast 1\u0000\u000eover the choice of the training set S, we have, for any two-layer network\nf(\u0001;\u0012),\njL(\u0012)\u0000^Ln(\u0012)j\u00144Ar\n2ln(2d)\nn(k\u0012kP+1) (5.2)\n+Br\n2ln(2c(k\u0012kP+1)2=\u000e)\nn; (5.3)\nwherec=P1\nk=11=k2.\nWe see that the generalization gap is bounded roughly by k\u0012kP=pnup to some\nlogarithmic terms.\n5.2. Proof for the noiseless case The intuition is as follows. The path norm\nof the special solution ~\u0012which achieves the optimal approximation error is independent\nof the network width, and this norm can also be used to bound the generalization gap\n(Theorem 5.2). Therefore, if the path norm is suitably penalized during training, we\nshould be able to control the generalization gap without harming the approximation\naccuracy.\nWe \frst have the estimate for the regularized risk of ~\u0012.\nProposition 5.1. Let~\u0012be the network constructed in Theorem 3.1, and \u0015\u0015\u0015n. Then\nwith probability at least 1\u0000\u000e, we have\nJ\u0015(~\u0012)\u0014L(~\u0012)+8\u0015^\r2(f\u0003)+2r\n2ln(2c=\u000e)\nn(5.4)\nProof . First`(y;yi) =1\n2(y\u0000yi)2is 1-Lipschitz continuous and bounded by 2. Ac-\ncording to De\fnition 3.1 and the property that k~\u0012kP\u00142\r2(f\u0003), the regularized risk of\n~\u0012satis\fes\nJ\u0015(~\u0012) =^Ln(~\u0012)+\u0015(k~\u0012kP+1)\n\u0014L(~\u0012)+(\u0015n+\u0015)(k~\u0012kP+1)+2s\n2ln(2c(k~\u0012kP+1)2=\u000e)\nn\n\u0014L(~\u0012)+6\u0015^\r2(f\u0003)+2r\n2ln(2c(1+2\r2(f\u0003))2=\u000e)\nn: (5.5)\nThe last term can be simpli\fed by usingp\na+b\u0014pa+p\nband ln(1+a)\u0014afora\u00150;b\u0015\n0. 
So we have\np\n2ln(2c(1+2\r2(f\u0003))2=\u000e)\u0014p\n2ln(2c=\u000e)+3^\r2(f\u0003):\nPlugging it into Equation (5.5) completes the proof.\nProposition 5.2 ( Properties of regularized solutions ).The regularized estimator ^\u0012n;\u0015\nsatis\fes:\nJ\u0015(^\u0012n;\u0015)\u0014J\u0015(~\u0012)\nk^\u0012n;\u0015kP\u0014L(~\u0012)\n\u0015+8^\r2(f\u0003)+1\n2p\nln(2c=\u000e)\n11\nProof . The \frst claim follows from the de\fnition of ^\u0012n. For the second claim, note\nthat\n\u0015(k^\u0012n;\u0015kP+1)\u0014J\u0015(^\u0012n)\u0014J\u0015(~\u0012);\nApplying Proposition 5.1 completes the proof.\nRemark 5.1. The above proposition establishes the connection between the regularized\nsolution and the special solution ~\u0012constructed in Proposition 3.1. In particular, by\ntaking\u0015=t\u0015nwitht\u00151the generalization gap of the regularized solution is bounded\nbyk^\u0012n;\u0015kPpn!L(~\u0012)=(tp\nln2d)asn!1 , up to some constant. This suggests that our\nregularization term is appropriate, and it forces the generalization gap to be roughly in\nthe same order as the approximation error.\nProof . (Proof of Theorem 4.1 ) Now we are ready to prove the main result.\nFollowing the a posteriori generalization bound given in Theorem 5.2, we have with\nprobability at least 1 \u0000\u000e,\nL(^\u0012n;\u0015)\u0014^Ln(^\u0012n;\u0015)+\u0015n(k^\u0012n;\u0015kP+1)+3Qn\n(1)\n\u0014J\u0015(^\u0012n;\u0015)+3Qn;\nwhereQn=q\nln(2c(1+k^\u0012n;\u0015k)2=\u000e)=n. The inequality (1) is due to the choice \u0015\u0015\u0015n.\nThe \frst term can be bounded by J\u0015(^\u0012n;\u0015)\u0014J\u0015(~\u0012), which is given by Proposition 5.1.\nIt remains to bound Qn,\npnQn\u0014p\nln(2nc=\u000e)+q\n2ln(1+n\u00001=2k^\u0012n;\u0015kP)\n\u0014p\nln(2nc=\u000e)+q\n2k^\u0012n;\u0015kP=pn:\nBy Proposition 5.2, we have\ns\n2k^\u0012n;\u0015kPpn\u0014s\n2(L(~\u0012)=\u0015+8^\r2(f\u0003)+0:5p\nln(2c=\u000e))pn\n\u0014s\n2L(~\u0012)\n\u0015n1=2+3^\r2(f\u0003)\nn1=4+\u0012ln(1=\u000e)\nn\u00131=4\n:\nThus after some simpli\fcation, we obtain\nQn\u00142r\nln(n=\u000e)\nn+s\n2L(~\u0012)\n\u0015n3=2+3^\r2(f\u0003)pn: (5.6)\nBy combining Equation (5.4) and (5.6), we obtain\nL(^\u0012n).L(~\u0012)+8\u0015^\r2(f\u0003)+3pn\u0010s\nL(~\u0012)\nn1=2\u0015+ ^\r2(f\u0003)+p\nln(n=\u000e)\u0011\n:\nBy applying L(~\u0012)\u00143\r2\n2(f\u0003)=m, we complete the proof.\n12 A PRIORI ESTIMATES FOR TWO-LAYER NEURAL NETWORKS\n5.3. Proof for the noisy case We need the following lemma. The proof is\ndeferred to Appendix D.\nLemma 5.1. Under Assumption 4.1, we have\nsup\n\u0012jL(\u0012)\u0000LBn(\u0012)j\u00142c0\u001b2\npn;\nTherefore we have,\nL(\u0012) =L(\u0012)\u0000LBn(\u0012)+LBn(\u0012)\u00142c0\u001b2\npn+LBn(\u0012)\nThis suggests that as long as we can bound the truncated population risk, the original\nrisk will be bounded accordingly.\nProof . (Proof of Theorem 4.2 ) The proof is almost the same as the noiseless\ncase. 
The loss function ℓ(y, y_i) ∧ B^2/2 is B-Lipschitz continuous and bounded by B^2/2. By analogy with the proof of Proposition 5.1, we obtain that with probability at least 1 − δ the following inequality holds,

    J_λ(θ̃) ≤ L_{B_n}(θ̃) + 8 B_n λ γ̂_2(f^*) + B_n^2 √(ln(2c/δ)/n).        (5.7)

Following the proof of Proposition 5.2, we similarly obtain J_λ(θ̂_{n,λ}) ≤ J_λ(θ̃) and

    ‖θ̂_{n,λ}‖_P ≤ L_{B_n}(θ̃)/(B_n λ) + 8 γ̂_2(f^*) + (B_n/2) √(ln(2c/δ)).        (5.8)

Following the proof of Theorem 4.1, we have

    L_{B_n}(θ̂_{n,λ}) ≤ J_λ(θ̃) + (B_n^2/2) √( 2 ln(2c(1 + ‖θ̂_{n,λ}‖_P)^2/δ) / n ).        (5.9)

Plugging (5.7) and (5.8) into (5.9), we get

    L_{B_n}(θ̂_{n,λ}) ≤ L_{B_n}(θ̃) + 8 B_n γ̂_2(f^*) λ
        + (3 B_n^2/√n) ( √( L_{B_n}(θ̃)/(n^{1/2} λ) ) + γ̂_2(f^*) + √(ln(n/δ)) ).

Using Lemma 5.1 and the decomposition (4.7), we complete the proof.

6. Numerical Experiments. In this section, we evaluate the regularized model using numerical experiments. We consider two datasets, MNIST (http://yann.lecun.com/exdb/mnist/) and CIFAR-10 (https://www.cs.toronto.edu/~kriz/cifar.html). Each example in MNIST is a 28×28 grayscale image, while each example in CIFAR-10 is a 32×32×3 color image. For MNIST, we map the digits {0, 1, 2, 3, 4} to label 0 and {5, 6, 7, 8, 9} to label 1. For CIFAR-10, we select the examples with labels 0 and 1 to construct our new training and validation sets. Thus, our new MNIST has 60,000 training examples, and CIFAR-10 has 10,000 training examples.

The two-layer ReLU network is initialized using a_i ∼ N(0, 2κ/m), w_{i,j} ∼ N(0, 2κ/d). We use κ = 1 and train the regularized models using the Adam optimizer [26] for T = 10,000 steps, unless it is specified otherwise. The initial learning rate is set to 0.001, and it is then multiplied by a decay factor of 0.1 at 0.7T and again at 0.9T. We set the trade-off parameter λ = 0.1 λ_n. (Our proof of the theoretical results requires λ ≥ λ_n; however, this condition is not necessarily optimal.)

6.1. Sharper bounds for the generalization gap. Theorem 5.2 shows that the generalization gap is bounded by ‖θ‖_P/√n up to some logarithmic terms. Previous works [18, 34] showed that (stochastic) gradient descent tends to find solutions with huge norms, causing the a posteriori bound to be vacuous. In contrast, our theory suggests that there exist good solutions (i.e. solutions with small generalization error) with small norms, and that these solutions can be found by explicit regularization.

To see how this works in practice, we trained both the regularized models and the un-regularized models (λ = 0) for fixed network width m = 10,000. To cover the over-parametrized regime, we also consider the case n = 100, where m/n = 100 ≫ 1. The results are summarized in Table 6.1.

    dataset      λ      n         training accuracy    testing accuracy    ‖θ‖_P/√n
    CIFAR-10     0      10^4      100%                 84.5%               58
                        100       100%                 70.5%               507
                 0.1    10^4      87.4%                86.9%               0.14
                        100       91.0%                72.0%               0.43
    MNIST        0      6×10^4    100%                 98.8%               58
                        100       100%                 78.7%               162
                 0.1    6×10^4    98.1%                97.8%               0.27
                        100       100%                 74.9%               0.41

    Table 6.1. Comparison of regularized (λ = 0.1) and un-regularized (λ = 0) models. For each case, the experiments are repeated 5 times, and the mean values are reported.

As we can see, the test accuracies of the regularized and un-regularized solutions are generally comparable, but the values of ‖θ‖_P/√n, which serve as an upper bound for the generalization gap, are drastically different. (A short sketch of how these quantities are computed is given below.)
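For reference, the following is a minimal sketch (ours, not the authors' code) of the two quantities behind Table 6.1: the path norm ‖θ‖_P of a two-layer ReLU network and a simplified version of the regularized objective J_λ(θ) of (4.8), with the truncation at B_n^2/2 and the B_n factor omitted for brevity. The array names, sizes, and the random-initialization example are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): the path norm ||theta||_P of a two-layer
# ReLU network f(x; theta) = sum_k a_k sigma(w_k^T x + c_k), the quantity
# ||theta||_P / sqrt(n) reported in Table 6.1, and a simplified regularized
# objective (truncation at B_n^2/2 and the B_n factor of (4.8) are omitted).
# Array names and the random-initialization example are illustrative.
import numpy as np

def path_norm(a, W, c):
    # ||theta||_P = sum_k |a_k| * (||w_k||_1 + |c_k|)
    return float(np.sum(np.abs(a) * (np.abs(W).sum(axis=1) + np.abs(c))))

def two_layer_relu(x, a, W, c):
    return np.maximum(x @ W.T + c, 0.0) @ a

def regularized_objective(a, W, c, x, y, lam):
    # empirical square loss + lam * (||theta||_P + 1)
    residual = two_layer_relu(x, a, W, c) - y
    return 0.5 * np.mean(residual ** 2) + lam * (path_norm(a, W, c) + 1.0)

rng = np.random.default_rng(0)
n, d, m, kappa = 1000, 784, 10_000, 1.0
a = rng.normal(0.0, np.sqrt(2 * kappa / m), size=m)        # a_i ~ N(0, 2*kappa/m)
W = rng.normal(0.0, np.sqrt(2 * kappa / d), size=(m, d))   # w_ij ~ N(0, 2*kappa/d)
c = np.zeros(m)

x_demo = rng.uniform(size=(32, d))
y_demo = rng.uniform(size=32)
print("||theta||_P / sqrt(n) =", path_norm(a, W, c) / np.sqrt(n))
print("regularized objective =", regularized_objective(a, W, c, x_demo, y_demo, lam=0.1))
```

Trained, rather than randomly initialized, parameters would be plugged into the same two functions to obtain the numbers reported in the table.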
The bounds for the un-regularized models are always vacuous, as was observed in [4, 18, 34]. In contrast, the bounds for the regularized models are always several orders of magnitude smaller than those for the un-regularized models. This is consistent with the theoretical prediction in Proposition 5.2.

To further explore the impact of over-parametrization, we trained various models with different widths. For both datasets, all the training examples are used. In Figure 6.1, we display how the value of ‖θ‖_P/√n of the learned solution varies with the network width. We find that for the un-regularized model this quantity increases with the network width, whereas for the regularized model it is almost constant. This is consistent with our theoretical result.

6.2. Dependence on the initialization. Since the neural network model is non-convex, it is interesting to see how the initialization affects the performance of the different models, regularized and un-regularized, especially in the over-parametrized regime. To this end, we fix m = 10,000, n = 100 and vary the variance of the random initialization κ. The results are reported in Figure 6.2. In general, we find that the regularized models are much more stable than the un-regularized models. For large initialization, the regularized model always performs significantly better.

    [Figure 6.1: Comparison of the path norms between the regularized and un-regularized solutions for varying widths. Two panels (MNIST and CIFAR-10); horizontal axis: network width m; vertical axis: ‖θ‖_P/√n; curves: λ = 0.1 and λ = 0.]

    [Figure 6.2: Test accuracies of solutions obtained from different initializations. Each experiment is repeated 5 times, and we report the mean and standard deviation. Two panels (MNIST and CIFAR-10); horizontal axis: initialization κ; vertical axis: test accuracy (%); curves: λ = 0 and λ = 0.1.]

7. Conclusion. In this paper, we proved nearly optimal a priori estimates of the population risk for learning two-layer neural networks. Our results also give some insight regarding the advantage of neural network models over the kernel method. We should also mention that the main result of this paper has been extended to deep residual network models in [19].

The most unsatisfactory aspect of our result is that it is proved for the regularized model, whereas practitioners rely on the so-called implicit regularization. At the moment it is unclear where the "implicit regularization" comes from and how it actually works. Existing works consider special initialization schemes and require strong assumptions on the target function [1, 9, 15, 20, 22]. In particular, the work in [20, 22] demonstrates clearly that in the regimes considered the neural network models are no better than the kernel method in terms of implicit regularization. This is quite unsatisfactory.

There is overwhelming evidence that by tuning the optimization procedure, including the algorithm, the initialization, the hyper-parameters, etc., one can find solutions with superior performance on the test data. The problem is that excessive tuning and serious experience are required to find good solutions. Until we have a good understanding of the mysteries surrounding implicit regularization, the business of parameter tuning for un-regularized models will remain an art.
In contrast, the regularized model\nproposed here is rather robust and much more fool-proof. Borrowing the terminology\nfrom mathematical physics, one is tempted to say that the regularized model considered\nhere is \\well-posed\" whereas the un-regularized model is \\ill-posed\" [40].\nAppendix A. Proof of Theorem 3.1. Without loss of generality, let ( a;\u0019) be\n15\nthe best representation of f, i.e.\r2\n2(f) =E\u0019[ja(w)j2]. LetU=fwjgm\nj=1bei.i.d. random\nvariables sampled from \u0019(\u0001), and de\fne\n^fU(x) =1\nmmX\nj=1a(wj)\u001b(hwj;xi):\nLetLU=Exj^fU(x)\u0000f(x)j2denote the population risk, we have\nEU[LU] =ExEUj^fU(x)\u0000f(x)j2\n=1\nm2ExmX\nj;l=1Ewj;wl[(a(wj)\u001b(hwj;xi)\u0000f(x))(a(wl)\u001b(hwl;xi)\u0000f(x))]\n\u0014\r2\n2(f)\nm:\nOn the other hand, denote the path norm of ^fU(x) byAU, we have EU[AU] =\r1(f)\u0014\n\r2(f).\nDe\fne the event E1=fLU<3\r2\n2(f)\nmg, andE2=fAU<2\r1(f)g. By Markov's in-\nequality, we have\nPfE1g= 1\u0000PfLU\u00153\r2\n2(f)\nmg\u00151\u0000EU[L(U)]\n3\r2\n2(f)=m\u00152\n3\nPfE2g= 1\u0000PfAU\u00152\r2(f)g\u00151\u0000E[AU]\n2\r2(f)\u00151\n2:\nTherefore, we have the probability of two events happens together,\nPfE1\\E2g=PfE1g+PfE2g\u00001\u00152\n3+1\n2\u00001>0:\nThis completes the proof.\nAppendix B. Proof of Theorem 5.2. Before we provide the upper bound for the\nRademacher complexity of two-layer networks, we \frst need the following two lemmas.\nLemma B.1 ( Lemma 26.11 of [39] ).LetS= (x1;:::;xn)benvectors in Rd. Then the\nRademacher complexity of H1=fx7!u\u0001xjkuk1\u00141ghas the following upper bound,\n^Rn(H1)\u0014max\nikxik1r\n2ln(2d)\nn\nThe above lemma characterizes the Rademacher complexity of a linear predictor with\n`1norm bounded by 1. To handle the in\ruence of nonlinear activation function, we\nneed the following contraction lemma.\nLemma B.2 ( Lemma 26.9 of [39] ). Let\u001ei:R7!Rbe a\u001a\u0000Lipschitz function,\ni.e. for all \u000b;\f2Rwe havej\u001ei(\u000b)\u0000\u001ei(\f)j\u0014\u001aj\u000b\u0000\fj. For anya2Rn, let\u001e(a) =\n(\u001e1(a1);:::;\u001en(an)), then we have\n^Rn(\u001e\u000eH)\u0014\u001a^Rn(H)\nWe are now ready to estimate the Rademacher complexity of two-layer networks.\nLemma B.3. LetFQ=ffm(x;\u0012)jk\u0012kP\u0014Qgbe the set of two-layer networks with path\nnorm bounded by Q, then we have\n^Rn(FQ)\u00142Qr\n2ln(2d)\nn\n16 A PRIORI ESTIMATES FOR TWO-LAYER NEURAL NETWORKS\nProof . To simplify the proof, we let ck= 0, otherwise we can de\fne wk= (wT\nk;ck)T\nandx= (xT;1)T.\nn^Rn(FQ) =E\u0018\u0002\nsup\nk\u0012kP\u0014QnX\ni=1\u0018imX\nk=1akkwkk1\u001b( ^wT\nkxi)\u0003\n\u0014E\u0018\u0002\nsup\nk\u0012kP\u0014Q;kukk1=1nX\ni=1\u0018imX\nk=1akkwkk1\u001b(uT\nkxi)\u0003\n=E\u0018\u0002\nsup\nk\u0012kP\u0014Q;kukk1=1mX\nk=1akkwkk1nX\ni=1\u0018i\u001b(uT\nkxi)\u0003\n\u0014E\u0018\u0002\nsup\nk\u0012kP\u0014QmX\nk=1jakkwkk1jsup\nkuk1=1jnX\ni=1\u0018i\u001b(uTxi)j\u0003\n\u0014QE\u0018\u0002\nsup\nkuk1=1jnX\ni=1\u0018i\u001b(uTxi)j\u0003\n\u0014QE\u0018\u0002\nsup\nkuk1\u00141jnX\ni=1\u0018i\u001b(uTxi)j\u0003\nDue to the symmetry, we have that\nE\u0018\u0002\nsup\nkuk1\u00141jnX\ni=1\u0018i\u001b(uTxi)j\u0003\n\u0014E\u0018\u0002\nsup\nkuk1\u00141nX\ni=1\u0018i\u001b(uTxi)+ sup\nkuk1\u00141nX\ni=1\u0000\u0018i\u001b(uTxi)\u0003\n= 2E\u0018\u0002\nsup\nkuk1\u00141nX\ni=1\u0018i\u001b(uTxi)\u0003\nSince\u001bis Lipschitz continuous with Lipschitz constant 1, by applying Lemma B.2 and\nLemma B.1, we obtain\n^Rn(FQ)\u00142Qr\n2ln(2d)\nn:\nProposition B.1. 
Assume the loss function `(\u0001;y)isA\u0000Lipschitz continuous and\nbounded by B, then with probability at least 1\u0000\u000ewe have,\nsup\nk\u0012kP\u0014QjL(\u0012)\u0000^Ln(\u0012)j\u00144AQr\n2ln(2d)\nn+Br\n2ln(2=\u000e)\nn(B.1)\nProof . De\fneHQ=f`\u000efjf2FQg, then we have ^Rn(HQ)\u00142BQq\n2ln(2d)\nn;which\nfollows from Lemma B.2 and B.3. Then directly applying Theorem 5.1 yields the result.\nProof . (Proof of Theorem 5.2 ) Consider the decomposition F=[1\nl=1Fl, where\nFl=ffm(x;\u0012)jk\u0012kP\u0014lg. Let\u000el=\u000e\ncl2wherec=P1\nl=11\nl2. According to Theorem B.1, if\nwe \fxlin advance, then with probability at least 1 \u0000\u000elover the choice of S, we have\nsup\nk\u0012kP\u0014ljL(\u0012)\u0000^Ln(\u0012)j\u00144Alr\n2ln(2d)\nn+Br\n2ln(2=\u000el)\nn: (B.2)\n17\nSo the probability that there exists at least one lsuch that (B.2) fails is at mostP1\nl=1\u000el=\n\u000e. In other words, with probability at least 1 \u0000\u000e, the inequality (B.2) holds for all l.\nGiven an arbitrary set of parameters \u0012, denotel0= minfljk\u0012kP\u0014lg, thenl0\u0014\nk\u0012kP+1. Equation (B.2) implies that\njL(\u0012)\u0000^Ln(\u0012)j\u00144Al0r\n2ln(2d)\nn+Br\n2ln(2cl2\n0=\u000e)\nn\n\u00144A(k\u0012kP+1)r\n2ln(2d)\nn+Br\n2ln(2c(1+k\u0012kP)2=\u000e)\nn:\nAppendix C. Proof of Lemma 5.1.\nProof . LetZ=f(x;\u0012)\u0000f\u0003(x)\u0000\", then for any B\u00152+\u001c0, we have\njL(\u0012)\u0000LB(\u0012)j=E\u0002\n(Z2\u0000B2)1jZj\u0015B\u0003\n=Z1\n0PfZ2\u0000B2\u0015t2gdt2\u0014Z1\n0PfjZj\u0015p\nB2+t2gdt2\n\u0014Z1\n0Pfj\"j\u0015p\nB2+t2\u00002gdt2\n=c0Z1\nBe\u0000s2\n2\u001b2ds2= 2c0\u001b2e\u0000B2=2\u001b2\nSinceBn\u0015\u001b2lnn, we have 2 c0\u001b2e\u0000B2\nn\n2\u001b2\u00142c0\u001b2n\u00001=2. We thus complete the proof.\nAcknowledgement: The work presented here is supported in part by a gift to\nPrinceton University from iFlytek and the ONR grant N00014-13-1-0338.\nREFERENCES\n[1] Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameter-\nized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918 , 2018. 2.2,\n7\n[2] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American mathematical\nsociety , 68(3):337{404, 1950. 3.1\n[3] Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis\nof optimization and generalization for overparameterized two-layer neural networks. arXiv\npreprint arXiv:1901.08584 , 2019. 2.2\n[4] Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for\ndeep nets via a compression approach. In Proceedings of the 35th International Conference\non Machine Learning , volume 80, pages 254{263. PMLR, Jul 2018. 1, 6.1\n[5] Andrew R. Barron. Universal approximation bounds for superpositions of a sigmoidal function.\nIEEE Transactions on Information theory , 39(3):930{945, 1993. 3.1, 3.2\n[6] Andrew R. Barron. Approximation and estimation bounds for arti\fcial neural networks. Machine\nlearning , 14(1):115{133, 1994. 2.1\n[7] Peter L. Bartlett, Dylan J. Foster, and Matus J. Telgarsky. Spectrally-normalized margin bounds\nfor neural networks. In Advances in Neural Information Processing Systems 30 , pages 6240{\n6249, 2017. 1\n[8] Leo Breiman. Hinging hyperplanes for regression, classi\fcation, and function approximation.\nIEEE Transactions on Information Theory , 39(3):999{1013, 1993. 3.1, 3.2\n[9] Alon Brutzkus, Amir Globerson, Eran Malach, and Shai Shalev-Shwartz. 
Sgd learns over-\nparameterized networks that provably generalize on linearly separable data. In International\nConference on Learning Representations , 2018. 2.2, 7\n[10] Yuan Cao and Quanquan Gu. A generalization theory of gradient descent for learning over-\nparameterized deep relu networks. arXiv preprint arXiv:1902.01384 , 2019. 2.2\n18 A PRIORI ESTIMATES FOR TWO-LAYER NEURAL NETWORKS\n[11] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algo-\nrithm. Foundations of Computational Mathematics , 7(3):331{368, 2007. 4.1\n[12] L\u0013 ena \u0010c Chizat and Francis Bach. On the global convergence of gradient descent for over-\nparameterized models using optimal transport. In Advances in Neural Information Processing\nSystems 31 , pages 3040{3050. 2018. 2\n[13] Philippe G. Ciarlet. The \fnite element method for elliptic problems. Classics in applied mathe-\nmatics , 40:1{511, 2002. 1\n[14] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of\ncontrol, signals and systems , 2(4):303{314, 1989. 3.1\n[15] Amit Daniely. SGD learns the conjugate kernel class of the network. In Advances in Neural\nInformation Processing Systems , pages 2422{2430, 2017. 2.2, 7\n[16] Luc Devroye, L\u0013 aszl\u0013 o Gy or\f, and G\u0013 abor Lugosi. A probabilistic theory of pattern recognition ,\nvolume 31. Springer Science & Business Media, 2013. 4.3\n[17] Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes\nover-parameterized neural networks. In International Conference on Learning Representa-\ntions , 2019. 2\n[18] Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds\nfor deep (stochastic) neural networks with many more parameters than training data. In\nProceedings of the Thirty-Third Conference on Uncertainty in Arti\fcial Intelligence, UAI ,\n2017. 1, 6.1, 6.1\n[19] Weinan E, Chao Ma, and Qingcan Wang. A priori estimates of the population risk for residual\nnetworks. arXiv preprint arXiv:1903.02154 , 2019. 1, 7\n[20] Weinan E, Chao Ma, Qingcan Wang, and Lei Wu. Analysis of the gradient descent algorithm for\na deep neural network model with skip-connections. arXiv preprint arXiv:1904.05263 , 2019.\n1, 2.2, 7\n[21] Weinan E, Chao Ma, and Lei Wu. Barron spaces and compositional function spaces for neural\nnetwork models. arXiv preprint arXiv:1906.08039 , 2019. 1, 3.1\n[22] Weinan E, Chao Ma, and Lei Wu. A comparative analysis of the optimization and generalization\nproperty of two-layer neural network and random feature models under gradient descent\ndynamics. arXiv preprint arXiv:1904.04326 , 2019. 1, 2.2, 7\n[23] C. Daniel Freeman and Joan Bruna. Topology and geometry of half-recti\fed network optimization.\nIn International Conference on Learning Representations (ICLR) , 2017. 2\n[24] Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of\nneural networks. In Proceedings of the 31st Conference On Learning Theory , volume 75,\npages 297{299. PMLR, 2018. 1\n[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into recti\fers: Surpassing\nhuman-level performance on imagenet classi\fcation. In Proceedings of the IEEE international\nconference on computer vision , pages 1026{1034, 2015. 3\n[26] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International\nConference on Learning Representations , 2015. 6\n[27] Jason M Klusowski and Andrew R Barron. 
Risk bounds for high-dimensional ridge function\ncombinations including neural networks. arXiv preprint arXiv:1607.01434 , 2016. 2.1, 3.1, 3.1\n[28] Jason M Klusowski and Andrew R Barron. Minimax lower bounds for ridge combinations includ-\ning neural nets. In Information Theory (ISIT), 2017 IEEE International Symposium on ,\npages 1376{1380. IEEE, 2017. 3.1, 4\n[29] Jason M Klusowski and Andrew R Barron. Approximation by combinations of relu and squared\nrelu ridge functions with l1 and l0 controls. 2018. 3.1\n[30] Alex Krizhevsky, Ilya Sutskever, and Geo\u000brey E. Hinton. Imagenet classi\fcation with deep con-\nvolutional neural networks. In Advances in neural information processing systems , pages\n1097{1105, 2012. 3\n[31] Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean \feld view of the landscape of\ntwo-layers neural networks. In Proceedings of the National Academy of Sciences , volume 115,\npages E7665{E7671, 2018. 2\n[32] Behnam Neyshabur, Srinadh Bhojanapalli, David Mcallester, and Nati Srebro. Exploring gener-\nalization in deep learning. In Advances in Neural Information Processing Systems 30 , pages\n5949{5958, 2017. 1\n[33] Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-bayesian approach to\nspectrally-normalized margin bounds for neural networks. In International Conference on\nLearning Representations , 2018. 1\n[34] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. The\nrole of over-parametrization in generalization of neural networks. In International Conference\non Learning Representations , 2019. 1, 2.2, 6.1, 6.1\n19\n[35] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural\nnetworks. In Conference on Learning Theory , pages 1376{1401, 2015. 1, 3, 5.1\n[36] Quynh Nguyen, Mahesh Chandra Mukkamala, and Matthias Hein. On the loss landscape of\na class of deep neural networks with no bad local valleys. In International Conference on\nLearning Representations , 2019. 2\n[37] Ali Rahimi and Benjamin Recht. Uniform approximation of functions with random bases. In\n2008 46th Annual Allerton Conference on Communication, Control, and Computing , pages\n555{561. IEEE, 2008. 3.1\n[38] Itay Safran and Ohad Shamir. On the quality of the initial basin in overspeci\fed neural networks.\nInInternational Conference on Machine Learning , pages 774{782, 2016. 2\n[39] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to\nalgorithms . Cambridge university press, 2014. 5.1, B.1, B.2\n[40] A. N. Tikhonov and Vasilii IAkovlevich Arsenin. Solutions of ill-posed problems . New York:\nWinston, 1977. 7\n[41] Colin Wei, Jason D. Lee, Qiang Liu, and Tengyu Ma. On the margin theory of feedforward neural\nnetworks. arXiv preprint arXiv:1810.05369 , 2018. 2.1\n[42] Yuhong Yang and Andrew Barron. Information-theoretic determination of minimax rates of\nconvergence. Annals of Statistics , pages 1564{1599, 1999. 4\n[43] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding\ndeep learning requires rethinking generalization. In International Conference on Learning\nRepresentations , 2017. 1",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "EovpfshUXqG",
"year": null,
"venue": null,
"pdf_link": "https://arxiv.org/pdf/1906.08039.pdf",
"forum_link": "https://openreview.net/forum?id=EovpfshUXqG",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Barron Spaces and the Compositional Function Spaces for Neural Network Models",
"authors": [
"Weinan E",
"Chao Ma",
"Lei Wu"
],
"abstract": "One of the key issues in the analysis of machine learning models is to identify the appropriate function space for the model. This is the space of functions that the particular\nmachine learning model can approximate with good accuracy, endowed with a natural norm\nassociated with the approximation process. In this paper, we address this issue for two representative neural network models: the two-layer networks and the residual neural networks.\nWe define Barron space and show that it is the right space for two-layer neural network\nmodels in the sense that optimal direct and inverse approximation theorems hold for functions in the Barron space. For residual neural network models, we construct the so-called\ncompositional function space, and prove direct and inverse approximation theorems for this\nspace. In addition, we show that the Rademacher complexity has the optimal upper bounds\nfor these spaces.",
"keywords": [],
"raw_extracted_content": "arXiv:1906.08039v2 [cs.LG] 27 Mar 2021The Barron Space and the Flow-induced Function Spaces\nfor Neural Network Models\nWeinan E∗1,2,3, Chao Ma†2, and Lei Wu‡2\n1Department of Mathematics, Princeton University\n2Program in Applied and Computational Mathematics, Princeton Unive rsity\n3Beijing Institute of Big Data Research\nMarch 30, 2021\nAbstract\nOne of the key issues in the analysis of machine learning models is to iden tify the appro-\npriate function space and norm for the model. This is the set of func tions endowed with a\nquantity which can control the approximation and estimation error s by a particular machine\nlearning model. In this paper, we address this issue for two represe ntative neural network\nmodels: the two-layer networks and the residual neural network s. We define the Barron\nspace and show that it is the right space for two-layer neural netw ork models in the sense\nthat optimal direct and inverse approximation theorems hold for fu nctions in the Barron\nspace. For residual neural network models, we construct the so -called flow-induced function\nspace, and prove direct and inverse approximation theorems for t his space. In addition, we\nshow that the Rademacher complexity for bounded sets under the se norms has the optimal\nupper bounds.\nKeywords: Function space, Neural network, Approximation, Rademacher co mplexity\nMSC:65D15, 68T05, 46B99\n1 Introduction\nThe task of supervised learning is to approximate a function using a given set of data. This type\nof problem has been the subject of classical numerical analy sis and approximation theory for a\nlong time. The theory of splines and the theory of finite eleme nt methods are very successful\nexamples of such classical results [ 9,8], both are concerned with approximating functions using\npiecewise polynomials. In these theories, one starts from a function in a particular function\nspace, say a Sobolev or Besov space, and proceeds to derive op timal error estimates for this\nfunction. The optimal error estimates depend on the functio n norm, and the regularity encoded\nin the function space as well as the approximation scheme. Th ey are the most important pieces\n∗[email protected]\n†[email protected]\n‡[email protected]\n1\nof information for understanding the underlying approxima tion scheme. When discussing a\nparticular function space, the associated norm is as crucia l as the set of functions it contains.\nIdentifying the right function space that one should use is t he most crucial step in this\nanalysis. Sobolev/Besov type spaces are good function spac es for these classical theories since:\n1. Onecan prove direct andinverse approximation theorems f or thesespaces. Roughly speak-\ning, a function can be approximated by piecewise polynomial s with certain convergence\nrate if and only if the function is in certain Sobolev/Besov s pace.\n2. The functions we are interested in, e.g. solutions of part ial differential equations (PDEs),\nare in these spaces. This is at the heart of the regularity the ory for PDEs.\nHowever, these spaces are tied with the piecewise polynomia l basis used in the approximation\nscheme. These approximation schemes suffer from the curse of d imensionality, i.e. 
the number\nof parameters needed to achieve certain level of accuracy gr ows exponentially with dimension.\nConsequently, Sobolev/Besov type spaces are not the right f unction spaces for studying machine\nlearning models that can potentially address the curse of di mensionality problem.\nAnother inspiration for this paper comes from kernel method s. It is well-known that the\nright function space associated with a kernel method is the c orresponding reproducing kernel\nHilbert space (RKHS) [ 1]. RKHS and kernel methods provide one of the firstexamples fo r which\ndimension-independent error estimates can be established .\nThe main purpose of this paper is to construct and identify th e analog of these spaces for\ntwo-layer and residual neural network models. For two-laye r neural network models, we show\nthat the right function space is the so-called “Barron space ”. Roughly speaking, a function\nbelongs to the Barron space if and only if it can be approximat ed by “well-behaved” two-layer\nneural networks, and the approximation error is controlled by the norm of the Barron space.\nThe analog of the Barron space for deep residual neural netwo rks is the “flow-induced function\nspace” that we construct in the second part of this paper. Wit h the “flow-induced norms”, we\nwill establish direct and inverse approximation theorems f or these spaces as well as the optimal\nRademacher complexity estimates.\nOne important difference between approximation theory in low and high dimensions is that\nin high dimensions, the best error rate (or order of converge nce) that one can hope for is the\nMonte Carlo error rate. Therefore using the error rate as an i ndicator to distinguish the quality\nof different approximation schemes or machine learning model s is not a good option. The\nfunction spaces or the associated norms seem to be a better al ternative. We take the viewpoint\nthat a function space is defined by its approximation propert y using a particular approximation\nscheme. In this sense, Sobolev/Besov spaces are the result w hen we consider approximation by\npiecewise polynomials or wavelets. Barron space is the anal og when we consider approximation\nby two-layer neural networks and the flow-induced function s pace is the analog when we consider\napproximation by deep residual networks. The norms that are associated with these new spaces\nmay seem a bit unusual at a first sight, but they arise naturall y in the approximation process,\nas we will see from the direct and inverse approximation theo rems presented below.\nIt should be stressed that the terminologies “space” and “no rm” in this paper are used\nin a loose way. For example, flow-induced norms are a family of quantities that control the\napproximation error. We do not take effort to investigate whet her it is a real norm.\nAlthough this work was motivated by the problem of understan ding approximation theory\nfor neural network models in machine learning, we believe th at it should have an implication for\nhigh dimensional analysis in general. One natural follow-u p question is whether one can show\n2\nthat solutions to high dimensional partial differential equa tions (PDE) belong to the function\nspaces introduced here. At least for linear parabolic PDEs, the work in [ 14] suggests that some\nclose analog of the flow-induced spaces should serve the purp ose.\nIn Section 2, we introduce the Barron space for two-layer neural network s. 
Although not\nall the results in this section are new (some have appeared in various forms in [ 15,2,11]), they\nare useful for illustrating our angle of attack and they are a lso useful for the work in Section 3\nwhere we introduce the flow-induced function space for resid ual networks.\nNotations : LetSd={w∈Rd+1:/ba∇dblw/ba∇dbl1= 1}. We define ˆw=w\n/bardblw/bardbl1ifw/ne}ationslash= 0 otherwise\nˆw= 0. For simplicity, we fix the domain of interest to be X= [0,1]d. We denote by x∈X\nthe input variable, and let ˜x= (xT,1)T. We sometimes abuse notation and use f(x) (or\nsome other analogs) to denote the function fin order to signify the independent variable under\nconsideration. We use /ba∇dblf/ba∇dblto denote the L2norm of function fdefined by\n/ba∇dblf/ba∇dbl=/parenleftbigg/integraldisplay\nX|f(x)|2µ(dx)/parenrightbigg1\n2\n,\nwhereµ(x) is a probability distribution on X. We do not specify µin this paper.\nOne important point for working in high dimension is the depe ndence of the constants on\nthe dimension. We will use Cto denote constants that are independent of the dimension.\nIn Section 3, the absolute values and powers of matrices and vectors ( |·|and (·)p) are under-\nstood as beingelement-wise. Themultiplication of two matr ices is regular matrix multiplication.\n2 The Barron space\nIn this section we define the Barron space and study its proper ties. The proofs of theorems are\npostponed to the end of the section.\n2.1 Definition of the Barron space\nWe will consider functions f:X/ma√sto→Rthat admit the following representation\nf(x) =/integraldisplay\nΩaσ(bTx+c)ρ(da,db,dc),x∈X (1)\nwhere Ω = R1×Rd×R1,ρis a probability distribution on (Ω, Σ Ω), with Σ Ωbeing a Borel\nσ-algebra on Ω, and σ(x) = max{x,0}is the ReLU activation function. This representation\ncan be considered as the continuum analog of two-layer neura l networks:\nfm(x;Θ) :=1\nmm/summationdisplay\nj=1ajσ(bT\njx+cj),\nwhere Θ = ( a1,b1,c1,...,am,bm,cm) denotes all the parameters. It should be noted that in\ngeneral, the ρ’s for which ( 1) holds are not unique.\nTo get some intuition about the representation ( 1), we write the Fourier representation of a\nfunction fas:\nf(x) =/integraldisplay\nRdˆf(ω)cos(ωTx)dω=/integraldisplay\nR1×Rdacos(ωTx)ρ(da,dω), (2)\n3\nρ(da,dω) =δ(a−ˆf(ω))dadω.\nThis can be thought of as the analog of ( 1) for the case when σ(z) = cos(z) except for the fact\nthat the ρdefined in ( 2) is not normalizable.\nFor functions that admit the representation ( 1), we define its Barron norm:\n/ba∇dblf/ba∇dblBp= inf\nρ/parenleftbig\nEρ[|a|p(/ba∇dblb/ba∇dbl1+|c|)p]/parenrightbig1/p,1≤p≤+∞. (3)\nHere the infimum is taken over all ρfor which ( 1) holds for all x∈X, and when p=∞the\nnorm (3) becomes\ninf\nρmax\n(a,b,c)∈supp(ρ)|a|(/ba∇dblb/ba∇dbl1+|c|).\nBarron spacesBpare defined as the set of continuous functions that can be repr esented by ( 1)\nwith finite Barron norm. We name these spaces after Barron to h onor his contribution to the\nmathematical analysis of two-layer neural networks, in par ticular the work in [ 4,5,15].\nRemark 1. It should be noted that the Barron norm defined here is different from the spectral\nnorm used in Barron’s original papers (see for example [ 4]).\nAs a consequence of the H¨ older’s inequality, we trivially h ave\nB∞⊂···B 2⊂B1.\nHowever, the opposite is also true for the ReLU activation fu nction we are considering.\nProposition 1. 
For any f ∈ B_1, we have f ∈ B_∞ and

    ‖f‖_{B_1} = ‖f‖_{B_∞}.

As a consequence, we have that for any 1 ≤ p ≤ ∞, B_p = B_∞ and ‖f‖_{B_p} = ‖f‖_{B_∞}. Hence, we can use B and ‖·‖_B to denote the Barron space and Barron norm.

A natural question is: What kind of functions are in the Barron space? The following is a restatement of an important result proved in [15]. It is an extension of the Fourier analysis of two-layer sigmoidal neural networks in Barron's seminal work [4].

Proposition 2 (Theorem 6 in [15]). Let f ∈ C(X), the space of continuous functions on X, and assume that f satisfies:

    γ(f) := inf_{f̂} ∫_{R^d} ‖ω‖_1^2 |f̂(ω)| dω < ∞,

where f̂ is the Fourier transform of an extension of f to R^d. Then f admits a representation as in (1). Moreover,

    ‖f‖_B ≤ 2γ(f) + 2‖∇f(0)‖_1 + 2|f(0)|.        (4)

Remark 2. In Section 9 of [4], examples of functions with bounded γ(f) are given (e.g. Gaussian, positive definite functions, etc.). [4] used the norm ∫_{R^d} ‖ω‖ |f̂(ω)| dω instead of γ(f), but the analysis also shows that Gaussian and positive definite functions give rise to finite values of γ(f). By Proposition 2, these functions belong to the Barron space.

In addition, the Barron space is also closely related to a family of RKHS. Let w = (b, c). Due to the scaling invariance of σ(·), we can assume w ∈ S^d. Then (1) can be written as

    f(x) = ∫_{S^d} a σ(w^T x̃) ρ(da, dw) = ∫_{S^d} a(w) σ(w^T x̃) π(dw),        (5)
    a(w) = ( ∫_R a ρ(a, w) da ) / π(w),   π(w) = ∫_R ρ(a, w) da.

Moreover,

    ‖f‖_{B_2}^2 = inf_π E_π[|a(w)|^2],

where the infimum is taken over all π that satisfy (5).

Given a fixed probability distribution π, we can define a kernel:

    k_π(x, x') = E_{w∼π}[σ(w^T x̃) σ(w^T x̃')].

Let H_{k_π} denote the RKHS induced by k_π. Then we have the following proposition.

Proposition 3.

    B = ∪_{π∈P(S^d)} H_{k_π}.

2.2 Direct and inverse approximation theorems

With (1), approximating f by two-layer networks becomes a Monte Carlo integration problem.

Theorem 4. For any f ∈ B and m > 0, there exists a two-layer neural network f_m(·; Θ), f_m(x; Θ) = (1/m) Σ_{k=1}^m a_k σ(b_k^T x + c_k) (Θ denotes the parameters {(a_k, b_k, c_k), k ∈ [m]} in the neural network), such that

    ‖f(·) − f_m(·; Θ)‖^2 ≤ 3‖f‖_B^2 / m.

Furthermore, we have

    ‖Θ‖_P := (1/m) Σ_{j=1}^m |a_j| (‖b_j‖_1 + |c_j|) ≤ 2‖f‖_B.

Remark 3. We call ‖Θ‖_P the path norm of the two-layer neural network. This is the analog of the Barron norm of functions in B. Hence, when studying approximation properties, it is natural to study two-layer neural networks with bounded path norm.

One can also prove an inverse approximation theorem. To state this result, we define:

    N_Q = { (1/m) Σ_{k=1}^m a_k σ(b_k^T x + c_k) : (1/m) Σ_{k=1}^m |a_k| (‖b_k‖_1 + |c_k|) ≤ Q, m ∈ N_+ }.

Theorem 5. Let f^* be a continuous function on X. Assume there exist a constant Q and a sequence of functions (f_m) ⊂ N_Q such that

    f_m(x) → f^*(x)

for all x ∈ X. Then there exists a probability distribution ρ^* on (Ω, Σ_Ω) such that

    f^*(x) = ∫ a σ(b^T x + c) ρ^*(da, db, dc)

for all x ∈ X. Furthermore, we have f^* ∈ B with

    ‖f^*‖_B ≤ Q.

2.3 Estimates of the Rademacher complexity

Next, we show that the Barron spaces we defined have low complexity. (A small numerical illustration is included below, before the formal statements.)
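As a quick illustrative aside (not part of the original text), the quantity about to be bounded, the Rademacher complexity of a Barron ball F_Q = {f ∈ B : ‖f‖_B ≤ Q}, can be estimated numerically and compared against the 2Q√(2 ln(2d)/n) bound stated in Theorem 6 below. The sketch restricts the supremum to single ReLU neurons a σ(w^T x̃) with |a| ‖w‖_1 ≤ Q, so it only gives a lower estimate; the dimension, sample size, Q, and candidate counts are arbitrary illustrative choices.

```python
# Illustrative sketch (not part of the paper): a crude Monte Carlo estimate of the
# empirical Rademacher complexity of the Barron ball F_Q = {f in B : ||f||_B <= Q},
# compared with the upper bound 2 Q sqrt(2 ln(2d) / n) stated in Theorem 6 below.
# The supremum over F_Q is only lower-bounded here, by searching over single ReLU
# neurons a * sigma(w^T x~) with |a| * ||w||_1 = Q (extreme points of the ball).
# d, n, Q and the candidate counts are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
d, n, Q = 20, 500, 1.0
X = rng.uniform(0.0, 1.0, size=(n, d))               # X = [0, 1]^d as in the text
X_tilde = np.hstack([X, np.ones((n, 1))])            # x~ = (x^T, 1)^T

def empirical_rademacher(n_sign_draws=200, n_candidates=2000):
    W = rng.normal(size=(n_candidates, d + 1))
    W /= np.abs(W).sum(axis=1, keepdims=True)        # candidates on the l1 sphere
    feats = np.maximum(X_tilde @ W.T, 0.0)           # sigma(w^T x~), shape (n, K)
    total = 0.0
    for _ in range(n_sign_draws):
        xi = rng.choice([-1.0, 1.0], size=n)
        # best candidate neuron for this sign vector; |a| = Q, sign chosen optimally
        total += Q * np.max(np.abs(xi @ feats))
    return total / (n_sign_draws * n)

print("empirical lower estimate:", empirical_rademacher())
print("Theorem 6 upper bound   :", 2 * Q * np.sqrt(2 * np.log(2 * d) / n))
```

Since the supremum over the full ball is not computed exactly, the empirical value is only a lower bound on Rad_n(F_Q); it is expected to sit below the theoretical bound.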
Weshowthisbybounding\nthe Rademacher complexity of bounded sets in the Barron spac es.\nDefinition 1 (Rademacher complexity) .Given a set of functions Fandndata samples S=\n{x1,x2,···,xn}, the Rademacher complexity of Fwith respect to Sis defined as\nRadn(F) =1\nnEξsup\nf∈Fn/summationdisplay\ni=1ξif(xi),\nwhereξ= (ξ1,ξ2,···,ξn) is a vector of ni.i.d.random variables that satisfy P(ξ= 1) =P(ξ=\n−1) =1\n2.\nThe following theorem gives an estimate of the Rademacher co mplexity of the Barron space.\nSimilar results can be found in [ 2]. We include the proof in the next section for completeness.\nTheorem 6. LetFQ={f∈B:/ba∇dblf/ba∇dblB≤Q}. Then we have\nRadn(FQ)≤2Q/radicalbigg\n2ln(2d)\nn\nFrom Theorem 8 in [ 6], we see that the above result implies that functions in the B arron\nspaces can be learned efficiently .\n2.4 Barron space for Non-ReLU functions and the space F1\nThe definition of the Barron space and Barron norm can be exten ded to representations ( 1) with\nσ(·) being a general activation function. Specifically, for any function fwith representation\nf(x) =/integraldisplay\nΩa˜σ(bTx+c)ρ(da,db,dc),x∈X, (6)\nwhere ˜σis an activation function not necessarily ReLU, we define the extended Barron norm\n(which is denoted by /ba∇dbl·/ba∇dbl˜Bp) as\n/ba∇dblf/ba∇dbl˜Bp: = inf\nρ(Eρ[|a|p(/ba∇dblb/ba∇dbl1+|c|+1)p])1/p, (7)\nwherep∈[1,∞], and the infimum is taken over all ρfor which ( 6) holds. The extended Barron\nspace˜Bpis definedas theset of functionswithfinite ˜Bpnorm. Inthis case, sincethehomogeneity\nproperty does not hold for the activation function, ˜Bpspaces with different pare not equal. The\ndirect approximation theorem and Rademacher complexity co ntrol can be proven for ˜Bpas long\nas ˜σsatisfies /integraldisplay\nR|˜σ′′(x)|(|x|+1)dx <∞.\nSee [18] for more details.\nWe deal with general activation functions by approximating them using two-layer ReLU\nneural networks, and the “+1” term in ( 7) appears naturally during the approximation process.\nIt is worth mentioning that if ˜ σ=ReLU the ˜Bpnorms become equivalent with the Barron norm\n/ba∇dbl·/ba∇dblB, because of the infimum and the homogeneity property.\n6\nIn [2], a similar function space F1is defined by using the variation norm [ 16,19]. In [2],\nsigned measures are used to represent the function as follow s,\nf(x) =/integraldisplay\nVσ(bTx+c)dµ(b,c), (8)\nwhereVis the support of the signed measure µ. LetSfdenote the set of signed measures such\nthat (8) holds. TheF1norm of fis given by\n/ba∇dblf/ba∇dblF1:= inf\nµ∈Sf|µ|(V),\nwhere|µ|(V) denotes the total variation of µ. The estimate of Rademacher complexity of F1is\nprovided for the ReLU activation function.\nFor ReLU activation function, F1is equivalent with B, and the norms are equal, too [ 12].\nHowever, forageneral activation function(e.g. tanh, sigm oid), theBarron spaceis differentfrom\nF1.F1typically requires ( b,c) to lie in a compact set, which is generally not true. With ( b,c)\nbeing in a compact set, the variation norm only considers aand treat features with any ( b,c)\nequivalently. Hence, a very simple feature will have the sam e variation norm as a complicated\nfeature, whichleads tolooseboundsforsimplefunctions. O nthecontrary, the ˜Bpnormsconsider\n(a,b,c) together, and features with different ( b,c) make different contributions to the norm.\n2.5 Proofs\n2.5.1 Proof of Proposition 1\nTakef∈B1. 
For any ε >0, there exists a probability measure ρthat satisfies\nf(x) =/integraldisplay\nΩaσ(bTx+c)ρ(da,db,dc),∀x∈X,\nand\nEρ[|a|(/ba∇dblb/ba∇dbl1+|c|)]</ba∇dblf/ba∇dblB1+ε.\nLet Λ ={(b,c) :/ba∇dblb/ba∇dbl1+|c|= 1}, and consider two measures ρ+andρ−on Λ defined by\nρ+(A) =/integraldisplay\n{(a,b,c): (ˆb,ˆc)∈A,a>0}|a|(/ba∇dblb/ba∇dbl1+|c|)ρ(da,db,dc),\nρ−(A) =/integraldisplay\n{(a,b,c): (ˆb,ˆc)∈A,a<0}|a|(/ba∇dblb/ba∇dbl1+|c|)ρ(da,db,dc),\nfor any Borel set A⊂Λ, where\nˆb=b\n/ba∇dblb/ba∇dbl1+|c|,ˆc=c\n/ba∇dblb/ba∇dbl1+|c|.\nObviously ρ+(Λ)+ρ−(Λ) =Eρ[|a|(/ba∇dblb/ba∇dbl1+|c|)], and\nf(x) =/integraldisplay\nΛσ(bTx+c)ρ+(db,dc)−/integraldisplay\nΛσ(bTx+c)ρ−(db,dc).\nNext, we define extensions of ρ+andρ−to{−1,1}×Λ by\n˜ρ+(A′) =ρ+({(b,c) : (1,b,c)∈A′}),\n˜ρ−(A′) =ρ−({(b,c) : (−1,b,c)∈A′}),\n7\nfor any Borel sets A′⊂{−1,1}×Λ, and let ˜ ρ= ˜ρ++ ˜ρ−. Then we have ˜ ρ({−1,1}×Λ) =\nEρ[|a|(/ba∇dblb/ba∇dbl1+|c|)] and\nf(x) =/integraldisplay\n{−1,1}×Λaσ(bTx+c)˜ρ(da,db,dc).\nTherefore, we can normalize ˜ ρto be a probability measure, and\n/ba∇dblf/ba∇dblB∞≤˜ρ({−1,1}×Λ)≤/ba∇dblf/ba∇dblB1+ε.\nTaking the limit as ε→0, we have/ba∇dblf/ba∇dblB∞≤/ba∇dblf/ba∇dblB1. Since/ba∇dblf/ba∇dblB1≤/ba∇dblf/ba∇dblB∞from H¨ older’s\ninequality, we conclude that /ba∇dblf/ba∇dblB1=/ba∇dblf/ba∇dblB∞.\n2.5.2 Proof of Theorem 3\nAccording to [ 21], we have the following characterization of Hkπ:\nHkπ=/braceleftbigg/integraldisplay\nSda(w)σ(wT˜x)dπ(w) :Eπ[|a(w)|2]<∞/bracerightbigg\n.\nIn addition, for any h∈Hkπ,/ba∇dblh/ba∇dbl2\nHkπ=Eπ[|a(w)|2]. It is obvious that for any π∈P(Sd),\nHkπ⊂ B2, which implies that ∪πHkπ⊂ B2. Conversely, for any f∈ B2, there exists a\nprobability distribution ˜ πthat satisfies\nf(x) =/integraldisplay\nSda(w)σ(wT˜x)˜π(dw)∀x∈X,\nandE˜π[|a(w)|2]≤2/ba∇dblf/ba∇dbl2\nB2<∞. Hence we have f∈Hk˜π, which impliesB2⊂∪πHkπ. Therefore\nB2=∪πHkπ. Together with Proposition 1, we complete the proof.\n2.5.3 Proof of Theorem 4\nLetεbe a positive number such that ε <1/5. Letρbe a probability distribution such that\nf(x) =Eρ[aσ(bTx+c)] andEρ[|a|2(/ba∇dblb/ba∇dbl1+|c|)2]≤(1 +ε)/ba∇dblf/ba∇dbl2\nB2. Letφ(x;θ) =aσ(bTx+c)\nwithθ= (a,b,c)∼ρ. Then we have Eθ∼ρ[φ(x;θ)] =f(x). Let Θ ={θj}m\nj=1be i.i.d. random\nvariables drawn from ρ(·), and consider the following empirical average,\nˆfm(x;Θ) =1\nmm/summationdisplay\nj=1φ(x;θj).\n8\nLetE(Θ) =Ex[|ˆfm(x;Θ)−f(x)|2] be the approximation error. Then we have\nEΘ[E(Θ)] =EΘEx|ˆfm(x;Θ)−f(x)|2\n=ExEΘ|1\nmm/summationdisplay\nj=1φ(x;θj)−f(x)|2\n=1\nm2Exm/summationdisplay\nj,k=1Eθj,θk[(φ(x;θj)−f(x))(φ(x;θk)−f(x))]\n≤1\nm2m/summationdisplay\nj=1ExEθj[(φ(x;θj)−f(x))2]\n≤1\nmExEθ∼ρ[φ2(x;θ)]\n≤(1+ε)/ba∇dblf/ba∇dbl2\nB2\nm.\nIn addition,\nEΘ[/ba∇dblΘ/ba∇dblP] =1\nmm/summationdisplay\nj=1EΘ[/ba∇dblaj/ba∇dbl(/ba∇dblbj/ba∇dbl1+|cj|)]≤(1+ε)/ba∇dblf/ba∇dblB2.\nDefine the event E1={E(Θ)<3/bardblf/bardbl2\nB2\nm}, andE2={/ba∇dblΘ/ba∇dblP<2/ba∇dblf/ba∇dblB2}. By Markov inequality,\nwe have\nP{E1}= 1−P{Ec\n1}≥1−EΘ[E(Θ)]\n3/ba∇dblf/ba∇dbl2\nB2/m≥2−ε\n3\nP{E2}= 1−P{Ec\n2}≥1−EΘ[/ba∇dblΘ/ba∇dblP]\n2/ba∇dblf/ba∇dblB2≥1−ε\n2.\nTherefore we have\nP{E1∩E2}=P{E1}+P{E2}−1≥2−ε\n3+1−ε\n2−1 =1−5ε\n6>0.\nChoose any Θ in E1∩E2. 
The two-layer neural network model defined by this Θ satisfie s both\nrequirements in the theorem.\n2.5.4 Proof of Theorem 5\nWithout loss of generality, we assume that /ba∇dblb/ba∇dbl1+|c|= 1, otherwise due to the scaling invariance\nofσ(·) we can redefine the parameters as follows,\na←a(/ba∇dblb/ba∇dbl1+|c|),b←b\n/ba∇dblb/ba∇dbl1+|c|, c←c\n/ba∇dblb/ba∇dbl1+|c|.\nLet Θm={(a(m)\nk,b(m)\nk,c(m)\nk)}m\nk=1be the parameters in the two-layer neural network model fm\nand letA=/summationtextm\nk=1|ak|andαk=|ak|\nA. Then we can define a probability measure:\nρm=m/summationdisplay\nk=1αkδ/parenleftBigg\na−sign(a(m)\nk)A\nm/parenrightBigg\nδ(b−b(m)\nk)δ(c−c(m)\nk),\n9\nwhich satisfies\nfm(x;Θm) =/integraldisplay\naσ(bTx+c)ρm(da,db,dc).\nLet\nKQ={(a,b,c) :|a|≤Q,/ba∇dblb/ba∇dbl1+|c|≤1}.\nIt is obvious that supp( ρm)⊂KQfor allm. SinceKQis compact, the sequence of probabil-\nity measure ( ρm) is tight. By Prokhorov’s Theorem, there exists a subsequen ce (ρmk) and a\nprobability measure ρ∗such that ρmkconverges weakly to ρ∗.\nThe fact that supp( ρm)⊂KQimplies supp( ρ∗)⊂KQ. Therefore, we have\n/ba∇dblf∗/ba∇dblB=/ba∇dblf∗/ba∇dblB∞≤Q.\nFor anyx∈X,aσ(bTx+c) is continuous with respect to ( a,b,c) and bounded from above by\nQ. Sinceρ∗is the weak limit of ρmk, we have\nf∗(x) = lim\nk→∞/integraldisplay\naσ(bTx+c)dρmk=/integraldisplay\naσ(bTx+c)dρ∗(da,db,dc).\n2.5.5 Proof of Theorem 6\nLetw= (bT,c)Tand˜x= (xT,1)T. For any ε >0 andf∈B, letρε\nf(a,w) be a distribution\nsuch that f(x) =Eρε\nf[aσ(bTx+c)] andEρε\nf[|a|/ba∇dblw/ba∇dbl1]<(1+ε)/ba∇dblf/ba∇dblB. Then,\nnRadn(FQ) =Eξ[ sup\nf∈FQn/summationdisplay\ni=1ξiEρε\nf[aσ(wTxi)]]\n=Eξ[ sup\nf∈FQEρε\nf[n/summationdisplay\ni=1ξiaσ(wTxi)]]\n=Eξ[ sup\nf∈FQEρε\nf[|a|/ba∇dblw/ba∇dbl1|n/summationdisplay\ni=1ξiσ(ˆwTxi)|]]\n≤(1+ε)QEξ[ sup\n/bardblw/bardbl≤1|n/summationdisplay\ni=1ξiσ(wTxi)|]. (9)\nDue to the symmetry, we have\nEξ[ sup\n/bardblw/bardbl≤1|n/summationdisplay\ni=1ξiσ(wTxi)|]≤Eξ[ sup\n/bardblw/bardbl≤1n/summationdisplay\ni=1ξiσ(wTxi)]+Eξ[ sup\n/bardblw/bardbl≤1−n/summationdisplay\ni=1ξiσ(wTxi)]\n= 2Eξ[ sup\n/bardblw/bardbl≤1n/summationdisplay\ni=1ξiσ(wTxi)]\n≤2Eξ[ sup\n/bardblw/bardbl≤1n/summationdisplay\ni=1ξiwTxi], (10)\n10\nwhere the last inequality follows from the contraction prop erty of Rademacher complexity (see\nLemma 26.9 in [ 22]) and the fact that σ(·) is Lipschitz continuous with Lipschitz constant 1.\nApplying Lemma 26.11 in [ 22] and plugging ( 10) into (9), we obtain\nRadn(FQ)≤2(1+ε)Q/radicalbigg\n2ln(2d)\nn.\nTakingε→0, we complete the proof.\n3 Flow-induced function spaces\nIn this section, we carry out a similar program for residual n eural networks. Since the limit\nof these networks give rise to continuous in time flows, the na tural function spaces and norms\nassociated with the residual neural networks are also flow-b ased. For this reason we call them\nflow-induced spaces and flow-induced norms, respectively. S imilar to what was done in the last\nsection, we establish a natural connection between these fu nction spaces and residual neural\nnetworks, by proving direct and inverse approximation theo rems. We also prove a complexity\nbound for the flow-induced space.\nWe postpone all the proofs to the end of this section.\n3.1 The compositional law of large numbers\nWe consider residual neural networks defined by\nz0,L(x) =Vx,\nzl+1,L(x) =zl,L(x)+1\nLUlσ◦(Wlzl,L(x)),\nfL(x;Θ) = αTzL,L(x), (11)\nwherex∈Rdis the input, V∈RD×d,Wl∈Rm×D,Ul∈RD×m,α∈RDand we use Θ :=\n{V,U1,...,UL,Wl,...,WL,α}to denote all the parameters to be learned from data. 
Without\nloss of generality, we will fix Vto be\nV=/bracketleftbiggId×d\n0(D−d)×d/bracketrightbigg\n. (12)\nWe will fix Dandmthroughout this paper, and when there is no danger for confus ion we will\nomit Θ in the notation and use fL(x) to denote the residual network for simplicity.\nFor two layer neural networks, if the parameters {ak,bk,ck}are i.i.d sampled from a proba-\nbility distribution ρ, then we have\n1\nmm/summationdisplay\nk=1akσ(bT\nkx+ck)→/integraldisplay\naσ(bTx+c)ρ(da,db,dc),\nwhenm→∞as a consequence of the law of large numbers. To get some intui tion in the current\nsituation, we will first study a similar setting for residual networks in which UlandWlare i.i.d\nsampled from a probability distribution ρonRD×m×Rm×D. To this end, we will study the\nbehavior of zL,L(·) asL→∞. The sequence of mappings we obtain is the repeated composit ion\nof many i.i.d.random near-identity maps.\n11\nThe following theorem can be viewed as a compositional versi on of the law of large numbers.\nThe“compositional mean” is definedwith thehelp of the follo wing ordinarydifferential equation\n(ODE) system:\nz(x,0) =Vx,\nd\ndtz(x,t) =E(U,W)∼ρUσ(Wz(x,t)). (13)\nTheorem 7. Assume that σis Lipschitz continuous and\nEρ/ba∇dbl|U||W|/ba∇dbl2\nF<∞. (14)\nThen, the ODE (13)has a unique solution. For any x∈X, we have\nzL,L(x)→z(x,1)\nin probability as L→+∞. Moreover, we have\nlim\nL→∞sup\nx∈XE/ba∇dblzL,L(x)−z(x,1)/ba∇dbl2= 0,\ni.e. the convergence is uniform with respect to x∈X.\nThis result can be extended to situations when the distribut ionρis time-dependent, which\nis the right setting in applications.\nTheorem 8. Let{ρt, t∈[0,1]}be a family of probability distributions on RD×m×Rm×Dwith\nthe property that there exist constants c1andc2such that\nEρt/ba∇dbl|U||W|/ba∇dbl2\nF< c1\nand\n|EρtUσ(Wz)−EρsUσ(Wz)|≤c2|t−s||z|\nfor alls,t∈[0,1]. Letzbe the solution of the following ODE,\nz(x,0) =Vx,\nd\ndtz(x,t) =E(U,W)∼ρtUσ(Wz(x,t)).\nThen, for any fixed x∈X, we have\nzL,L(x)→z(x,1)\nin probability as L→+∞. Moreover, the convergence is uniform in x.\nSimilar results have been proved in the context of stochasti c approximations, for example\nin [17,7].\n12\n3.2 The flow-induced function spaces\nMotivated by the previous results, we consider the set of fun ctionsfα,{ρt}defined by:\nz(x,0) =Vx,\n˙z(x,t) =E(U,W)∼ρtUσ(Wz(x,t)),\nfα,{ρt}(x) =αTz(x,1), (15)\nwhereV∈RD×dis given in ( 12),U∈RD×m,W∈Rm×D, andα∈RD. To define a norm for\nthese functions, we consider the following linear ODEs ( p≥1)\nNp(0) = e,\n˙Np(t) = (Eρt(|U||W|)p)1/pNp(t), (16)\nwhereeis the all-one vector in RD. Note that in ( 16),|A|and|A|qare defined element-wise\nfor matrix A, and the multiplication of |U|and|W|is the regular matrix multiplication. This\nlinear system of equations has a unique solution as long as th e expected value is integrable as a\nfunction of t. Iffadmits a representation as in ( 15), we can define the Dpnorm of f.\nDefinition 2. Letfbe a function that satisfies f=fα,{ρt}for a pair of ( α,{ρt}), then we\ndefine\n/ba∇dblf/ba∇dblDp(α,{ρt})=|α|TNp(1),\nto be theDpnorm of fwith respect to the pair ( α,{ρt}). We define\n/ba∇dblf/ba∇dblDp= inf\nf=fα,{ρt}|α|TNp(1). (17)\nto be theDpnorm of f.\nAs an example, if ρis constant in t, then theDpnorm becomes\n/ba∇dblf/ba∇dblDp= inf\nf=fα,ρ|α|Te(Eρ(|U||W|)p)1/pe.\nGiven this definition, the flow-induced function spaces on Xare defined as the set of continuous\nfunctions that can be represented as fα,{ρt}in (15) with finiteDpnorm. 
Here we assume that\nfor anyt∈[0,1],ρtis a probability distribution defined on (Ω ,ΣΩ), Ω =RD×m×Rm×D, ΣΩis\nthe Borel σ-algebra on Ω. We use Dpto denote these function spaces. It is easy to see Dp⊂Dq\nforp≥q.\nNote that in the definitions above, the only condition on {ρt}is the existence and uniqueness\nofzdefined by ( 15). Hence,{ρt}can be discontinuous as a function t. However, the composi-\ntional law of large numbers, which is the underlying reason b ehind the approximation theorem\nthat we will discuss next (Theorem 8), requires{ρt}to satisfy some continuity condition. To\nthat end, we define the following “Lipschitz coefficient” and “ Lipschitz norm” for {ρt}\nDefinition 3. Given a family of probability distribution {ρt, t∈[0,1]}, the “Lipschitz coeffi-\ncient” of{ρt}, which is denoted by Lip{ρt}, is defined as the infimum of all the number Lthat\nsatisfies\n|EρtUσ(Wz)−EρsUσ(Wz)|≤Lip{ρt}|t−s||z|,\n13\nand /vextendsingle/vextendsingle/vextendsingle/ba∇dblEρt|U||W|/ba∇dbl1,1−/ba∇dblEρs|U||W|/ba∇dbl1,1/vextendsingle/vextendsingle/vextendsingle≤Lip{ρt}|t−s|,\nfor anyt,s∈[0,1], where/ba∇dbl·/ba∇dbl1,1is the sum of the absolute values of all the entries in a matrix .\nThe “Lipschitz norm” of {ρt}is defined as\n/ba∇dbl{ρt}/ba∇dblLip=/ba∇dblEρ0|U||W|/ba∇dbl1,1+Lip{ρt}.\nWith the Lipschitz norm of {ρt}defined above, we can introduce another class of function\nspaces˜Dp, which independently controls Np(1) and/ba∇dbl{ρt}/ba∇dblLip.\nDefinition 4. Letfbe a function that satisfies f=fα,{ρt}for a pair of ( α,{ρt}), then we\ndefine\n/ba∇dblf/ba∇dbl˜Dp(α,{ρt})=|α|TNp(1)+/ba∇dblNp(1)/ba∇dbl1−D+/ba∇dbl{ρt}/ba∇dblLip,\nto be the ˜Dpnorm of fwith respect to the pair ( α,{ρt}). We define\n/ba∇dblf/ba∇dbl˜Dp= inf\nf=fα,{ρt}/ba∇dblf/ba∇dbl˜Dp(α,{ρt}).\nto be the ˜Dpnorm of f. The space ˜Dpis defined as the set of all the continuous functions that\nadmit the representation fα,{ρt}in (15) with finite ˜Dpnorm.\nRemark 4. We add a “−D” term in the definition of ˜Dpnorm because/ba∇dblNp(1)/ba∇dbl1≥Dand\nwe want the norm of the zero function to be 0. As was stressed earlier, we use the terminology\n“norm” loosely, and we do not care whether these are really no rms. Strictly speaking, they are\njust some quantities that can be used to bound approximation /estimation errors.\nNext, for residual networks ( 11), we define a parameter-based norm as a discrete analog\nof (17). This is similar to the l1path norm of the residual network, which is studied in [ 20,10]\nDefinition 5. For a residual network defined by ( 11) with parameters Θ = {α,Ul,Wl,l=\n0,1,···,L−1}, we define the l1path norm of Θ to be\n/ba∇dblΘ/ba∇dblP=|α|TL/productdisplay\nl=1/parenleftbigg\nI+1\nL|Ul||Wl|/parenrightbigg\ne.\nWe can also define the analog of the p-norms for p >1 for residual networks. But in this paper\nwe will only use the l1norm defined above.\nIt is easy to see that ˜Dp⊂Dp, and for any f∈˜Dpwe have/ba∇dblf/ba∇dblDp≤/ba∇dblf/ba∇dbl˜Dp. Moreover, for\nany 1≤q≤p, iff∈˜Dp, then we have f∈˜Dqand/ba∇dblf/ba∇dbl˜Dq≤/ba∇dblf/ba∇dbl˜Dp. The next proposition states\nthat Barron space is embedded in ˜D1.\nProposition 9. Assume that D≥d+2andm≥1. 
For any function f∈B, we have f∈˜D1,\nand\n/ba∇dblf/ba∇dbl˜D1≤2/ba∇dblf/ba∇dblB+1.\nMoreover, for any ε >0, there exists (α,{ρt})such that ρtis fixed for all t,f=fα,{ρt}, and\n/ba∇dblf/ba∇dbl˜D1(α,{ρt})≤2/ba∇dblf/ba∇dblB+1+ε.\n14\nSimilar to the results of Proposition 9, we can prove that the composition of two Barron\nfunctions belongs to the flow-induced function space, and th e norm is bounded by a polynomial\nof the norms of the two Barron functions.\nProposition 10. Assume that D≥d+ 3andm≥1. Assume that g: [0,1]d→[0,1]∈B,\nh: [0,1]→R1∈B1. Letf=h◦gbe the composition of gandh. Then we have f∈D1and\n/ba∇dblf/ba∇dblD1≤(/ba∇dblh/ba∇dblB+1)(/ba∇dblg/ba∇dblB+1).\nIn [13], the authors constructed a sequence of functions {fd:Rd→R}whose spectral\nnorms (4) grow exponentially with respect to d. They also showed that these functions can be\nexpressed as the composition of two functions (one from RdtoRand the other from RtoR)\nwhose spectral norms depend only polynomially on the dimens iond. By Proposition 10, the\nD1norms of fdare bounded by a polynomial of d. This shows that in the high dimensions,\nthe flow-induced norm can be significantly smaller than the sp ectral norm. Combined with the\ndirect approximation theorem below, this implies that resi dual networks can better approximate\nsome functions than two-layer networks.\n3.3 Direct and inverse approximation theorems\nWe first prove the direct approximation theorem, which state s that functions in ˜D2can be\napproximated by a sequence of residual networks with a 1 /L1−δerror rate for any δ∈(0,1),\nand the networks have uniformly bounded path norm.\nTheorem 11. Letf∈˜D2,δ∈(0,1). Then, there exists an absolute constant C, such that for\nany\nL≥C/parenleftBig\nm4D6/ba∇dblf/ba∇dbl5\n˜D2(/ba∇dblf/ba∇dbl˜D2+D)2/parenrightBig3\nδ,\nthere is an L-layer residual network fL(·;Θ)that satisfies\n/ba∇dblf−fL(·;Θ)/ba∇dbl2≤/ba∇dblf/ba∇dbl2\n˜D2\nL1−δ,\nand\n/ba∇dblΘ/ba∇dblP≤9/ba∇dblf/ba∇dbl˜D1.\nWe can also prove an inverse approximation theorem, which st ates that any function that\ncan be approximated by a sequence of well-behaved residual n etworks has to belong to the\nflow-induced space.\nTheorem 12. Letfbe a function defined on X. Assume that there is a sequence of residual\nnetworks{fL(·;ΘL)}∞\nL=1such that fL(x;Θ)→f(x)for every x∈XasL→∞. Assume further\nthat the parameters in {fL(·;Θ)}∞\nL=1are (entry-wise) bounded by c0. Then, we have f∈D∞,\nand\n/ba∇dblf/ba∇dblD∞≤2em(c2\n0+1)D2c0\nm\nMoreover, if for some constant c1,/ba∇dblfL/ba∇dblD1≤c1holds for all L >0, then we have\n/ba∇dblf/ba∇dblD1≤c1\n15\n3.4 Bounds for the Rademacher complexity\nOur final result is an upper bound for the Rademacher complexi ty involving the flow-induced\nnorm. Due to technical difficulties, in this part we consider a family of modified flow-induced\nfunction norms/ba∇dbl·/ba∇dblˆDp, which is defined as\n/ba∇dblf/ba∇dblˆDp= inf\nf=fα,{ρt}|α|TˆNp(1)+/ba∇dblˆNp(1)/ba∇dbl1−D+/ba∇dbl{ρt}/ba∇dblLip,\nwhereˆNp(t) is given by\nˆNp(0) = 2e,\n˙ˆNp(t) = 2(Eρt(|U||W|)p)1/pˆNp(t).\nDenote by ˆDpthe space of functions with finite ˆDpnorm. Then, we have\nTheorem 13. LetˆDQ\np={f∈ˆDp:/ba∇dblf/ba∇dblˆDp≤Q}, then we have\nRadn(ˆDQ\n2)≤18Q/radicalbigg\n2log(2d)\nn.\nThe difference between the definitions of the spaces ˆDpandDplies in the factor 2 that\nappears in ˆNp. At this stage, we are not able to remove this factor. It shoul d be noted that this\nfactor of 2 is also present in the “weighted path norm” introd uced in [10]. 
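To see concretely where this factor of 2 enters, the following small sketch (the weights are arbitrary random matrices chosen only for illustration) evaluates, for one and the same residual network, the $\ell_1$ path norm $\|\Theta\|_P$ of Definition 5 and the weighted variant $|\alpha|^T\prod_{l=1}^{L}(I+\frac{2}{L}|U_l||W_l|)e$ that appears in the proof of Theorem 13; since all entries involved are nonnegative, the weighted quantity is never smaller.

import numpy as np

rng = np.random.default_rng(1)
D, m, L = 4, 3, 50
alpha = rng.standard_normal(D)
layers = [(rng.standard_normal((D, m)) / m, rng.standard_normal((m, D)) / m) for _ in range(L)]

def path_norm(c):
    # accumulates |alpha|^T prod_{l=1}^{L} (I + (c/L)|U_l||W_l|) e onto the all-one vector,
    # applying the factors right-to-left; c = 1 gives Definition 5, c = 2 the weighted version
    v = np.ones(D)
    for U, W in reversed(layers):
        v = v + (c / L) * (np.abs(U) @ np.abs(W) @ v)
    return float(np.abs(alpha) @ v)

print(path_norm(1.0), path_norm(2.0))   # ||Theta||_P and its weighted counterpart from Section 3.5.6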
IfU,WandˆNp(t) are\nscalars, then ˆNp(t) can be upper bounded by ( Np(t))2. However, in the vectorial case this bound\ndoes not always hold. Hence, it is unclear how the two spaces ˆDpandDpare related. Clearly\nwe can also develop an approximation theory for the space ˆDp, but we feel it is worthwhile to\nshow that the space ˜Dpis sufficient for that purpose. We also point out here at this st age, we\nallow to use different quantities (norms) to control the appro ximation and estimation errors.\n3.5 Proofs\n3.5.1 Proof of Theorem 7\nTo prove convergence, let tl,L=l/L, and consider el,L(x) =√\nL(zl,L(x)−z(x,tl,L)). We will\nfocus on fixed xand from now on we omit the dependence on xin the notations and write\ninsteadel,L,zl,Landz(t), for example. From the definition of z(t), we have\nz(tl+1,L) =z(tl,L)+/integraldisplaytl+1,L\ntl,LEUσ(Wz(t))dt\n=z(tl,L)+1\nLUlσ(Wlz(tl,L))+1\nL(Ulσ(Wlz(tl,L))−EUσ(Wz(tl,L)))\n+/parenleftBigg\n1\nLEUσ(Wz(tl,L))−/integraldisplaytl+1,L\ntl,LEUσ(Wz(t))dt/parenrightBigg\n. (18)\nSince\nzl+1,L=zl,L+1\nLUlσ(Wlzl,L), (19)\n16\nsubtract ( 18) from (19) gives\nel+1,L=el,L+1√\nL(Ulσ(Wlzl,L)−Ulσ(Wlz(tl,L)))\n+1√\nL(Ulσ(Wlz(tl,L))−EUσ(Wz(tl,L)))\n+1√\nL/parenleftBigg\nEUσ(Wz(tl,L))−L/integraldisplaytl+1,L\ntl,LEUσ(Wz(t))dt/parenrightBigg\n.\nDefine\nIl,L=1√\nL(Ulσ(Wlzl,L)−Ulσ(Wlz(tl,L))),\nJl,L=1√\nL(Ulσ(Wlz(tl,L))−EUσ(Wz(tl,L))),\nKl,L=1√\nL/parenleftBigg\nEUσ(Wz(tl,L))−L/integraldisplaytl+1,L\ntl,LEUσ(Wz(t))dt/parenrightBigg\n.\nThen, we have\nel+1,L=el,L+Il,L+Jl,L+Kl,L. (20)\nNext, we consider /ba∇dblel,L/ba∇dbl2. From (20), we get\n/ba∇dblel+1,L/ba∇dbl2=/ba∇dblel,L/ba∇dbl2+/ba∇dblIl,L/ba∇dbl2+/ba∇dblJl,L/ba∇dbl2+/ba∇dblKl,L/ba∇dbl2\n+2eT\nl,LIl,L+2eT\nl,LJl,L+2eT\nl,LKl,L\n+2IT\nl,LJl,L+2IT\nl,LKl,L+2JT\nl,LKl,L\n≤ /ba∇dblel,L/ba∇dbl2+3/ba∇dblIl,L/ba∇dbl2+3/ba∇dblJl,L/ba∇dbl2+3/ba∇dblKl,L/ba∇dbl2\n+2eT\nl,LIl,L+2eT\nl,LJl,L+2eT\nl,LKl,L. (21)\nWe are going to estimate the expectation of the right hand sid e of (21) term by term. First,\nnote that E/ba∇dbl|U||W|/ba∇dbl2\nF<∞, which means z(t) is bounded for t∈[0,1]. Hence, we can find a\nconstant C >0 that satisfies\nE/ba∇dbl|U||W|/ba∇dblF≤C,E/ba∇dbl|U||W|/ba∇dbl2\nF≤C,\nand/ba∇dblz(t)/ba∇dbl ≤C. In addition, note that for any l,UlandWlare independent with el,L.\nTherefore, for/ba∇dblIl,L/ba∇dbl2, we have\nE/ba∇dblIl,L/ba∇dbl2=1\nLE/ba∇dblUσ(Wzl,L)−Uσ(Wz(tl,L))/ba∇dbl2\n≤1\nL2E/ba∇dbl|Ul||Wl||el,L|/ba∇dbl2\n≤C\nL2E/ba∇dblel,L/ba∇dbl2.\nFor the term/ba∇dblJl,L/ba∇dbl2, we have\nE/ba∇dblJl,L/ba∇dbl2≤1\nLE/ba∇dbl|Ul||Wl||z(tl,L)|/ba∇dbl2≤C2\nL.\n17\nFor the term/ba∇dblKl,L/ba∇dbl, sinceE/ba∇dbl|U||W|/ba∇dblF≤Cand/ba∇dblz/ba∇dbl≤C, we know that the Lipschitz constant\nofz(t) is bounded by C2. Hence, we have\n/ba∇dblKl,L/ba∇dbl ≤√\nL/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/integraldisplaytl+1,L\ntl,LE(Uσ(Wz(tl,L))−Uσ(Wz(t)))dt/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\n≤C2\n√\nL/integraldisplaytl+1,L\ntl,L(t−tl,L)E/ba∇dbl|U||W|/ba∇dbldt\n≤C3\nL√\nL,\nwhich implies that\nE/ba∇dblKl,L/ba∇dbl2≤C6\nL3.\nNext, we consider eT\nl,LIl,L. 
We easily have\nEeT\nl,LIl,L≤1\nLE/ba∇dbl|U||W|/ba∇dblF/ba∇dblel,L/ba∇dbl2≤C\nLE/ba∇dblel,L/ba∇dbl2.\nForeT\nl,LJl,L, by the independence of Ul,Wlandel,L, we have\nEeT\nl,LJl,L= 0.\nFinally, for eT\nl,LKl,L, we have\nEeT\nl,LKl,L≤C3\nL√\nL/radicalBig\nE/ba∇dblel,L/ba∇dbl2≤C3\nL√\nL(E/ba∇dblel,L/ba∇dbl2+1).\nPlugging all the estimates above into ( 21), we obtain\nE/ba∇dblel+1,L/ba∇dbl2≤/parenleftbigg\n1+2C\nL+3C\nL2+2C3\nL√\nL/parenrightbigg\nE/ba∇dblel,L/ba∇dbl2+3C2\nL+3C6\nL3+2C3\nL√\nL.\nHence there is an L0depending only on C, such that if L > L0, we have\nE/ba∇dblel+1,L/ba∇dbl2≤/parenleftbigg\n1+3C\nL/parenrightbigg\nE/ba∇dblel,L/ba∇dbl2+4C2\nL.\nSincee0,L= 0, by induction we obtain\nE/ba∇dbleL,L/ba∇dbl2≤4C2e3C,\nwhich means\nE/ba∇dblzL,L−z(1)/ba∇dbl2≤4C2e3C\nL→0,\nwhenL→∞. This implies that zL,L→z(1) in probability.\n18\n3.5.2 Proof of Theorem 8\nThe only modification required for the proof of Theorem 8is in the estimate of Kl,L. NowKl,L\nbecomes\nKl,L=1√\nL/parenleftBigg\nEρtl,LUσ(Wz(tl,L))−L/integraldisplaytl+1,L\ntl,LEρtUσ(Wz(t))dt/parenrightBigg\n.\nThe conditions of the theorem still guarantee that z(t) is Lipschtiz continuous. Hence, we can\nfind a constant C′such that z(t) isC′-Lipschitz and\nEρt/ba∇dbl|U||W|/ba∇dbl≤C′,\nfor anyt∈[0,1]. Hence,\n/ba∇dblKl,L/ba∇dbl ≤√\nL/integraldisplaytl+1,L\ntl,L/vextenddouble/vextenddouble/vextenddoubleEρtl,LUσ(Wz(tl,L))−EρtUσ(Wz(t))/vextenddouble/vextenddouble/vextenddoubledt\n≤√\nL/integraldisplaytl+1,L\ntl,L/vextenddouble/vextenddouble/vextenddoubleEρtl,LUσ(Wz(tl,L))−EρtUσ(Wz(tl,L))/vextenddouble/vextenddouble/vextenddoubledt\n+√\nL/integraldisplaytl+1,L\ntl,L/ba∇dblEρtUσ(Wz(tl,L))−EρtUσ(Wz(t))/ba∇dbldt\n≤c2C′\nL√\nL+C′2\nL√\nL. (22)\nFrom (22) we know that in this case Kl,Lis of the same order as that in Theorem 7. We can\nthen complete the proof following the same arguments as in th e proof of Theorem 7.\n3.5.3 Proof of Proposition 9\nSincef∈B, for any ε >0, there exists a distribution ρεthat satisfies\nf(x) =/integraldisplay\nΩaσ(bTx+c)ρε(da,db,dc)\nEρε[|a|(/ba∇dblb/ba∇dbl1+|c|)]≤/ba∇dblf/ba∇dblB+ε.\nDefineˆfby\nz(x,0) =\nx\n1\n0\n\nd\ndtz(x,t) =E(a,b,c)∼ρε\n0\n0\na\nσ([bT,c,0]z(x,t)) (23)\nˆf(x) =eT\nd+2z(x,1)\nThen, we can easily verify that ˆf=f. Usingρε, we can define probability distribution ˜ ρεon\nRD×m×Rm×D: ˜ρεis concentrated on matrices of the form that appears in ( 23). Consider\n19\n/ba∇dblf/ba∇dbl˜D1(ed+2,{˜ρε}), we have\n/ba∇dblf/ba∇dbl˜D1(ed+2,{˜ρε})=eT\nd+2exp\nEρ\n0 0 0\n0 0 0\n|abT| |ac|0\n\ne+/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoubleexp\nEρ\n0 0 0\n0 0 0\n|abT| |ac|0\n\n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\n1−D\n=eT\nd+2\nI 0 0\n0 1 0\nEρ|abT|Eρ|ac|1\ne+/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\nI 0 0\n0 1 0\nEρ|abT|Eρ|ac|1\n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\n1−D\n= 2Eρε|a|(/ba∇dblb/ba∇dbl1+|c|)+1\n≤2/ba∇dblf/ba∇dblB+2ε+1.\nTherefore, we have\n/ba∇dblf/ba∇dbl˜D1≤/ba∇dblf/ba∇dbl˜D1(α,˜ρε)≤2/ba∇dblf/ba∇dblB+2ε+1.\nTakingε→0, we get\n/ba∇dblf/ba∇dbl˜D1≤2/ba∇dblf/ba∇dblB+1.\nBesides, since{˜ρε}gives the same probability distribution for all t∈[0,1], we have Lip{˜ρε}=\n0.\n3.5.4 Proof of Theorem 11\nFor anyε >0, let\nσε(x) =/integraldisplay\nR1√\n2πε2e−(x−y)2\n2ε2σ(y)dy.\nThen we have\n|σε(x)−σ(x)|< ε,|(σε(x))′|≤1,|(σε(x))′′|≤1\nε,\nfor allx∈R. 
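For the ReLU activation used here, this Gaussian mollification has the closed form $\sigma_\varepsilon(x) = x\,\Phi(x/\varepsilon) + \varepsilon\,\varphi(x/\varepsilon)$, where $\Phi$ and $\varphi$ are the standard normal distribution function and density (this is the standard computation of $\mathbb{E}[(x+\varepsilon Z)^+]$ for $Z\sim N(0,1)$; it is stated here only as an illustrative aside and is not needed in the argument). A short numerical check of the three bounds above:

import numpy as np
from math import erf, exp, pi, sqrt

def sigma_eps(x, eps):
    # Gaussian mollification of ReLU: E[max(0, x + eps*Z)], Z ~ N(0,1)
    Phi = 0.5 * (1.0 + erf(x / (eps * sqrt(2.0))))
    phi = exp(-0.5 * (x / eps) ** 2) / sqrt(2.0 * pi)
    return x * Phi + eps * phi

eps = 0.1
xs = np.linspace(-3.0, 3.0, 2001)
vals = np.array([sigma_eps(x, eps) for x in xs])
relu = np.maximum(xs, 0.0)

print(np.max(np.abs(vals - relu)))                              # < eps (equals eps/sqrt(2*pi) at x = 0)
print(np.max(np.abs(np.gradient(vals, xs))))                    # <= 1, since sigma_eps' = Phi(x/eps)
print(np.max(np.abs(np.gradient(np.gradient(vals, xs), xs))))   # <= 1/eps, since sigma_eps'' = phi(x/eps)/eps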
For a function f∈˜D2, we are going to show that for sufficiently large Lthere\nexists an L-layer residual network fLsuch that\n/ba∇dblf−fL/ba∇dbl2≤/ba∇dblf/ba∇dbl2\n˜D2\nL1−δ,\n.\nTo do this, assume that αand{ρt}satisfyf=fα,{ρt}and/ba∇dblf/ba∇dbl˜D2(α,{ρt})≤2/ba∇dblf/ba∇dbl˜D1. LetfL\nbe a residual network in the form ( 11), and the weights Ul,Wlare sampled from ρl/L. Letfε\nandfε\nLbe generated in the same way as fandfLusing instead the activation function σε. Then\nwe have\n/ba∇dblf−fL/ba∇dbl2≤3/parenleftbig\n/ba∇dblf−fε/ba∇dbl2+/ba∇dblfε−fε\nL/ba∇dbl2+/ba∇dblfε\nL−fL/ba∇dbl2/parenrightbig\n. (24)\nBefore dealing with ( 24), we first prove the following lemma, which shows that we can p ick\nthe family of distributions ˜ ρtto have compact support.\nLemma 1. For any f∈˜D1that satisfies the conditions of Theorem 11, and any ε >0, there\nexistsαand{ρt}, such that f=fα,{ρt}and/ba∇dblf/ba∇dbl˜D2(α,{ρt})≤(1+ε)/ba∇dblf/ba∇dbl˜D2. Moreover, for any\nt∈[0,1], we have\nmax\n(U,W)∼ρt(/ba∇dbl|U||W|/ba∇dbl1)≤(1+ε)/ba∇dblf/ba∇dbl˜D2.\n20\nProof of Lemma 1\nTheproofof Lemma 1is similar totheproofof Proposition 1. By thedefinition of ˜D2, for any\nf∈˜D2andε >0, there exists αand{ρt}such that f=fα,{ρt},/ba∇dblf/ba∇dbl˜D2(α,{ρt})≤(1+ε)/ba∇dblf/ba∇dbl˜D2,\nand hence/ba∇dbl{ρt}/ba∇dblLip≤(1+ε)/ba∇dblf/ba∇dbl˜D2. This means that for any t∈[0,1], we have\n/ba∇dblEρt|U||W|/ba∇dbl1≤(1+ε)/ba∇dblf/ba∇dbl˜D2.\nLet Λ ={(U,W) :/ba∇dblW/ba∇dbl1= 1,/ba∇dbl|U||W|/ba∇dbl1= 1}, and consider a family of measures {ρΛ\nt}\ndefined by\nρΛ\nt(A) =/integraldisplay\n(U,W): (¯U,¯W)∈Λ/ba∇dbl|U||W|/ba∇dbl1ρt(dU,dW),\nfor any Borel set A⊂Λ, where\n¯U=/ba∇dblW/ba∇dbl1\n/ba∇dbl|U||W|/ba∇dbl1U,¯W=W\n/ba∇dblW/ba∇dbl1.\nIt is easy to verify that ρΛ\nt(Λ) =Eρt/ba∇dbl|U||W|/ba∇dbl1and\nEρΛ\nt¯Uσ(¯Wz) =EρtUσ(Wz)\nhold for any t∈[0,1] andz∈RD. After normalizing {ρΛ\nt}, we obtain a family of probability\ndistributions{˜ρΛ\nt}on\n{(U,W) :/ba∇dblW/ba∇dbl1= 1,/ba∇dbl|U||W|/ba∇dbl=Eρt/ba∇dbl|U||W|/ba∇dbl1}.\nFinally, it is easy to verify that f=fα,{˜ρΛ\nt},/ba∇dblf/ba∇dbl˜D2(α,{ρt})≤(1+ε)/ba∇dblf/ba∇dbl˜D2, as well as\nmax\n(U,W)∼˜ρΛ\nt(/ba∇dbl|U||W|/ba∇dbl1)≤(1+ε)/ba∇dblf/ba∇dbl˜D2.\nFrom Lemma 1, without loss of generality we can assume that ρthas compact support, and\nthe entries of ( U,W) sampled from ρtfor anytis bounded by 2/ba∇dblf/ba∇dbl˜D2. Next we proceed to\ncontrol thethreeterms on theright handsideof ( 24). Thefollowing two lemmas give thebounds\nfor the first and third terms.\nLemma 2./ba∇dblf−fε/ba∇dbl2≤4m2ε2/ba∇dblf/ba∇dbl4\n˜D2.\nProof of Lemma 2\nLetz(t) be defined by ( 15) for fixed x, andzε(t) be the solution of the same ODE after\nreplacing σbyσε. Then, we have z(0)−zε(0) = 0, and\n|z(t)−zε(t)| ≤/integraldisplayt\n0/vextendsingle/vextendsingle/vextendsingle/vextendsingled\ndt(z(s)−zε(s))/vextendsingle/vextendsingle/vextendsingle/vextendsingleds\n=/integraldisplayt\n0|EρsUσ(Wz(s))−EρsUσε(Wzε(s))|ds\n≤/integraldisplayt\n0|EρsUσ(Wz(s))−EρsUσ(Wzε(s))|ds\n+/integraldisplayt\n0|EρsUσ(Wzε(s))−EρsUσε(Wzε(s))|ds\n≤/integraldisplayt\n0/parenleftBig\nEρs|U||W||z(s)−zε(s)|+2/ba∇dblf/ba∇dbl˜D2mε/parenrightBig\nds.\n21\nHence, we have\n|z(1)−zε(1)|≤2/ba∇dblf/ba∇dbl˜D2mεN1(1)e,\nwhereeis an all-one vector. 
This gives that\n/ba∇dblf−fε/ba∇dbl2≤/integraldisplay\nD0/parenleftbig\n|α|T|z(x,1)−zε(x,1)|/parenrightbig2dρ(x)≤4m2ε2/ba∇dblf/ba∇dbl4\n˜D2.\nLemma 3.\nE/ba∇dblfL−fε\nL/ba∇dbl2≤4m2ε2/ba∇dblf/ba∇dbl4\n˜D2,\nwhere the expectation is taken over the random choice of weig hts{(Ul,Wl)}.\nProof of Lemma 3\nLetzl,Lbe defined by ( 11) for a fixed x, andzε\nl,Lbe defined similarly with σreplaced by σε.\nThen, we have z0,L−zε\n0,L= 0, and\nzl+1,L−zε\nl+1,L=zl,L−zε\nl,L+1\nL/bracketleftbig\nUlσ(Wlzl,L)−Ulσ(Wlzε\nl,L)/bracketrightbig\n+1\nL/bracketleftbig\nUlσ(Wlzε\nl,L)−Ulσε(Wlzε\nl,L)/bracketrightbig\n.\nTaking absolute value gives\n|zl+1,L−zε\nl+1,L|≤/parenleftbigg\nI+1\nL|U||W|/parenrightbigg\n|zl,L−zε\nl,L|+2/ba∇dblf/ba∇dbl˜D2mε\nLe,\nwhich implies that\n|zL,L−zε\nL,L|≤2/ba∇dblf/ba∇dbl˜D2mεL−1/productdisplay\nl=0/parenleftbigg\nI+1\nL|U||W|/parenrightbigg\ne.\nBy Theorem 8, we have\nE|fL(x)−fε\nL(x)|2≤4/ba∇dblf/ba∇dbl2\n˜D2m2ε2E/parenleftBigg\n|α|TL−1/productdisplay\nl=0/parenleftbigg\nI+1\nL|U||W|/parenrightbigg\ne/parenrightBigg2\n≤4m2ε2/ba∇dblf/ba∇dbl4\n˜D2. (25)\nIntegrating ( 25) overxgives the results.\nProof of Theorem 11(Continued)\nWith Lemmas 2and3, we have\nE/ba∇dblf−fL/ba∇dbl2≤24m2ε2/ba∇dblf/ba∇dbl4\n˜D2+3E/ba∇dblfε−fε\nL/ba∇dbl2. (26)\nTo bound E/ba∇dblfε−fε\nL/ba∇dbl2, letel,L=√\nL(zε\nl,L−zε\ntl,L), and recall that we can write\nel+1,L=el,L+Il,L+Jl,L+Kl,L, (27)\n22\nwith\nIl,L=1√\nL/bracketleftbig\nUlσε(Wlzε\nl,L)−Ulσε(Wlzε(tl,L))/bracketrightbig\n,\nJl,L=1√\nL/bracketleftBig\nUlσε(Wlzε(tl,L))−Eρtl,LUσε(Wzε(tl,L))/bracketrightBig\nKl,L=1√\nL/bracketleftBigg\nEρtl,LUσε(Wzε(tl,L))−L/integraldisplaytl+1,L\ntl,LEρtUσε(Wzε(t))dt/bracketrightBigg\n.\nForIl,L, by the Taylor expansion of Ulσε(Wlzε\nl,L) atzε(tl,L), we get\nIl,L=1\nLUl(σε(Wlzε(tl,L)))′Wlel,L+Ul(σε(Wlzε(tl,L)))′′(Wlel,L)◦(Wlel,L)\nL√\nL,(28)\nwhere for two vectors αandβ,α◦βmeans element-wise product. For the second term on the\nright hand side of ( 28), we have\n/vextendsingle/vextendsingleUl(σε(Wlzε(tl,L)))′′(Wlel,L)◦(Wlel,L)/vextendsingle/vextendsingle≤8/ba∇dblf/ba∇dbl3\n˜D2mD/ba∇dblel,L/ba∇dbl2\nεe.\nOn the other hand, for Kl,Lwe have\n|Kl,L|≤C2\nL√\nLe,\nfor some constant C2. Hence, we can write ( 27) as\nel+1,L=el,L+1\nLUl(σε(Wlzε(tl,L)))′Wlel,L+Jl,L+rl,L\nL√\nL, (29)\nwith\n|rl,L|≤(8/ba∇dblf/ba∇dbl3\n˜D2mDε−1/ba∇dblel,L/ba∇dbl2+C2)e.\nNext, we consider el,LeT\nl,L. 
By (29), we have\nel+1,LeT\nl+1,L=el,LeT\nl,L+1\nL/parenleftbig\nUl(σε(Wlzε(tl,L)))′Wlel,LeT\nl,L+el,LeT\nl,LUl(σε(Wlzε(tl,L)))′Wl/parenrightbig\n+Jl,LJT\nl,L+1\nL2Ul(σε(Wlzε(tl,L)))′Wlel,L(Ul(σε(Wlzε(tl,L)))′Wlel,L)T\n+el,LJT\nl,L+Jl,LeT\nl,L+1\nL√\nL/parenleftbig\nel,LrT\nl,L+rl,Lel,L/parenrightbig\n+1\nL3rl,LrT\nl,L\n+1\nLUl(σε(Wlzε(tl,L)))′Wlel,LJT\nl,L+1\nLJl,L(Ul(σε(Wlzε(tl,L)))′Wlel,L)T\n+1\nL2√\nLUl(σε(Wlzε(tl,L)))′Wlel,LrT\nl,L+1\nL2√\nLrl,L(Ul(σε(Wlzε(tl,L)))′Wlel,L)T\n+1\nL√\nL/parenleftbig\nJl,LrT\nl,L+rl,LJT\nl,L/parenrightbig\n.\nTaking expectation over the equation above, noting that Jl,Lis independent with el,L, and using\nthe bound of rl,Lwe derived above, we get\n|Eel+1,LeT\nl+1,L| ≤ |Eel,LeT\nl,L|+1\nL/parenleftbig\nAl,L|Eel,LeT\nl,L|+|Eel,LeT\nl,L|AT\nl,L/parenrightbig\n+1\nLΣl,L\n+C/ba∇dblf/ba∇dbl3\n˜D2\nL/parenleftbiggmDE/ba∇dblel,L/ba∇dbl3\n√\nLε+m2D2E/ba∇dblel,L/ba∇dbl4\nL2ε2/parenrightbigg\nE, (30)\n23\nwhere\nAl,L=Eρtl,L|U||W|,Σl,L=/vextendsingle/vextendsingle/vextendsingleCovρtl,LUlσε(Wlzε(tl,L))/vextendsingle/vextendsingle/vextendsingle,\nEis an all-one matrix and Cis a constant.\nNext, we boundthe third and fourth order moments of /ba∇dblel,L/ba∇dblusing its second order moment.\nThis is done by the following lemma.\nLemma 4. For any Land1≤l≤L, there exists a constant Csuch that\nE/ba∇dblel,L/ba∇dbl3≤CmD3/2/ba∇dblf/ba∇dbl˜D2/parenleftbigg/radicalbig\nlogL+D√\nLε/parenrightbigg\nE/ba∇dblel,L/ba∇dbl2,\nand\nE/ba∇dblel,L/ba∇dbl4≤C2m2D3/ba∇dblf/ba∇dbl2\n˜D2/parenleftbigg/radicalbig\nlogL+D√\nLε/parenrightbigg2\nE/ba∇dblel,L/ba∇dbl2.\nProof of Lemma 4\nLetSl,L=l−1/summationtext\nk=0Jk,L. Then,ESl,L= 0. Since Jl,Lare independent for different l, and\n|Jl,L|≤C′mD√\nLe\nholds for all land some constant C′, by Hoeffding’s inequality, for any t >0 and 1≤i≤Dwe\nhave\nP(|Sl,L,i|≥t)≤2exp(−t2\n2C′2m2D2).\nHere,Sl,L,idenotes the i-th entry of the vector Sl,L. Taking t= 2C′mD√logL, we obtain\nP/parenleftBig\n|Sl,L,i|≥2C′mD/radicalbig\nlogL/parenrightBig\n≤2\nL2.\nThis implies\nP/parenleftBig\n/ba∇dblSl,L/ba∇dbl≥2C′mD3/2/radicalbig\nlogL/parenrightBig\n= 1−P/parenleftBig\n/ba∇dblSl,L/ba∇dbl<2C′mD3/2/radicalbig\nlogL/parenrightBig\n≤1−P/parenleftBigg/uniondisplay\ni/braceleftBig\n|Sl,L,i|<2C′mD/radicalbig\nlogL/bracerightBig/parenrightBigg\n≤1−/parenleftbigg\n1−2\nL2/parenrightbiggD\n≤2D\nL2. 
(31)\nDefine the eventAby\nA=/braceleftBig\n/ba∇dblSl,L/ba∇dbl≤2C′mD3/2/radicalbig\nlogL, i= 1,2,···,L/bracerightBig\n.\nThen by ( 31) we have\nP(A)≥1−2D\nL.\n24\nUsing (29), we have\nel,L=l−1/summationdisplay\nk=01\nLUk(σε(Wkzε(tk,L)))′Wkek,L+Sl,L+l−1/summationdisplay\nk=0rk,L\nL√\nL.\nHence, using the bounds of Sl,Landrk,L, we obtain that there is a constant Csuch that\n/ba∇dblel,L/ba∇dbl≤CmD3/2/ba∇dblf/ba∇dbl˜D2/parenleftbigg√\nL+1√\nLε/parenrightbigg\n.\nOn the other hand, under event A, using the sharper bound of Sl,L, we have\n/ba∇dblel,L/ba∇dbl≤CmD3/2/ba∇dblf/ba∇dbl˜D2/parenleftbigg/radicalbig\nlogL+1√\nLε/parenrightbigg\n.\nFor third-order moment of /ba∇dblel,L/ba∇dbl, we have\nE/ba∇dblel,L/ba∇dbl3≤CmD3/2/ba∇dblf/ba∇dbl˜D2/parenleftbigg/parenleftbigg/radicalbig\nlogL+1√\nLε/parenrightbigg\nP(A)+/parenleftbigg√\nL+1√\nLε/parenrightbigg\nP(Ac)/parenrightbigg\nE/ba∇dblel,L/ba∇dbl2\n≤CmD3/2/ba∇dblf/ba∇dbl˜D2/parenleftbigg/radicalbig\nlogL+D√\nLε/parenrightbigg\nE/ba∇dblel,L/ba∇dbl2.\nSimilarly, for fourth order moment we have\nE/ba∇dblel,L/ba∇dbl4≤C2m2D3/ba∇dblf/ba∇dbl2\n˜D2/parenleftbigg/radicalbig\nlogL+D√\nLε/parenrightbigg2\nE/ba∇dblel,L/ba∇dbl2.\nProof of Theorem 11(Continued)\nApplying the results of Lemma 4to (30) gives\n|Eel+1,LeT\nl+1,L| ≤ |Eel,LeT\nl,L|+1\nL/parenleftbig\nAl,L|Eel,LeT\nl,L|+|Eel,LeT\nl,L|AT\nl,L/parenrightbig\n+1\nLΣl,L\n+C\nL/parenleftbigg\nm4D5/ba∇dblf/ba∇dbl5\n˜D2/parenleftbigg√logL√\nLε+D\nLε2/parenrightbigg\nE/ba∇dblel,L/ba∇dbl2/parenrightbigg\nE. (32)\nSince/ba∇dblf/ba∇dbl˜D2<∞, Σl,Lis uniformly bounded. Without loss of generality, we can ass ume\nΣl,L≤CE. Furthermore, assume Lis sufficiently large such that\nm4D6/ba∇dblf/ba∇dbl5\n˜D2E/ba∇dblel,L/ba∇dbl2\nLδ/3≤1. (33)\nThen, from ( 32) we have\n|Eel+1,LeT\nl+1,L| ≤ |Eel,LeT\nl,L|+1\nL/parenleftbig\nAl,L|Eel,LeT\nl,L|+|Eel,LeT\nl,L|AT\nl,L/parenrightbig\n+C\nL/parenleftbigg\n1+√logL\nL1/2−δ/3ε+D\nL1−δ/3ε2/parenrightbigg\nE,\nwhich implies that\n|Eel+1,LeT\nl+1,L|≤C/parenleftbigg\n1+√logL\nL1/2−δ/3ε+D\nL1−δ/3ε2/parenrightbigg\nN1(1)N1(1)T, (34)\n25\nand thus\nE/ba∇dblel,L/ba∇dbl2≤eT|Eel+1,LeT\nl+1,L|e≤C/parenleftbigg\n1+√logL\nL1/2−δ/3ε+D\nL1−δ/3ε2/parenrightbigg\n(eTN1(1))2.(35)\nNote that eTN1(1) =/ba∇dblN1(1)/ba∇dbl1≤/ba∇dblf/ba∇dbl˜D2+D. By (33) and (35), (34) happens if\nCm4D6/ba∇dblf/ba∇dbl5\n˜D2/parenleftbigg\n1+√logL\nL1/2−δ/3ε+D\nL1−δ/3ε2/parenrightbigg\n(/ba∇dblf/ba∇dbl˜D2+D)2≤Lδ/3.\nTakingε=L−1/2+δ/3, it suffices to have\nCm4D6/ba∇dblf/ba∇dbl5\n˜D2/parenleftBig\n1+D+/radicalbig\nlogL/parenrightBig\n(/ba∇dblf/ba∇dbl˜D2+D)2≤Lδ/3.\nIn this case, we have\nE/ba∇dblfε−fε\nL/ba∇dbl2≤C\nL/parenleftBig\n1+D+/radicalbig\nlogL/parenrightBig\n/ba∇dblf/ba∇dbl2\n˜D2.\nPlugging into ( 26) gives\nE/ba∇dblf−fL/ba∇dbl2≤24m2\nL1−2δ/3/ba∇dblf/ba∇dbl4\n˜D2+3C\nL/parenleftBig\n1+D+/radicalbig\nlogL/parenrightBig\n/ba∇dblf/ba∇dbl2\n˜D2.\nWhenLsufficiently large (larger than polynomial of m,D, logL), we have\nE/ba∇dblf−fL/ba∇dbl2≤/ba∇dblf/ba∇dbl2\n˜D2\n3L1−δ.\nNote that the bound above holds for any fixed x∈X. Now, integrating over x, we have\nE/ba∇dblf−fL/ba∇dbl2=/integraldisplay\nE|f(x)−fL(x)|2dµ(x)≤/ba∇dblf/ba∇dbl2\n˜D2\n3L1−δ.\nBy Markov’s inequality, with probability no less than2\n3, the distance between fandfLcan be\ncontrolled by\n/ba∇dblf−fL/ba∇dbl2≤/ba∇dblf/ba∇dbl2\n˜D2\nL1−δ. 
(36)\nNext, consider the path norm of fL, which is defined as\n/ba∇dblfL/ba∇dblP=/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble|α|L/productdisplay\nl=1/parenleftbigg\nI+1\nL|Ul||Wl|/parenrightbigg\n|V|/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble\n1.\nDefine a recurrent scheme,\ny0,L=V,\nyl+1,L=yl,L+1\nL|Ul||Wl|yl,L.\nUsing Theorem 8withσbeing the identity function and UandWreplaced by|U|and|W|\nrespectively, we know that /ba∇dbl|α|TyL,L/ba∇dbl1→/ba∇dblf/ba∇dblD1(ρt)almost surely. Hence, by taking ρtsuch\nthat/ba∇dblf/ba∇dblD1(ρt)≤2/ba∇dblf/ba∇dblD1, we have\nE/ba∇dblfL/ba∇dblP≤3/ba∇dblf/ba∇dblD1,\n26\nwhenLis sufficiently large. Again using Markov’s inequality, with probability no less than2\n3,\nwe have\nE/ba∇dblfL/ba∇dblP≤9/ba∇dblf/ba∇dblD1. (37)\nCombining the result above with ( 36), we know that with probability no less than1\n3, we\nhave both ( 36) and (37). Therefore, we can find an fLthat satisfies both ( 36) and (37). This\ncompletes the proof.\n3.5.5 Proof of Theorem 12\nFor anyL, letfL(·) be the residual network represented by the parameters αL,{UL\nl,WL\nl}L−1\nl=0\nandV. Letzl,L(x) be the function represented by the l-th layer of network fL, thenfL(x) =\nαT\nLzL,L(x) for allx∈X. SinceαLuniformly bounded for all L, there exists a subsequence Lk\nandαsuch that\nαLk→α,\nwhenk→∞. Without loss of generality, we assume αL→α.\nLetUL\nt: [0,1]→RD×mbe a piecewise constant function defined by\nUL\nt=UL\nl, for t∈[l\nL,l+1\nL),\nandUL\n1=UL\nL−1. Similarly we can define WL\nt. Then,{UL\nt}and{WL\nt}are uniformly bounded.\nHence, by the fundamental theorem for Young measures [ 23,3], there exists a subsequence {Lk}\nand a family of probability measure {ρt,t∈[0,1]}, such that for every Caratheodory function\nF,\nlim\nk→∞/integraldisplay1\n0F(ULk\nt,WLk\nt,t)dt=/integraldisplay1\n0EρtF(U,W,t)dt.\nLet˜f=fα,{ρt}. We are going to show ˜f=f. LetzY(·,t) be defined by zY(x,0) =Vxand\nzY(x,t) =zY(x,0)+/integraldisplayt\n0EρtUσ(WzY(x,s))ds.\nThen it suffices to show that\nlim\nk→∞zLk,Lk(x)→zY(x,1), (38)\nfor any fixed x∈D0.\nTo prove ( 38), we first consider the following continuous version of zl,L,\nzL(x,0) =z0,L(x),\nd\ndtzL(x,t) =UL\ntσ(WL\ntzL(x,t)),\nand show that|zL(x,1)−zL,L(x)|→0. To see this, note that\nzL(x,tl+1,L) =zL(x,tl,L)+/integraldisplaytl+1,L\ntl,LUL\ntσ(WL\ntzL(x,s))ds, (39)\nzl+1,L(x) =zl,L(x)+/integraldisplaytl+1,L\ntl,LUL\ntσ(WL\ntzl,L(x))ds. (40)\n27\nSubtracting ( 39) from (40), and let el,L=zl,L(x)−zL(x,tl,L), we have\nel+1,L=el,L+/integraldisplaytl+1,L\ntl,L/parenleftbig\nUL\ntσ(WL\ntzl,L(x))−UL\ntσ(WL\ntzL(x,s))/parenrightbig\nds\n=el,L+/integraldisplaytl+1,L\ntl,L/parenleftbig\nUL\ntσ(WL\ntzl,L(x))−UL\ntσ(WL\ntzL(x,tl,L))/parenrightbig\nds\n+/integraldisplaytl+1,L\ntl,L/parenleftbig\nUL\ntσ(WL\ntzL(x,tl,L))−UL\ntσ(WL\ntzL(x,s))/parenrightbig\nds. (41)\nSince{UL\nt}and{WL\nt}are bounded, we know that {zL(x,t)}is bounded, and{d\ndtzL(x,t)}is\nalso bounded. Hence, there exists a uniform constant Csuch that\n/vextenddouble/vextenddoubleUL\ntσ(WL\ntzl,L(x))−UL\ntσ(WL\ntzL(x,tl,L))/vextenddouble/vextenddouble≤C/ba∇dblel,L/ba∇dbl, (42)/vextenddouble/vextenddoubleUL\ntσ(WL\ntzL(x,tl,L))−UL\ntσ(WL\ntzL(x,s))/vextenddouble/vextenddouble≤C|s−tl,L|.. (43)\nPlugging ( 42) and (43) into (41), we obtain\n/ba∇dblel+1,L/ba∇dbl≤/parenleftbigg\n1+C\nL/parenrightbigg\n/ba∇dblel,L/ba∇dbl+C\nL2.\nTherefore, by Gronwall’s inequality, /ba∇dbleL,L/ba∇dbl≤O(1/L), which gives\n|zL(x,1)−zL,L(x)|→0. 
(44)\nNow with ( 44), we only need to show\nlim\nk→∞zLk(x,1)→zY(x,1),\nwhich is equivalent to showing that for any ǫ, there exists K >0 such that for any k > K, we\nhave\n/ba∇dblzLk(x,1)−zY(x,1)/ba∇dbl≤ǫ.\nFor a large integer N, letti,N=i/N. By the definition of zYandzLk, we have\nzLk(x,ti+1,N) =zLk(x,ti,N)+/integraldisplayti+1,N\nti,NULksσ(WLkszLk(x,s))ds,\nand\nzY(x,ti+1,N) =zY(x,ti,N)+/integraldisplayti+1,N\nti,NEρtUσ(WzY(x,s))ds.\nLetri,N(x) =zY(x,ti,N)−zLk(x,ti,N), and note that{ULk\nt}and{WLk\nt}are bounded, we have\n/ba∇dblri+1,N(x)/ba∇dbl ≤/parenleftbigg\n1+C\nN/parenrightbigg\n/ba∇dblri,N/ba∇dbl+C\nN2\n+/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/integraldisplayti+1,N\nti,N/bracketleftbig\nULksσ(WLkszY(x,s))−EρtUσ(WzY(x,s))/bracketrightbig\nds/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble,\n28\nfor some constant C. Using the theorem for Young measures [ 23,3], there exists a sufficiently\nlargeK, such that for all k > K, we have\n/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/integraldisplayti+1,N\nti,N/bracketleftbig\nULksσ(WLkszY(x,s))−EρtUσ(WzY(x,s))/bracketrightbig\nds/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble≤1\nN2,\nfor all 0≤i≤N−1. By Gronwall’s inequality, there exists a constant ˜Csuch that\n/ba∇dblrN,N(x)/ba∇dbl≤˜C\nN.\nIf we take N=ǫ/˜C, we have\n/ba∇dblzLk(x,1)−zY(x,1)/ba∇dbl≤ǫ,\nfor sufficiently large k. This shows that f=fα,{ρt}.\nTo bound theD∞norm of f, takeFas the indicator function of {|U|≤c0,|W|≤c0}cand\napply the theorem for Young measures, we obtain that for any t∈[0,1], the support of ρtlies\nin{|U|≤c0,|W|≤c0}. Hence, f∈D∞. To estimate/ba∇dblf/ba∇dblD∞, consider N∞(t) defined by ( 16),\nsince the elements of UandWare bounded by c0, we have\n˙N∞(t)≤mc2\n0EN∞(t),\nwhereEis an all-one D×Dmatrix. Therefore, we have\nN∞(1)≤emc2\n0Ee≤2Dem(c2\n0+1)\nme.\nSince the elements of αare also bounded by c0, we get\n/ba∇dblf/ba∇dblD∞≤|α|TN∞(1)≤2D2em(c2\n0+1)c0\nm.\nFinally, if/ba∇dblfL/ba∇dblD1≤c1holds for all L >0, then using the technique of treating zY(x,t) on\nN1(t), we obtain/ba∇dblf/ba∇dblD1≤c1.\n3.5.6 Proof of Theorem 13\nSimilar to the proof of Theorem 11, we can define a discrete analogy of the ˆD1norm for residual\nnetwork\n/ba∇dblΘ/ba∇dblWP=|α|TL/productdisplay\nl=1/parenleftbigg\nI+2\nL|Ul||Wl|/parenrightbigg\ne.\nUsing the same techniques as for the direct approximation th eorem, we can show that any\nfunctions in ˆDQ\n2can be approximated by a series of residual networks fL(·;ΘL) with depth L\ntends to infinity and /ba∇dblΘL/ba∇dblWP≤9Q. Here we use WP (weighted path) to denote the discrete\nnorm because this norm is a weighted version of the original p ath norm, and assigns larger\nweights for those paths going through more non-linearities . LetFQbe the set of all residual\nnetworks whose weighted path norms are bounded by Q, i.e.\nFQ={f(·;Θ) :f(·;Θ) is a residual network and /ba∇dblΘ/ba∇dblWP≤Q},\n29\nand letFQbe the closure of FQ. Then, by the direct approximation results, ˆDQ\n2⊂ˆDQ\n1⊂FQ.\nHence, Rad n(ˆDQ\n2)≤Radn(F9Q). 
On the other hand, in [ 10] it is proven that\nRadn(FQ)≤2Q/radicalbigg\n2log(2d)\nn.\nTherefore,\nRadn(ˆDQ\n2)≤18Q/radicalbigg\n2log(2d)\nn\n4 Concluding remarks\nAs far as the high dimensional approximation theory is conce rned, we are interested in approx-\nimation schemes (or machine learning models) that satisfy\n/ba∇dblf−fm/ba∇dbl2≤C0γ(f)2\nm\nforfis a certain function space Fdefined by the particular approximation scheme or machine\nlearning model. Here γis a functional defined on F, typically a norm for the function space.\nIt plays the role of the variance in the context of Monte Carlo integration. A machine learning\nmodel is preferred if its associated function space Fis large and the functional γis small.\nHowever, practical machine learning models can only work wi th a finite dataset on which\nthe values of the target function are known. This results in a n additional error, the estimation\nerror, in the total error of the machine learning model. The e stimation error is controlled by\nthe Rademacher complexity of the hypothesis space, which ca n be thought of as a truncated\nversion of the space F. It just so happens that for the spaces identified here the Rad emacher\ncomplexity has the optimal estimates:\nRadn(FQ)≤C0Q√n.\nThis is also true for the RKHS. It is not clear whether this is a coincidence, or there are some\nmore fundamental reasons behind.\nWhatever the reason, the combination of these two results im ply that the generalization\nerror (also called population risk) should have the optimal scalingO(1/m) +O(1/√n) for all\nthree methods: the kernel method, the two-layer neural netw orks and residual networks. The\ndifference lies in the coefficients hidden in the above expressi on. These coefficients are the\nnorms of the target function in the corresponding function s paces. In this sense, going from\nthe kernel method to two-layer neural networks and to deep re sidual neural networks is like a\nvariance reduction process since the value of the norms decr eases in this process. In addition,\nthe function space Fexpands substantially from some RKHS to the Barron space and to the\nflow-induced function space.\nAcknowledgement: The work presented here is supported in part by a gift to Princ eton\nUniversity from iFlytek and the ONR grant N00014-13-1-0338 .\n30\nReferences\n[1] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American math-\nematical society , 68(3):337–404, 1950.\n[2] Francis Bach. Breaking the curse of dimensionality with convex neural networks. Journal\nof Machine Learning Research , 18(19):1–53, 2017.\n[3] John M Ball. A version of the fundamental theorem for youn g measures. In PDEs and\ncontinuum models of phase transitions , pages 207–215. Springer, 1989.\n[4] Andrew R. Barron. Universal approximation bounds for su perpositions of a sigmoidal\nfunction. IEEE Transactions on Information theory , 39(3):930–945, 1993.\n[5] Andrew R Barron. Approximation and estimation bounds fo r artificial neural networks.\nMachine Learning , 14(1):115–133, 1994.\n[6] Peter L. Bartlett and Shahar Mendelson. Rademacher and g aussian complexities: Risk\nbounds and structural results. Journal of Machine Learning Research , 3(Nov):463–482,\n2002.\n[7] AlbertBenveniste, Michel M´ etivier, andPierrePriour et.Adaptive algorithms and stochastic\napproximations , volume 22. Springer Science & Business Media, 2012.\n[8] Philippe G Ciarlet. The finite element method for ellipti c problems. 
Classics in applied\nmathematics , 40:1–511, 2002.\n[9] RonaldADeVore andGeorge GLorentz. Constructive approximation , volume303. Springer\nScience & Business Media, 1993.\n[10] Weinan E, Chao Ma, and Qingcan Wang. A priori estimates o f the population risk for\nresidual networks. arXiv preprint arXiv:1903.02154 , 2019.\n[11] Weinan E, Chao Ma, and Lei Wu. A priori estimates of the po pulation risk for two-layer\nneural networks. Communications in Mathematical Sciences , 17(5):1407–1425, 2019; arXiv\npreprint arXiv:1810.06397.\n[12] Weinan E and Stephan Wojtowytsch. Representation form ulas and pointwise properties for\nbarron functions. arXiv preprint arXiv:2006.05982 , 2020.\n[13] Ronen Eldan and Ohad Shamir. The power of depth for feedf orward neural networks. In\nConference on learning theory , pages 907–940, 2016.\n[14] Arnulf Jentzen, Diyora Salimova, and Timo Welti. A proo f that deep artificial neural net-\nworks overcome the curse of dimensionality in the numerical approximation of kolmogorov\npartial differential equations with constant diffusion and non linear drift coefficients. arXiv\npreprint arXiv:1809.07321 , 2018.\n[15] Jason M Klusowski and Andrew R Barron. Risk boundsfor hi gh-dimensional ridge function\ncombinations including neural networks. arXiv preprint arXiv:1607.01434 , 2016.\n[16] Vera Kurkov´ a and Marcello Sanguineti. Bounds on rates of variable-basis and neural-\nnetwork approximation. IEEE Transactions on Information Theory , 47(6):2659–2665, 2001.\n31\n[17] Harold Kushner and G George Yin. Stochastic approximation and recursive algorithms and\napplications , volume 35. Springer Science & Business Media, 2003.\n[18] Zhong Li, Chao Ma, and Lei Wu. Complexity measures for ne ural networks with general\nactivation functions using path-based norms. arXiv preprint arXiv:2009.06132 , 2020.\n[19] Hrushikesh Narhar Mhaskar. On the tractability of mult ivariate integration and approxi-\nmation by neural networks. Journal of Complexity , 20(4):561–590, 2004.\n[20] Behnam Neyshabur, Srinadh Bhojanapalli, David Mcalle ster, and Nati Srebro. Exploring\ngeneralization in deep learning. In Advances in Neural Information Processing Systems 30 ,\npages 5949–5958, 2017.\n[21] Ali Rahimi and Benjamin Recht. Uniform approximation o f functions with random bases.\nIn2008 46th Annual Allerton Conference on Communication, Contro l, and Computing ,\npages 555–561. IEEE, 2008.\n[22] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory\nto algorithms . Cambridge university press, 2014.\n[23] Laurence Chisholm Young. Lecture on the calculus of variations and optimal control th eory,\nvolume 304. American Mathematical Soc., 2000.\n32",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "sevWZwWovQB",
"year": null,
"venue": null,
"pdf_link": "https://arxiv.org/pdf/1904.04326.pdf",
"forum_link": "https://openreview.net/forum?id=sevWZwWovQB",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A comparative analysis of the optimization and generalization property of two-layer neural network and random feature models under gradient descent dynamics",
"authors": [
"Weinan E",
"Chao Ma",
"Lei Wu"
],
"abstract": "A fairly comprehensive analysis is presented for the gradient descent dynamics for\ntraining two-layer neural network models in the situation when the parameters in both\nlayers are updated. General initialization schemes as well as general regimes for the\nnetwork width and training data size are considered. In the over-parametrized regime,\nit is shown that gradient descent dynamics can achieve zero training loss exponentially\nfast regardless of the quality of the labels. In addition, it is proved that throughout the\ntraining process the functions represented by the neural network model are uniformly\nclose to that of a kernel method. For general values of the network width and training\ndata size, sharp estimates of the generalization error is established for target functions in\nthe appropriate reproducing kernel Hilbert space.",
"keywords": [],
"raw_extracted_content": "A Comparative Analysis of Optimization and Generalization\nProperties of Two-layer Neural Network and Random Feature\nModels Under Gradient Descent Dynamics\nWeinan E\u00031,2,3, Chao May2, and Lei Wuz2\n1Department of Mathematics, Princeton University\n2Program in Applied and Computational Mathematics, Princeton University\n3Beijing Institute of Big Data Research\nAbstract\nA fairly comprehensive analysis is presented for the gradient descent dynamics for\ntraining two-layer neural network models in the situation when the parameters in both\nlayers are updated. General initialization schemes as well as general regimes for the\nnetwork width and training data size are considered. In the over-parametrized regime,\nit is shown that gradient descent dynamics can achieve zero training loss exponentially\nfast regardless of the quality of the labels. In addition, it is proved that throughout the\ntraining process the functions represented by the neural network model are uniformly\nclose to that of a kernel method. For general values of the network width and training\ndata size, sharp estimates of the generalization error is established for target functions in\nthe appropriate reproducing kernel Hilbert space.\n1 Introduction\nOptimization and generalization are two central issues in the theoretical analysis of machine\nlearning models. These issues are of special interest for modern neural network models,\nnot only because of their practical success [ 18,19], but also because of the fact that these\nneural network models are often heavily over-parametrized and traditional machine learning\ntheory does not seem to work directly [ 21,30]. For this reason, there has been a lot of recent\ntheoretical work centered on these issues [ 15,16,12,11,2,8,10,31,29,28,25,27]. One issue\nof particular interest is whether the gradient descent (GD) algorithm can produce models\nthat optimize the empirical risk and at the same time generalize well for the population risk.\nIn the case of over-parametrized two-layer neural network models, which will be the focus of\nthis paper, it is generally understood that as a result of the non-degeneracy of the associated\nGram matrix [ 29,12], optimization can be accomplished using the gradient descent algorithm\nregardless of the quality of the labels, in spite of the fact that the empirical risk function is\nnon-convex. In this regard, one can say that over-parametrization facilitates optimization.\n\[email protected]\[email protected]\[email protected]\n1arXiv:1904.04326v2 [cs.LG] 21 Feb 2020\nThe situation with generalization is a di\u000berent story. There has been a lot of interest\non the so-called \\implicit regularization\" e\u000bect [ 21], i.e. by tuning the parameters in the\noptimization algorithms, one might be able to guide the algorithm to move towards network\nmodels that generalize well, without the need to add any explicit regularization terms (see\nbelow for a review of the existing literature). But despite these e\u000borts, it is fair to say that\nthe general picture has yet to emerge.\nIn this paper, we perform a rather thorough analysis of the gradient descent algorithm\nfor training two-layer neural network models. We study the case in which the parameters in\nboth the input and output layers are updated { the case found in practice. 
In the heavily\nover-parametrized regime, for general initializations, we prove that the results of [ 12] still hold,\nnamely, the gradient descent dynamics still converges to a global minimum exponentially fast,\nregardless of the quality of the labels. However, we also prove that the functions obtained are\nuniformly close to the ones found in an associated kernel method, with the kernel de\fned by\nthe initialization.\nIn the second part of the paper, we study the more general situation when the assumption\nof over-parametrization is relaxed. We provide sharp estimates for both the empirical\nand population risks. In particular, we prove that for target functions in the appropriate\nreproducing kernel Hilbert space (RKHS) [ 3], the generalization error can be made small if\ncertain early stopping strategy is adopted for the gradient descent algorithm.\nOur results imply that under this setting over-parametrized two-layer neural networks are\na lot like the kernel methods: They can always \ft any set of random labels, but in order to\ngeneralize, the target functions have to be in the right RKHS. This should be compared with\nthe optimal generalization error bounds proved in [13] for regularized models.\n1.1 Related work\nThe seminal work of [ 30] presented both numerical and theoretical evidence that over-\nparametrized neural networks can \ft random labels. Building upon earlier work on the\nnon-degeneracy of some Gram matrices [ 29], Du et al. went a step further by proving that the\nGD algorithm can \fnd global minima of the empirical risk for su\u000eciently over-parametrized\ntwo-layer neural networks [ 12]. This result was extended to multi-layer networks in [ 11,2]\nor a general setting [ 9]. The related result for in\fnitely wide neural networks was obtained\nin [14]. In this paper, we prove a new optimization result (Theorem 3.2) that removes the\nnon-degeneracy assumption of the input data by utilizing the smoothness of the target function.\nAlso the requirement of the network width is signi\fcantly relaxed.\nThe issue of generalization is less clear. [ 10] established generalization error bounds for\nsolutions produced by the online stochastic gradient descent (SGD) algorithm with early\nstopping when the target function is in a certain RKHS. Similar results were proved in [ 20]\nfor the classi\fcation problem, in [ 8] for o\u000fine SGD algorithms, and in [ 1] for GD algorithm.\nThese results are similar to ours, but we do not require the network to be over-parametrized.\nMoreover, in Theorem 3.3 we show that in this setting neural networks are uniformly close to\nthe random feature models if the network is highly over-parametrized.\nMore recently in [ 4], a generalization bound was derived for GD solutions using a data-\ndependent norm. This norm is bounded if the target function belongs to the appropriate\nRKHS. However, their error bounds are not strong enough to rule out the possibility of\ncurse of dimensionality. Indeed the results of the present paper do suggest that curse of\ndimensionality does occur in their setting (see Theorem 3.4).\n2\n[14] provided by a heuristic argument that the GD solutions of a in\fnitely-wide neural\nnetwork are captured by the so-called neural tangent kernel. In this paper, we provide a\nrigorous proof of the non-asymptotic version of the result for the two-layer neural network\nunder weaker conditions.\n2 Preliminaries\nThroughout this paper, we will use the following notation [ n] =f1;2;:::;ng, ifnis a positive\ninteger. 
We usekkandkkFto denote the `2and Frobenius norms for matrices, respectively.\nWe let Sd\u00001=fx:kxk= 1g, and use\u00190to denote the uniform distribution over Sd\u00001. We\nuseX.Yto indicate that there exists an absolute constant C0>0 such that X\u0014C0Y, and\nX&Yis similarly de\fned. If fis a function de\fned on Rdand\u0016is a probability distribution\nonRd, we letkfk\u0016= (R\nRdf(x)2d\u0016(x))1=2.\n2.1 Problem setup\nWe focus on the regression problem with a training data set given by f(xi;yi)gn\ni=1, i.i.d.\nsamples drawn from a distribution \u001a, which is assumed \fxed but only known through the\nsamples. In this paper, we assume kxk2= 1 andjyj\u00141. We are interested in \ftting the data\nby a two-layer neural network:\nfm(x; \u0002) = aT\u001b(Bx); (1)\nwhere a2Rm;B= (b1;b2;\u0001\u0001\u0001;bm)T2Rm\u0002dand \u0002 =fa;Bgdenote all the parameters.\nHere\u001b(t) =max(0;t) is the ReLU activation function. We will omit the subscript min the\nnotation for fmif there is no danger of confusion. In formula (1), we omit the bias term for\nnotational simplicity. The e\u000bect of the bias term can be incorporated if we think of xas\n(x;1)T.\nThe ultimate goal is to minimize the population risk de\fned by\nR(\u0002) =1\n2Ex;y[(f(x; \u0002)\u0000y)2]:\nBut in practice, we can only work with the following empirical risk\n^Rn(\u0002) =1\n2nnX\ni=1(f(xi; \u0002)\u0000yi)2:\nGradient Descent We are interested in analyzing the property of the following gradient\ndescent algorithm: \u0002 t+1= \u0002t\u0000\u0011r^Rn(\u0002t);where\u0011is the learning rate. For simplicity, we\nwill focus on its continuous version, the gradient descent (GD) dynamics:\nd\u0002t\ndt=\u0000r^Rn(\u0002t): (2)\nInitialization \u00020=fa(0);B(0)g. We assume that fbk(0)gm\nk=1are i.i.d. random variables\ndrawn from \u00190, andfak(0)gm\nk=1are i.i.d. random variables drawn from the distribution\nde\fned by Pfak(0) =\fg=Pfak(0) =\u0000\fg=1\n2. Here\fcontrols the magnitude of the\ninitialization, and it may depend on m, e.g.\f=1\nmor1pm. Other initialization schemes can\nalso be considered (e.g. distributions other than \u00190, other ways of initializing a). The needed\nargument does not change much from the ones for this special case.\n3\n2.2 Assumption on the input data\nWith the activation function \u001b(\u0001) and the distribution \u00190, we can de\fne two positive de\fnite\n(PD) functions1\nk(a)(x;x0)def=Eb\u0018\u00190[\u001b(bTx)\u001b(bTx0)];\nk(b)(x;x0)def=Eb\u0018\u00190[\u001b0(bTx)\u001b0(bTx0)hx;x0i]:\nFor a \fxed training sample, the corresponding normalized kernel matrices K(a)= (K(a)\ni;j);K(b)=\n(K(b)\ni;j)2Rn\u0002nare de\fned by\nK(a)\ni;j=1\nnk(a)(xi;xj);\nK(b)\ni;j=1\nnk(a)(xi;xj):(3)\nThroughout this paper, we make the following assumption on the training set.\nAssumption 1. For the given training set f(xi;yi)gn\ni=1, we assume that the smallest eigen-\nvalues of the two kernel matrices de\fned above are both positive, i.e.\n\u0015(a)\nndef=\u0015min(Ka)>0; \u0015(b)\nndef=\u0015min(K(b))>0:\nLet\u0015n= minf\u0015a\nn;\u0015b\nng.\nRemark 1. Note that\u0015(a)\nn\u0014mini2[n]K(a)\ni;i\u00141=n;\u0015(b)\nn\u0014mini2[n]K(a)\ni;i\u00141=n. In general,\n\u0015(a)\nn;\u0015(b)\nndepend on the data set. For any PD functions s(\u0001;\u0001), the Hilbert-Schmidt integral\noperatorTs:L2(Sd\u00001;\u00190)7!L2(Sd\u00001;\u00190) is de\fned by\nTsf(x) =Z\nSd\u00001s(x;x0)f(x0)d\u00190(x0):\nLet \u0003n(Ts) denote its n-th largest eigenvalue. 
If fxign\ni=1are independently drawn from \u00190, it\nwas proved in [ 6] that with high probability \u0015(a)\nn\u0015\u0003n(Tk(a))=2 and\u0015(b)\nn\u0015\u0003n(Tk(b))=2. Using\nthe similar idea, [ 29] provided lower bounds for \u0015(b)\nnbased on some geometric discrepancy,\nwhich quanti\fes the uniformity degree of fxign\ni=1. In this paper, we leave \u0015(a)\nn>0;\u0015(b)\nn>0 as\nour basic assumption.\n2.3 The random feature model\nWe introduce the following random feature model [ 22] as a reference for the two-layer neural\nnetwork model\nfm(x;~a;B0)def=~aT\u001b(B0x); (4)\nwhere a2Rm;B02Rm\u0002d. HereB0is \fxed at the corresponding initial values for the neural\nnetwork model, and is not part of the parameters to be trained. The corresponding gradient\ndescent dynamics is given by\nd~at\ndt=\u00001\nnnX\ni=1(~aT\nt\u001b(B0xi)\u0000yi)\u001b(B0xi): (5)\nThis dynamics is relatively simple since it is linear.\n1We say that a continuous symmetric function kis positive de\fnite if and only if for any x1; : : : ;xn, the\nkernel matrix K= (Ki;j)2Rn\u0002nwithKi;j=k(xi;xj) is positive de\fnite.\n4\n3 Analysis of the over-parameterized case\nIn this section, we consider the optimization and generalization properties of the GD dynamics\nin the over-parametrized regime. We introduce two Gram matrices G(a)(\u0002);G(b)(\u0002)2Rn\u0002n,\nde\fned by\nG(a)\ni;j(\u0002) =1\nnmmX\nk=1\u001b(bT\nkxi)\u001b(bT\nkxj);\nG(b)\ni;j(\u0002) =1\nnmmX\nk=1a2\nkxT\nixj\u001b0(bT\nkxi)\u001b0(bT\nkxj):\nLetG=G(a)+G(b)2Rn\u0002n;ej=f(xj;\u0002)\u0000yjande= (e1;e2;\u0001\u0001\u0001;en)T, it is easy to see\nthat\nkr\u0002^Rnk2=m\nneTGe: (6)\nSince ^Rn=1\n2neTe, we have\n2m\u0015min(G)^Rn\u0014kr \u0002^Rnk2\u00142m\u0015max(G)^Rn:\n3.1 Properties of the initialization\nLemma 1. For any \fxed \u000e>0, with probability at least 1\u0000\u000eover the random initialization,\nwe have\n^Rn(\u00020)\u00141\n2\u0000\n1 +c(\u000e)pm\f\u00012;\nwherec(\u000e) = 2 +p\nln(1=\u000e).\nThe proof of this lemma can be found in Appendix C.\nIn addition, at the initialization, the Gram matrices satisfy\nG(a)(\u00020)!K(a); G(b)(\u00020)!\f2K(b)asm!1:\nIn fact, we have\nLemma 2. For\u000e>0, ifm\u00158\n\u00152nln(2n2=\u000e), we have, with probability at least 1\u0000\u000eover the\nrandom choice of \u00020\n\u0015min(G(\u00020))\u00153\n4(\u0015(a)\nn+\f2\u0015(b)\nn):\nThe proof of this lemma is deferred to Appendix D.\n3.2 Gradient descent near the initialization\nWe de\fne a neighborhood of the initialization by\nI(\u00020)def=f\u0002 :kG(\u0002)\u0000G(\u00020)kF\u00141\n4(\u0015(a)\nn+\f2\u0015(b)\nn)g: (7)\nUsing the lemma above, we conclude that for any \fxed \u000e>0, with probability at least 1 \u0000\u000e\nover the random choices of \u0002 0, we must have\n\u0015min(G(\u0002))\u0015\u0015min(G(\u00020))\u0000kG(\u0002)\u0000G(\u00020)kF\u00151\n2(\u0015(a)\nn+\f2\u0015(b)\nn)\n5\nfor all \u00022I(\u00020).\nFor the GD dynamics, we de\fne the exit time of I(\u00020) by\nt0def= infft: \u0002t=2I(\u00020)g: (8)\nLemma 3. For any \fxed \u000e2(0;1), assume that m\u00158\n\u00152nln(2n2=\u000e). Then with probability at\nleast 1\u0000\u000eover the random choices of \u00020, we have the following holds for any t2[0;t0],\n^Rn(\u0002t)\u0014e\u0000m(\u0015(a)\nn+\f2\u0015(b)\nn)t^Rn(\u00020):\nProof. We have\nd^Rn(\u0002t)\ndt=\u0000kr \u0002^Rnk2\nF\u0014\u0000m(\u0015(a)\nn+\f2\u0015(b)\nn)^Rn(\u0002t);\nwhere the last inequality is due to the fact that \u0002 t2I(\u00020). 
This completes the proof.\nWe de\fne two quantities:\npndef=4q\n^Rn(\u00020)\nm(\u0015(a)\nn+\f2\u0015(b)\nn); qndef=p2\nn+\fpn: (9)\nThe following is the most crucial characterization of the GD dynamics.\nProposition 3.1. For any\u000e>0, assumem\u00151024\u0015\u00002\nnln(n2=\u000e). Then, with probability at\nleast 1\u0000\u000e, we have the following holds for any t2[0;t0],\njak(t)\u0000ak(0)j\u00142pn\nkbk(t)\u0000bk(0)k\u00142qn:\nProof. First, we have\nkrak^Rnk2=\u00001\nnnX\ni=1ei\u001b(xT\nibk)\u00012\u00142kbkk2^Rn(\u0002);\nkrbk^Rnk2=k1\nnnX\ni=1eiak\u001b0(xT\nibk)xik2\u00142a2\nk^Rn(\u0002):\nTo facilitate the analysis, we de\fne the following two quantities,\n\u000bk(t) = max\ns2[0;t]jak(s)j; !k(t) = max\ns2[0;t]kbk(s)k:\n6\nUsing Lemma 3, we have\nkbk(t)\u0000bk(0)k\u0014Zt\n0krbk^Rn(\u0002t0)kdt0\n\u00142Zt\n0\u000bk(t)q\n^Rn(\u0002t0)dt0\n\u00144q\n^Rn(\u00020)\u000bk(t)\nm(\u0015(a)\nn+\f2\u0015(b)\nn)=pn\u000bk(t);\njak(t)\u0000ak(0)j\u0014Zt\n0jrak^Rn(\u0002t0)jdt0\n\u00142Zt\n0!k(t)q\n^Rn(\u0002t0)dt0\n\u00144q\n^Rn(\u00020)!k(t)\nm(\u0015(a)\nn+\f2\u0015(b)\nn)=pn!k(t):(10)\nCombining the two inequalities above, we get\n\u000bk(t)\u0014jak(0)j+pn(1 +pn\u000bk(t)):\nUsing Lemma 1 and the fact that m\u0015maxf16\n\u0015(a)\nn;64c2(\u000e)\n\u0015(b)\nn\u0015(a)\nng, we have\npn\u00144(1 +c(\u000e)pm\f)\nm(\u0015(a)\nn+\f2\u0015(b)\nn)\n\u00144\nm\u0015(a)\nn+4c(\u000e)q\nm\u0015(a)\nn\u0015(b)\nn\u00141\n2: (11)\nTherefore,\n\u000bk(t)\u0014(1\u0000p2\nn)\u00001(pn+\f)\u00142(pn+\f):\nInserting the above estimates back to (10), we obtain\nkbk(t)\u0000bk(0)k\u00142p2\nn+ 2\fpn:\nSincem\u0015maxf16p\n\u0015(a)\nn\u0015(b)\nn;1024c2(\u000e)\n(\u0015(b)\nn)2g, we have\n2\fpn\u00148\f(1 +c(\u000e)pm\f)\nm(\u0015(a)\nn+\f2\u0015(b)\nn)\n\u00148\f\nm(\u0015(a)\nn+\f2\u0015(b)\nn)+8c(\u000e)\npm\u0015(b)\nn\n\u00144\nmq\n\u0015(a)\nn\u0015(b)\nn+8c(\u000e)\npm\u0015(b)\nn\n\u00141\n2: (12)\n7\nTherefore we have !k(t)\u00141 +kbk(t)\u0000bk(0)k\u00142, which leads to\njak(t)\u0000ak(0)j\u0014pn!k(t)\u00142pn:\nThe following lemma provides that how pnandqndepend on\fandm.\nLemma 4. For any\u000e>0, assumem\u00151024\u0015\u00002\nnln(n2=\u000e). LetC(\u000e) = 10c2(\u000e). If\f\u00141, we\nhave\npn\u0014C(\u000e)\npm\u0015(a)\nn\u00121pm+\f\u0013\nqn\u0014C(\u000e)\nm(\u0015(a)\nn)2\u00121\nm+2\fpm+\f2\u0013\n+C(\u000e)\f\nm\u0015(a)\nn+C(\u000e)\f2\npm\u0015(a)\nn:(13)\nIf\f >1, we have\npn\u0014C(\u000e)q\nm\u0015(a)\nn\u0015(b)\nn\nqn\u0014C(\u000e)\npm\u0015(b)\nn:(14)\n3.3 Global convergence for arbitrary labels\nProposition 3.1 and Lemma 4 tell us that no matter how large \fis, we have\nmax\nk2[m]\b\nkbk(t)\u0000bk(0)k;jak(t)\u0000ak(0)j\t\n!0 asm!1:\nThis actually implies that the GD dynamics always stays in I(\u00020), i.e.t0=1.\nTheorem 3.2. For any\u000e2(0;1), assumem&\u0015\u00004\nnn2\u000e\u00001ln(n2=\u000e). Then with probability at\nleast 1\u0000\u000eover the random initialization, we have\n^Rn(\u0002t)\u0014e\u0000m(\u0015(a)\nn+\f2\u0015(b)\nn)t^Rn(\u00020);\nfor anyt\u00150.\nProof. According to Lemma 3, we only need to prove that t0=1. Assumet0<1.\nLet us \frst consider the Gram matrix G(a). 
Since\u001b(\u0001) is 1\u0000Lipschitz and maxkkbk(t0)\u0000\nbk(0)k\u0014qn\u00141, we have\njG(a)\ni;j(\u0002t0)\u0000G(a)\ni;j(\u00020)j=1\nnmmX\nk=1\u0000\n\u001b(bT\nk(t0)xi)\u001b(bT\nk(t0)xj)\u0000\u001b(bT\nk(0)xi)\u001b(bT\nk(0)xj)\u0001\n\u00141\nnmmX\nk=1\u0000\n2kbk(t0)\u0000bk(0)k+kbk(t0)\u0000bk(0)k2\u0001\n\u00143qn\nn:\nThis leads to\nkG(a)(\u0002t0)\u0000G(a)(\u00020)kF\u00143qn: (15)\n8\nNext we turn to the Gram matrix G(b). De\fne the event\nDi;k=fbk(0) :kbk(t0)\u0000bk(0)k\u0014qn;\u001b0(bT\nk(t0)xi)6=\u001b0(bT\nk(0)xi)g:\nSince\u001b(\u0001) is ReLU, this event happens only if jxT\nibk(0)j\u0014qn. By the fact that kxik= 1 and\nbk(0) is drawn from the uniform distribution over the sphere, we have P[Di;k].qn. Therefore\nthe entry-wise deviation of G(b)satis\fes,\nnjG(b)\ni;j(\u0002t0)\u0000G(b)\ni;j(\u00020)j\n\u0014jxT\nixjj2\nm2jmX\nk=1\u0000\na2\nk(t0)\u001b0(bT\nk(t0)xi)\u001b0(bT\nk(t0)xj)\u0000a2\nk(0)\u001b0(bT\nk(0)xi)\u001b0(bT\nk(0)xj)\u0001\nj\n\u00141\nm2jmX\nk=1\u0000\na2\nk(t0)Qk;i;j+Pk\u0001\nj;\nwhere\nQk;i;j=j\u001b0(xT\nibk(t0))\u001b0(xT\njbk(t0))\u0000\u001b0(xT\nibk(0))\u001b0(xT\njbk(0))j\nPk=ja2\nk(t0)\u0000a2\nk(0)j:\nNote that E[Qk;i;j]\u0014P[Dk;i[Dk;j].qn. In addition, by Proposition 3.1, we have\nPk\u0014(\f+ 2pn)2\u0000\f2.qn\na2\nk(t0)\u0014a2\nk(0) +Pk.\f2+qn:\nHence using qn=p2\nn+\fpn\u00141, we obtain\nnE[jG(b)\ni;j(\u0002t0)\u0000G(b)\ni;j(\u00020)j].(\f2+qn)qn+qn\n.(1 +\f2)qn: (16)\nBy the Markov inequality, with probability 1 \u0000\u000e=nwe have\njG(b)\ni;j(\u0002t0)\u0000G(b)\ni;j(\u00020)j\u0014(1 +\f2)qn\n\u000e:\nConsequently, with probability 1 \u0000\u000ewe have\nkG(b)(\u0002t0)\u0000G(b)(\u00020)kF.(1 +\f2)nqn\n\u000e: (17)\nCombining (15) and (17), we get\nkG(t0)\u0000G(0)kF\u0014kG(a)(t0)\u0000G(a)(0)kF+kG(b)(t0)\u0000G(b)(0)kF\n.3qn+(1 +\f2)nqn\n\u000e\n.(n\u000e\u00001+ 1)C(\u000e)\npm\u0015(b)\nn+\f2n\u000e\u00001C(\u000e)\npm\u0015(b)\nn;\nwhere the last inequality comes from Lemma (4). Taking m&\u0015\u00004\nnn2\u000e\u00001ln(n2=\u000e), we get\nkG(t0)\u0000G(0)kF<1\n4(\u0015(a)\nn+\f2\u0015(b)\nn):\nThe above result contradicts the de\fnition of t0. Therefore t0=1.\n9\nRemark 2. Compared with Proposition 3.1, the above theorem imposes a stronger assumption\non the network width: m\u0015poly(\u000e\u00001). This is due to the lack of continuity of \u001b0(\u0001) when\nhandlingkG(b)(\u0002t0)\u0000G(b)(\u00020)kF. If\u001b0(\u0001) is continuous, we can get rid of the dependence\nonpoly(\u000e\u00001). In addition, it is also possible to remove this assumption for the case when\n\f=o(1), since in this case the Gram matrix G=G(a)+\f2G(b)is dominated by G(a).\nRemark 3. Theorem 3.2 is closely related to the result of Du et al. [ 12] where exponential\nconvergence to global minima was \frst proved for over-parametrized two-layer neural networks.\nBut it improves the result of [ 12] in two aspects. First, as is done in practice, we allow the\nparameters in both layers to be updated, while [ 12] chooses to freeze the parameters in the\n\frst layer. Secondly, our analysis does not impose any speci\fc requirement on the scale of the\ninitialization whereas the proof of [12] relies on the speci\fc scaling: \f\u00181=pm.\n3.4 Characterization of the whole GD trajectory\nIn the last subsection, we showed that very wide networks can \ft arbitrary labels. In this\nsubsection, we study the functions represented by such networks. 
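As a quick numerical illustration of the two phenomena discussed in this section, namely that wide networks fit the training data and that the resulting functions stay close to the random feature solution, consider the following sketch (the data, network size, learning rate and number of steps are arbitrary toy choices, and discrete gradient descent is used as a stand-in for the continuous dynamics (2) and (5)). It trains the two-layer network with both layers free and the random feature model (4) with $B$ frozen at $B_0$, from the same initialization with $\beta = 1/\sqrt{m}$, and reports the final empirical risk together with the largest gap between the two trained functions on fresh test points.

import numpy as np

rng = np.random.default_rng(0)
d, n, m = 5, 20, 2000
beta = 1.0 / np.sqrt(m)              # initialization scale, as in Remark 5 below
lr, steps = 5e-4, 10000              # toy step size and step count

# training data on the unit sphere with arbitrary bounded labels (illustrative choice)
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sin(3.0 * X[:, 0])

# shared initialization: rows of B0 uniform on the sphere, a0 entries = +/- beta
B0 = rng.standard_normal((m, d)); B0 /= np.linalg.norm(B0, axis=1, keepdims=True)
a0 = beta * rng.choice([-1.0, 1.0], size=m)

relu = lambda z: np.maximum(z, 0.0)
a_nn, B_nn = a0.copy(), B0.copy()            # two-layer network: both layers trained
a_rf, Phi0 = a0.copy(), relu(X @ B0.T)       # random feature model: first layer frozen at B0

for _ in range(steps):
    pre = X @ B_nn.T
    act = relu(pre)
    e_nn = act @ a_nn - y                                        # network residuals on the data
    grad_a = act.T @ e_nn / n
    grad_B = ((e_nn[:, None] * (pre > 0)) * a_nn).T @ X / n
    a_nn -= lr * grad_a
    B_nn -= lr * grad_B
    e_rf = Phi0 @ a_rf - y                                       # random feature residuals
    a_rf -= lr * (Phi0.T @ e_rf) / n

Xt = rng.standard_normal((500, d)); Xt /= np.linalg.norm(Xt, axis=1, keepdims=True)
risk = np.mean((relu(X @ B_nn.T) @ a_nn - y) ** 2) / 2           # final empirical risk of the network
gap = np.max(np.abs(relu(Xt @ B_nn.T) @ a_nn - relu(Xt @ B0.T) @ a_rf))
print(risk, gap)

The reported gap should shrink as $m$ grows, which is the content of the theorem stated next.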
We show that for highly\nover-parametrized two-layer neural networks, the solution of the GD dynamics is uniformly\nclose to the solution for the random feature model starting from the same initial function.\nTheorem 3.3. Assume\f\u00141. Denote the solution of GD dynamics for the random feature\nmodel by\nfker\nm(x;t) =fm(x;~at;B0);\nwhere ~atis the solution of GD dynamics (5). For any \u000e2(0;1), assume that m&\n\u0015\u00004\nnn2\u000e\u00001ln(n2\u000e\u00001). Then with probability at least 1\u00006\u000ewe have,\njfm(x; \u0002t)\u0000fker\nm(x;t)j.c2(\u000e)\n\u0015(a)\nn\u00121pm+\f+pm\f3\u0013\n; (18)\nwherec(\u000e) = 1 +p\nln(1=\u000e).\nRemark 4. Again the factor \u000e\u00001in the condition for mcan be removed if \u001bis assumed to\nbe smooth or \fis assumed to be small (see the remark at the end of Theorem 3.2).\nRemark 5. If\f=o(m\u00001=6), the right-hand-side of (18) goes to 0 as m!1 . For example,\nif we take\f= 1=pm, we have\njfm(x; \u0002t)\u0000fker\nm(x;t)k.c(\u000e)\n\u0015(a)\nnpm: (19)\nHence this theorem says that the GD trajectory of a very wide network is uniformly close to\nthe GD trajectory of the related kernel method (5).\nProof of Theorem 3.3\nWe de\fne\ng(a)(x;x0) =1\nmnmX\nk=1\u001b(bk(0)Tx)\u001b(bk(0)Tx0)\ng(x;x0;t) =1\nmnmX\nk=1\u0000\n\u001b(bk(t)Tx)\u001b(bk(t)Tx0) +ak(t)2\u001b0(bk(t)Tx)\u001b0(bk(t)Tx0)xTx0\u0001\n:(20)\n10\nRecall the de\fnition of G(\u0002t) in Section 3, we know that G(\u0002t)i;j=gm(xi;xj;t). For any\nx2Sd\u00001, letg(x;t) and g(a)(x) be twon-dimensional vectors de\fned by\ng(a)\ni(x) =g(a)(x;xi)\ngi(x;t) =g(x;xi;t):(21)\nFor GD dynamics (2), de\fne e(t) = (fm(x; \u0002t)\u0000yi)2Rn. Then we have,\nd\ndte(t) =\u0000mG(\u0002t)e(t)\nd\ndtfm(x;at;Bt) =\u0000mg(x;t)Te(t):(22)\nFor GD dynamics (5)of the random feature model, we de\fne ~e(t) = (fm(xi;~at;B0)\u0000yi)2Rn.\nThen, we have\nd\ndt~e(t) =\u0000mG(a)(\u00020)~e(t)\nd\ndtfm(x;~at;B0) =\u0000mg(a)(x)T~e(t):(23)\nFrom (22) and (23), we have\nfm(x;at;Bt) =fm(x;a0;B0)\u0000mZt\n0g(x;s)Te(s)ds;\nfm(x;~at;B0) =fm(x;a0;B0)\u0000mZt\n0g(a)(x)T~e(s)ds;(24)\nLet\nJ1(x;t) =mZt\n0(g(x;s)\u0000g(a)(x))Te(s)ds;\nJ2(x;t) =mZt\n0g(a)(x)T(e(s)\u0000~e(s))ds;\nthen we have\nfm(x; \u0002t) =fm(x;~at;B0) +J1(x;t) +J2(x;t): (25)\nWe are now going to bound J1(x;t) andJ2(x;t).\nWe \frst consider J1. By Theorem (3.1), with probability at least 1 \u0000\u000ewe have\njg(x;x0;t)\u0000g(a)(x;x0)j\u00143qn\nn+(\f+ 2pn)2\nn\n.\f2+qn\nn\n11\nfor anyt\u00150. Therefore, for any x2Sd\u00001, we have\njJ1(x;t)j\u0014mZt\n0kg(x;s)\u0000g(a)(x)kke(s)kds\n\u0014m\u00123qn+\f2\npn\u0013Zt\n0ke(s)kds\n\u0014m\u0000\n3qn+\f2\u0001Zt\n0q\n^Rn(\u0002s)ds\n.qn+\f2\n\u0015(a)\nn+\f2\u0015(b)\nnq\n^Rn(\u00020):\nHence, by the estimates of ^Rn(\u00020) in Lemma 1, we have\njJ1(x;t)j.qn+\f2\n\u0015(a)\nn\u0000\n1 +c(\u000e)pm\f\u0001\n: (26)\nInserting the estimate of qnin Lemma 4, we get\njJ1(x;t)j.c2(\u000e)\n\u0015(a)\nn\u00121pm+\f+pm\f3\u0013\n: (27)\nNext we turn to estimating J2. Let u(t) =e(t)\u0000~e(t). Following (22) and(23), we obtain\nu(0) = 0\nd\ndtu(t) =\u0000mG(a)(\u00020)u(t) +m(G(a)(\u00020)\u0000G(\u0002t))e(t):\nSolving the equation above gives\nu(t) =mZt\n0e\u0000mG(a)(\u00020)(t\u0000s)(G(a)(\u00020)\u0000G(\u0002s))e(s)ds: (28)\nConsider the initializations for which \u0015min(G(a)(\u00020))\u00153\u0015(a)\nn\n4. The probability of this event is\nno less than 1\u0000\u000e. 
For such initializations, we have\nku(t)k\u0014mZt\n0e\u00003\n4m\u0015(a)\nn(t\u0000s)kG(a)(\u00020)\u0000G(\u0002s)kFke(s)kds: (29)\nUsing Proposition (3.1), we conclude that with probability no less than 1 \u00002\u000e, the following\nholds:\nkG(\u0002s)\u0000G(a)(\u00020)kF\u0014kG(a)(\u0002s)\u0000G(a)(\u00020)kF+ max\nk2[m]a2\nk(s) (30)\n\u00143qn+ (\f+ 2pn)2(31)\n.qn+\f2: (32)\n12\nTogether with the fact that ke(s)k\u0014q\n2n^Rn(\u00020)e\u0000m\u0015(a)\nn\n2s, we obtain\nku(t)k.m(\f2+qn)q\n2n^Rn(\u00020)Zt\n0e\u00003\n4m\u0015(a)\nn(t\u0000s)e\u0000m\u0015(a)\nn\n2sds\n\u0014m(\f2+qn)q\n2n^Rn(\u00020)Zt\n0e\u00001\n4m\u0015(a)\nn(t\u0000s)e\u0000m\u0015(a)\nn\n2sds\n.m(\f2+qn)q\n2n^Rn(\u00020)1\n\u0015(a)\nne\u0000m\u0015(a)\nn\n4t(33)\nIn addition, for any x2Sd\u00001, we havekg(a)(x)k\u00141\nmpn. Hence, plugging (33) into J2leads\nto\njJ2(x;t)j\u0014mZt\n0kg(a)(x)kku(s)kds\n.m(\f2+qn)q\n^Rn(\u00020)\n\u0015(a)\nnZt\n0e\u0000m\u0015(a)\nn\n4sds\n.(\f2+qn)q\n^Rn(\u00020): (34)\nSubstituting in the estimates for qnand ^Rn(\u00020), and assuming that \f2\u00141, we obtain, for\nany\u000e>0, with probability no less than 1 \u00003\u000e,\njJ2(x;t)j.c2(\u000e)\n\u0015(a)\nn\u00121pm+\f+pm\f3\u0013\n: (35)\nFinally, combining the estimates of J1andJ2, we conclude that\njfm(x; \u0002t)\u0000fker\nm(x;t)j.c2(\u000e)\n\u0015(a)\nn\u00121pm+\f+pm\f3\u0013\n; (36)\nholds for any \u000e>0 with probability at least 1 \u00006\u000e. This completes the proof of Theorem 3.3.\n3.5 Curse of dimensionality of the implicit regularization\nFrom (24), we have\nfker\nm(x;t) =fm(x; \u00020)\u0000mg(a)(x)Zt\n0~e(s)ds\n=fm(x; \u00020)\u0000nX\ni=1g(a)(x;xi)wi(t); (37)\nwherewi(t) =mRt\n0~ei(s)ds. The second term in the right hand slide of (37) lives in the span\nofn\fxed basis:fg(a)(x;x1);g(a)(x;x2);\u0001\u0001\u0001;g(a)(x;xn)g.\nFor any probability distribution \u0019overSd\u00001, we de\fne\nH\u0019=\u001aZ\nSd\u00001a(w)\u001b(wTx)\u0019(dw) :Z\nSd\u00001a2(w)\u0019(dw)<1\u001b\n:\nFor anyh2H\u0019, de\fnekhk2\nH\u0019=E\u0019[ja2(w)j]. As shown in [ 23],H\u0019is exactly the RKHS with\nthe kernel de\fned by k(x;x0) =E\u0019[\u001b(wTx)\u001b(wTx0)].\n13\nDe\fnition 1 (Barron space) .The Barron space is de\fned as the union of H\u0019, i.e.\nBdef=[\u0019H\u0019:\nThe Barron norm for any h2Bis de\fned by\nkhkBdef= inf\n\u0019khkH\u0019:\nTo signify the dependence on the target function and data set, we introduce the notation:\nAt(f;fxign\ni=1;\u00020) =fker\nm(\u0001;t): (38)\nwhere the right hand side is the GD solution of the random feature model obtained by\nusing the training data fxi;yign\ni=1withyi=f(xi) and \u0002 0as the initial parameters. Let\nBQ=ff2B:kfkB\u0014Qg. We then have the following theorem.\nTheorem 3.4. There exists an absolute constant \u0014>0, such that for any t2[0;+1)\nsup\nf2BQkf\u0000At(f;fxign\ni=1;\u00020)k\u001a\u0015\u0014Q\nd(n+ 1)1=d: (39)\nRemark 6. Combined Theorem 3.4 with Theorem 3.3, we conclude that for any \u000e2(0;1), if\nmis su\u000eciently large, then with probability at least 1 \u0000\u000ewe have\nsup\nf2BQkf\u0000Bt(f;fxign\ni=1;\u00020)k\u001a\u0015\u0014Q\nd(n+ 1)1=d\u0000c2(\u000e)\n\u0015(a)\nn\u00121pm+\f+pm\f3\u0013\n;(40)\nwhere\nBt(f;fxign\ni=1;\u00020) =fm(\u0001;\u0002(t))\ndenotes the solution at time tof the GD dynamics for the two-layer neural network model. If\n\fis su\u000eciently small (e.g. 
\f=o(m\u00001=6)), then we see that the curse of dimensionality also\nholds for the solutions generated by the GD dynamics for the two-layer neural network model.\nSince this statement holds for all time t, no early-stopping strategy is able to \fx this curse of\ndimensionality problem.\nIn contrast, it has been proved in [ 13] that an appropriate regularization can avoid this\ncurse of dimensionality problem, i.e. if we denote by M(f;fxign\ni=1) the estimator for the\nregularized model in [ 13] (see (86) below), then it was shown that for any \u000e > 0, with\nprobability at least 1 \u0000\u000eover the sampling of fxign\ni=1, the following holds\nsup\nf2BQkf\u0000M (f;fxign\ni=1)k\u001a.Qpn\u0010p\nln(d) +p\nln(n=\u000e)\u0011\n: (41)\nThe comparison between (40) and(41) provides a quantitative understanding of the insu\u000e-\nciency of using the random feature model to explain the generalization behavior of neural\nnetwork models.\nTo prove Theorem 3.4, we need the following lemma, which is proved in [5].\n14\nLemma 5. Let^f(!)be the Fourier transform of a function fde\fned on Rd. Let \u0000Q=ff:R\nk!k2\n1j^f(!)jd!<Qg. Then, for any \fxed functions h1;h2;:::;hn, we have\nsup\nf2\u0000Qinf\nh2span(h1;:::;hn)kf\u0000hk\u0015\u0014Q\ndn1=d: (42)\nWe now prove Theorem 3.4.\nProof. As is shown in [7, 17], any function f2\u0000Qcan be represented as\nZ\nSd\u00001a(b)\u001b(bTx)\u0019(db)\nfor some\u0019andkfkH\u0019\u0014Q, which means f2BQ. Hence, \u0000 Q\u001aBQ. Next, since the training\ndatafxign\ni=1and the initialization are \fxed, we have\nAt(f;fxign\ni=1;\u00020)2span\u0010\ng(a)(\u0001;x1);:::;g(a)(\u0001;xn);fm(\u0001; \u00020)\u0011\n:\nTherefore, by Lemma 5, we obtain\nsup\nf2BQkf\u0000At(f;fxign\ni=1;\u00020)k\u001a\u0015sup\nf2BQinf\nh2span (g(a)(\u0001;xi);:::;g(a)(\u0001;xn);fm(\u0001;\u00020))kf\u0000hk\n\u0015sup\nf2\u0000Qinf\nh2span (g(a)(\u0001;xi);:::;g(a)(\u0001;xn);fm(\u0001;\u00020))kf\u0000hk\n\u0015\u0014Q\nd(n+ 1)1=d;\nfor some universal constant \u0014.\n4 Analysis of the general case\nIn this section, we will relax the requirement of the network width. We will make the following\nassumption on the target function.\nAssumption 2. We assume that the target function f\u0003admits the following integral repre-\nsentation\nf\u0003(x) =Z\nSd\u00001a\u0003(b)\u001b(bTx)d\u00190(b); (43)\nwith\r(f\u0003)def= maxf1;supb2Sd\u00001ja\u0003(b)jg<1.\nLetHkabe the RKHS induced by ka(\u0001;\u0001). It was shown in [ 23] thatkf\u0003kHka=p\nE\u00190[ja\u0003(b)j2]\u0014\r(f\u0003). Thus the assumption above implies that f\u00032Hka.\nThe following approximation result is essentially the same as the ones in [ 23,24]. Since\nwe are interested in the explicit control for the norm of the solution, we provide a complete\nproof in Appendix A.\nLemma 6. Assume that the target function f\u0003satis\fes Assumption 2. Then for any \u000e>0,\nwith probability at least 1\u0000\u000eover the choice of B0, there exists a\u00032Rmsuch that\nR(a\u0003;B0)\u0014\r2(f\u0003)\nm\u0010\n1 +p\n2 ln(1=\u000e)\u00112\n(44)\n15\nka\u0003k\u0014\r(f\u0003)pm; (45)\nwhereR(a\u0003;B0) =kfm(\u0001;a\u0003;B0)\u0000f\u0003(\u0001)k2\n\u001ais the population risk.\nThe following generalization bound for the random feature model will be used later.\nLemma 7. 
For \fxedB0, and any\u000e>0, with probability no less than 1\u00003\u000eover the choice\nof the training data, we have\n\f\f\fR(a;B0)\u0000^Rn(a;B0)\f\f\f\u00142(2pmkak+ 1)2\npn \n1 +s\n2 ln\u00122\n\u000e(kak+1\nkak)\u0013!\n(46)\nfor any a2Rm.\nPlease see Appendix B for the proof.\n4.1 Optimization results\nWe \frst show that the gradient descent algorithm can reduce the empirical risk to O(1\nm+1pn).\nHere we will assume m\u0015n. This assumption is not used in the next subsection, except for\nCorollary 4.3.\nTheorem 4.1. Take\f=c\nmfor some absolute constant c. Assume that the target function\nf\u0003satis\fes Assumption 2, and kf\u0003k1\u00141. Then, for any \u000e2(0;1), with probability no less\nthan 1\u00004\u000ewe have\n^Rn(at;Bt)\u0014C\u00121\nm+1\nmt+1pn\u0013\n;\nfor anyt>0, whereCis a constant depending on \u000e,\r(f\u0003)andc.\nThe next three lemmas give bounds on the changes of the parameters.\nLemma 8. Let\f=c\nm, andTbe a \fxed constant. Then there exists constant CTdepending\nonT, such that for any 0\u0014t\u0014T,\nkatk\u0014CT\u0012cpm+pmt\u0013\n;kBtk\u0014CT\u0012ctpm+pm\u0013\n; (47)\nand\nkBt\u0000B0k\u0014CT(c+ 1)\u0012cpmt+pm\n2t2\u0013\n: (48)\nProof. By the gradient descent dynamics, we have\nkak(t)k\u0014kak(0)k+Zt\n0kbk(s)kq\n^Rn(a0;B0)ds\u0014kak(0)k+ (\r(f\u0003) + 1)Zt\n0kbk(s)kds\nkbk(t)k\u0014k bk(0)k+Zt\n0kak(s)kq\n^Rn(a0;B0)ds\u0014kbk(0)k+ (\r(f\u0003) + 1)Zt\n0kak(s)kds\n16\nSincekak(0)k=c\nmandkbk(0)k= 1, we have\nkak(t)k\u0014cosh((c+ 1)t)c\nm+ sinh((c+ 1)t);\nkbk(t)k\u0014sinh((c+ 1)t)c\nm+ cosh((c+ 1)t):\nIft\u0014T, since cosh(( c+ 1)t)\u0014e(c+1)T+1\n2and sinh((c+ 1)t)\u0014e(c+1)T+1\n2t, we have\nkak(t)k\u0014CT\u0010c\nm+t\u0011\n;\nkbk(t)k\u0014CT\u0012ct\nm+ 1\u0013\n;\nwithCT=e(c+1)T+1\n2. Hence, we have\nkatk\u0014CT\u0012cpm+pmt\u0013\n;kBtk\u0014CT\u0012ctpm+pm\u0013\n: (49)\nForkBt\u0000B0k, consider a more re\fned estimate\nkbk(t)\u0000bk(0)k\u0014Zt\n0kak(s)kq\n^Rn(a0;B0)ds\u0014(c+ 1)Zt\n0kak(s)kds: (50)\nPlugging in the above estimate for ak, we obtain\nkBt\u0000B0k\u0014CT(c+ 1)\u0012cpmt+pm\n2t2\u0013\n: (51)\nLemma 9. Let\r=\r(f\u0003),\f=c\nm, and assumepm\u0015\r. Then, for any \u000e>0, with probability\nno less than 1\u00004\u000e, we have for any 0\u0014t\u0014T,\nk~atk\u0014~CT\u00121pm+p\ntpm+p\nt\nn1=4\u0013\n: (52)\nwhere ~CTis a constant.\nProof. 
Fork~atk, consider the Lyapunov function\nJ(t) =t\u0010\n^Rn(~at;B0)\u0000^Rn(a\u0003;B0)\u0011\n+1\n2k~at\u0000a\u0003k2: (53)\nSince ^Rn(~at;B0) is convex with respect to ~at, we haved\ndtJ(t)\u00140, which implies J(t)\u0014J(0).\nHence we have\nt(^Rn(~at;B0)\u0000^Rn(a\u0003;B0)) +1\n2k~at\u0000a\u0003k2\u00141\n2ka0\u0000a\u0003k2: (54)\nSince ^Rn(~at;B0)\u00150, we obtain\nk~at\u0000a\u0003k2\u00142t^Rn(a\u0003;B0)) +ka0\u0000a\u0003k2: (55)\n17\nBy Lemma 6 and Lemma 7, whenpm\u0015\r, with probability no less than 1 \u00004\u000e, we have\n^Rn(a\u0003;B0) =R(a\u0003;B0) +^Rn(a\u0003;B0)\u0000R(a\u0003;B0)\n\u0014\r2\nm \n1 +r\n2 log(1\n\u000e)!2\n+2(2pmka\u0003k+ 1)2\npn \n1 +s\n2 ln(2\n\u000e(ka\u0003k+1\nka\u0003k))!\n\u00142(2\r+ 1)20\n@1 +s\n2 log(4pm\n\r\u000e)1\nA2\u00121\nm+1pn\u0013\n: (56)\nTherefore we have\nk~atk2\u00142k~at\u0000a\u0003k2+ 2ka\u0003k2\n\u00142ka\u0003k2+ 2ka0\u0000a\u0003k2+ 4t^Rn(a\u0003;B0)\n\u00144\r2+ 2c2\nm+ 8(2\r+ 1)20\n@1 +s\n2 log(4pm\n\r\u000e)1\nA2\u00121\nm+1pn\u0013\nt: (57)\nLet~C= maxfp\n4\r2+ 2c2;2p\n2(2\r+ 1)\u0012\n1 +q\n2 log(4pm\n\r\u000e)\u0013\ng, we get\nk~atk\u0014~C\u00121pm+p\ntpm+p\nt\nn1=4\u0013\n: (58)\nLemma 10. Under the assumptions of Lemmas 8 and 9, for any 0\u0014t\u0014T, we have\nkat\u0000~atk\u0014CTt2\nm(1 +mt)(t+m)\u00121 +p\ntpm+p\nt\nn1=4\u0013\n: (59)\nfor some constant CT.\nProof. Note that\nd\ndt(at\u0000~at) =\u00001\nnnX\ni=1\u0000\nei(t)\u001b(Btxi)\u0000~ei(t)\u001b(B0xi)\u0001\n=\u00001\nnnX\ni=1aT\nt\u001b(Btxi)\u001b(Btxi) +1\nnnX\ni=1~aT\nt\u001b(B0xi)\u001b(B0xi)\n+1\nnnX\ni=1f\u0003(x)T(\u001b(Btxi)\u0000\u001b(B0xi))\n=\u00001\nnnX\ni=1\u001b(Btxi)\u001b(Btxi)T(at\u0000~at) +1\nnnX\ni=1(\u001b(BT\ntxi)\u001b(Btxi)T\u0000\u001b(B0xi)\u001b(B0xi)T)~at\n+1\nnnX\ni=1f\u0003(x)T(\u001b(Btxi)\u0000\u001b(B0xi)): (60)\n18\nMultiplying at\u0000~aton both sides of (60), we get\nd\ndtkat\u0000~atk2\u0014(at\u0000~at)T2\nnnX\ni=1(\u001b(Btxi)\u001b(Btxi)T\u0000\u001b(B0xi)\u001b(B0xi)T)~at\n+ (at\u0000~at)T2\nnnX\ni=1f\u0003(x)T(\u001b(Btxi)\u0000\u001b(B0xi))\n\u00142kBt\u0000B0k(kBtkk~atk+kB0kk~atk+ 1)kat\u0000~atk: (61)\nUsing the estimates in Lemmas 8 and 9, we obtain\nkat\u0000~atk\u00143C2\nT~CT(1 +c)3t2\nm(1 +mt)(t+m)\u00121 +p\ntpm+p\nt\nn1=4\u0013\n: (62)\nProof of Theorem 4.1\nLet ^\u001a=1\nnPn\ni=1\u000exi, then we have\n^Rn(at;Bt) =kf(\u0001;at;Bt)\u0000f\u0003(\u0001)k2\n^\u001a\n\u00143\u0000\nkf(\u0001;at;Bt)\u0000f(\u0001;~at;Bt)k2\n^\u001a+kf(\u0001;~at;Bt)\u0000f(\u0001;~at;B0)k2\n^\u001a\n+^Rn(~at;B0)\u0011\n: (63)\nBy Cauchy-Schwartz, we have\nkf(x;At;Bt)\u0000f(x;~at;Bt)k2\n^\u001a\u0014kat\u0000~atk2kBtk2; (64)\nkf(x;~at;Bt)\u0000f(x;~at;B0)k2\n^\u001a\u0014k~atk2kBt\u0000B0k2: (65)\nFor^Rn(~at;B0), from Lemma 6, with probability 1 \u0000\u000e, there exists a\u0003that satis\fes (44).\nThus we have\n^Rn(~at;B0) =\u0010\n^Rn(~at;B0)\u0000^Rn(a\u0003;B0)\u0011\n+\u0010\n^Rn(a\u0003;B0)\u0000R(a\u0003;B0)\u0011\n=:I1+I2: (66)\nBy Lemma 7, we can bound I2as follows,\nI2\u00142(2pmka\u0003k+ 1)2\npn \n1 +s\n2 ln(2\n\u000e(ka\u0003k+1\nka\u0003k))!\n: (67)\nForI1, consider the Lyapunov function\nJ(t) =t\u0010\n^Rn(~at;B0)\u0000^Rn(a\u0003;B0)\u0011\n+1\n2k~at\u0000a\u0003k2: (68)\nSince ^Rn(~at;B0) is convex with respect to ~at, we haved\ndtJ(t)\u00140, which implies J(t)\u0014J(0).\nHence we have\n^Rn(~at;B0)\u0014^Rn(a\u0003;B0) +ka0\u0000a\u0003k2\n2t: (69)\n19\nCombining all the estimates above, we conclude that for any \u000e>0, with probability larger\nthan 1\u00004\u000e, we 
have\n^Rn(at;Bt)\u00143kat\u0000~atk2kBtk2+ 3k~atk2kBt\u0000B0k2\n+6(2pmka\u0003k+ 1)2\npn \n1 +s\n2 ln(2\n\u000e(ka\u0003k+1\nka\u0003k))!\n+\r2\nm \n1 +r\n2 ln(1\n\u000e)!2\n+ka0\u0000a\u0003k2\n2t: (70)\nFor the estimate on a\u0003, by Lemma 6, we have ka\u0003k\u0014\rpm. To boundka0\u0000a\u0003k, we have\nka0\u0000a\u0003k\u0014k a0k+ka\u0003k\u0014c+\rpm: (71)\nTogether with the estimates in Lemmas 8, 9 and 10, and without loss of generality assuming\nthat\r\u00151, we obtain\n^Rn(at;Bt)\u0014C \n1\nm+1\nmt+1pn\u0012\n1 +p\nt+p\nmt\nn1=4\u00132\n+t2\nm2(1 +mt)2\u0012\n1 +t2\nm2(t+m)4\u0013\u0012\n1 +p\nt+p\nmt\nn1=4\u00132!\n: (72)\nfort2[0;T], and some constant C(we can choose C= 27C6\nT~C2\nT(c+ 1)8(2\r+ 1)2).\nIf we assume m\u0015nand taket2[0;pn\nm], then we can take T= 1 and obtain\n^R(at;Bt)\u0014C\u00121\nm+1\nmt+1pn\u0013\n80\u0014t\u00141; (73)\nfor some constant C. Moreover, since ^Rn(at;Bt) is non-increasing, ^Rn(at;Bt)\u0014^Rn(apn=m;Bpn=m).\nHence for any t>pn\nm, we have\n^Rn(at;Bt)\u0014C\u00121\nm+2pn\u0013\n; (74)\nfor some constant C. Combining (73) and (74), we complete the proof for all t.\n4.2 Generalization results\nThe following theorem provides an upper bound for the population error of GD solutions at\nany timet2[0;1). It tells that one can use early stopping to reach the optimal error in the\nabsence of over-parametrization.\nTheorem 4.2. Take\f=c\nmfor some constant c. Assume that the target function f\u0003satis\fes\nAssumption 2, and kf\u0003k1\u00141. Fix any positive constant T. Then for \u000e>0, with probability\nno less than 1\u00004\u000ewe have, for t\u0014T\nR(at;Bt)\u0014C \n1\nm+1\nmt+1pn\u0012\n1 +p\nt+p\nmt\nn1=4\u00132\n+t2\nm2(1 +mt)2\u0012\n1 +t2\nm2(t+m)4\u0013\u0012\n1 +p\nt+p\nmt\nn1=4\u00132!\n: (75)\n20\nwhereCis a constant depending only on T,\u000e,\r(f\u0003)andc.\nAs a consequence, we have the following early-stopping results.\nCorollary 4.3 (Early-stopping solution) .Assume that m > n . Lett=pn\nm. Under the\ncondition of Theorem 4.2, we have\nR(at;Bt).1\nm+1pn: (76)\nRemark 7. From these results we conclude that for target functions in a certain RKHS, with\nhigh probability the gradient descent dynamics can \fnd a solution with good generalization\nproperties in a short time. Compared to the long-term analysis in the last section, this\ntheorem does not require mto be very large. It works in the \\mildly over-parameterized\"\nregime.\nThe following Corollary provides a more detailed study of the balance between m,nandt\nto achieve best rates for R(at;Bt).\nCorollary 4.4. Assumem=npfor somep\u00150. Then, ifp\u00147\n8, taket=n\u00003p\n7, we have\nR(at;Bt).n\u00004\n7p: (77)\nIfp>7\n8, taket=n\u0000p+1\n2, we have\nR(at;Bt).n\u00001\n2: (78)\nProof. Letm=npandt=nr. We assume r\u00140, then\n\u0012\n1 +p\nt+p\nmt\nn1=4\u00132\n.1 +mtpn: (79)\nExpand the right hand side of (75), we obtain\nR(at;Bt).n\u0000p+n\u0000r\u0000p+n\u00001\n2+nr+p\u00001+n2r\u00002p+n3r\u0000p\u00001\n2\n+n4r+n5r+p\u00001\n2+n6r+2p+n7r+3p\u00001\n2: (80)\nFor eachp\u00150, we are going to \fnd the corresponding rfor which the maximum value among\nall the terms at the right hand side of (80) is minimized. When r=\u0000p, we have\u0000r\u0000p= 0.\nThus the second term is larger than any other terms. Hence, we only have to consider the case\nwhen\u0000p\u0014r\u00140. In this interval, we only need to compare the terms with powers \u0000r\u0000p,\nr+p\u00001, 6r+ 2pand 7r+ 3p\u00001\n2and neglect all other terms. 
The desired results are then\nobtained by comparing the second term with the other three terms.\nNow we prove Theorem 4.2.\nProof. Similar to (63), we have\nR(at;Bt) =kf(x;at;Bt)\u0000f\u0003(x)k2\n\u001a\n\u00143\u0000\nkf(x;at;Bt)\u0000f(x;~at;Bt)k2\n\u001a+kf(x;~at;Bt)\u0000f(x;~at;B0)k2\n\u001a\n+R(~at;B0)): (81)\n21\nHere\u001ais the distribution of input data x. For the \frst two terms in (81), we have the same\nestimates as in (64) and (65). For R(~at;B0), we have\nR(~at;B0) =\u0010\nR(~at;B0)\u0000^Rn(~at;B0)\u0011\n+\u0010\n^Rn(~at;B0)\u0000^Rn(a\u0003;B0)\u0011\n+\u0010\n^Rn(a\u0003;B0)\u0000R(a\u0003;B0)\u0011\n: (82)\nThe right hand side of (82) has one more term than (66), and additional term can be bounded\nas\nR(~at;B0)\u0000^Rn(~at;B0)\u00142(2pmk~atk+ 1)2\npn \n1 +s\n2 ln(2\n\u000e(k~atk+1\nk~atk))!\n: (83)\nHence, for any \u000e>0, with probability larger than 1 \u00004\u000e, we have\nR(at;Bt)\u00143kat\u0000~atk2kBtk2+ 3k~atk2kBt\u0000B0k2\n+6(2pmk~atk+ 1)2\npn \n1 +s\n2 ln(2\n\u000e(k~atk+1\nk~atk))!\n+6(2pmka\u0003k+ 1)2\npn \n1 +s\n2 ln(2\n\u000e(ka\u0003k+1\nka\u0003k))!\n+\r2\nm \n1 +r\n2 ln(1\n\u000e)!2\n+ka0\u0000a\u0003k2\n2t: (84)\nUsing the estimates of kat\u0000~atk,kBtk,k~atk,kBt\u0000B0k,ka\u0003kandka0\u0000a\u0003kderived in\nthe previous lemmas, and assuming that1+p\ntpm+p\nt\nn1=4\u00141, we obtain\nR(at;Bt)\u0014C \n1\nm+1\nmt+1pn\u0012\n1 +p\nt+p\nmt\nn1=4\u00132\n+t2\nm2(1 +mt)2\u0012\n1 +t2\nm2(t+m)4\u0013\u0012\n1 +p\nt+p\nmt\nn1=4\u00132!\n: (85)\nIn (85), the constant Ccan be chosen as C= 27C6\nT~C2(c+ 1)8(2\r+ 1)2.\n5 Numerical experiments\nIn this section, we present some numerical results to illustrate our theoretical analysis.\n5.1 Fitting random labels\nThe \frst experiment studies the convergence of GD dynamics for over-parametrized two-layer\nneural networks with di\u000berent initializations. We uniformly sample fxign\ni=1fromSd\u00001, and\nfor each xiwe specify a label yi, which is uniformly drawn from [ \u00001;1]. In the experiments,\nwe choosen= 50;d= 50, and network width m= 10;000\u001dn. Six initializations of di\u000berent\nmagnitudes are tested. Figure 1 shows the training curves.\n22\nWe see that the GD algorithm for the neural network models converges exponentially fast\nfor all initializations considered, even for the case when \f=m. This is consistent with the\nresults of Theorem 3.2.\n0 50 100 150 200 250 300\nnumber of iterations10−1010−710−410−1102105108Training lossn=50, d=50, m=10000\nβ= 0\nβ= 1/m\nβ= 1/m1\n2\nβ= 1\nβ=m1\n2\nβ=m\nFigure 1: Convergence of the GD algorithm for over-parameterized two-layer neural network\nmodels on randomly labeled data. Here \fdenotes the magnitude of the initialization of a.\nDi\u000berent curves correspond to di\u000berent initializations.\n5.2 Learning the one-neuron function\nThe next experiment compares the GD dynamics of two-layer neural networks and random\nfeature models. We consider the target function f\u0003(x) :=\u001b(eT\n1x) with e1= (1;0;\u0001\u0001\u0001;0)T2\nRd. The training set is given by f(xi;f\u0003(xi))gn\ni=1, withfxign\ni=1independently drawn from\nSd\u00001.\nWe \frst choose n= 50;d= 10 to build the training set, and then use the gradient descent\nalgorithm with learning rate \u0011= 0:01 to train two-layer neural network and random feature\nmodels. We initialize the models using \f= 0. In addition, 104new samples are drawn to\nevaluate the test error. Figure 2 shows the training and test error curves of the two models of\nthree widths: m= 4;50;1000. 
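For concreteness, here is a minimal NumPy sketch of this experiment (ours, not the authors' implementation): full-batch gradient descent on f_m(x; Θ) = Σ_k a_k σ(b_k^T x) with squared loss. The learning rate and number of steps are assumptions; only n, d, m, the random labels in [-1, 1] and the initialization magnitude β of a are taken from the setup above.

```python
# Minimal sketch (not the authors' code) of the random-label fitting experiment.
import numpy as np

def run(beta, n=50, d=50, m=10_000, lr=0.005, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
    y = rng.uniform(-1.0, 1.0, size=n)                     # random labels in [-1, 1]
    B = rng.standard_normal((m, d)); B /= np.linalg.norm(B, axis=1, keepdims=True)
    a = beta * rng.choice([-1.0, 1.0], size=m)             # |a_k(0)| = beta
    for _ in range(steps):
        Z = X @ B.T
        S = np.maximum(Z, 0.0)                             # ReLU
        e = S @ a - y                                      # residuals f(x_i) - y_i
        grad_a = S.T @ e / n                               # gradient of R_hat_n w.r.t. a
        grad_B = ((e[:, None] * (Z > 0.0)) * a).T @ X / n  # gradient of R_hat_n w.r.t. the b_k
        a -= lr * grad_a
        B -= lr * grad_B
    return 0.5 * np.mean((np.maximum(X @ B.T, 0.0) @ a - y) ** 2)

for beta in [0.0, 1e-4, 1e-2, 1.0]:
    print(f"beta = {beta:g}: final training loss = {run(beta):.3e}")
```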
We see that, when the width is very small, the GD algorithm\nfor the random feature model does not converge, while it does converge for the neural network\nmodel and the resulting model does generalize. This is likely due to the special target function\nwe have chosen here. For the intermediate width ( m= 50), the GD algorithm for both models\nconverges, and it converges faster for the neural network model than for the random feature\nmodel. The test accuracy is slightly better for the resulting neural network model (but not as\ngood as for the case when m= 4). When m= 1000, the behavior of the GD algorithm for\ntwo models is almost the same.\nFinally, we study the generalization properties of neural network models of di\u000berent width.\nWe train two-layer neural networks of di\u000berent width until the training error is below 10\u00005.\nThen we measure the test error. We compare the test error with that of the regularized model\nproposed in [13]:\nminimize \u0002^Rn(\u0002) +\u0015r\nln(d)\nnk\u0002kP; (86)\nwhere\nk\u0002kP=mX\nk=1jakjkbkk:\n23\n0 10000 20000 30000 40000\nNumber of iterations10−810−610−410−2Lossm=4\n0 5000 10000 15000 20000\nNumber of iterations10−410−310−210−1Lossm=50\n0 500 1000 1500 2000\nNumber of iterations10−610−510−410−310−210−1Lossm=1000\nnn: train\nnn: test\nrf: train\nrf: testFigure 2: Training and testing losses for the neural network and random feature models using\nthe GD algorithm, starting from zero initialization of a. Left:m= 4; Middle: m= 50; Right:\nm= 1000.\n100101102103\nWidth: m0.0000.0050.0100.0150.020Test Loss\nReg\nun-Reg\nFigure 3: Testing errors of two-layer neural network models with di\u000berent widths, compared\nwith the regularized neural network model. For the regularized model, we choose \u0015= 0:01.\nThe results are showed in Figure 3. One sees that when the width is small, the test error is\nsmall for both methods. However, when the width becomes very large, the un-regularized\nneural network model does not generalized well. In other words, implicit regularization fails.\nThe above results are consistent with the theoretical lower bound (40), which states that\nlearning with GD su\u000bers from the curse of dimensionality for functions in Barron space.\nHere the one-neuron function serves as a speci\fc example. Intuitively, the one-neuron target\nfunctionf\u0003(x) =\u001b((w\u0003)Tx) only relies on the speci\fc direction w\u0003. However the basis\nf\u001b(wTx)gm\nj=1are uniformly drawn from Sd\u00001. In high dimension, we know hwj;w\u0003i\u00190 for\nanywjuniformly drawn from Sd\u00001. Therefore, it is not surprising to see that learning with\nuniform features su\u000bers from the curse of dimensionality.\n6 Conclusion\nTo put things into perspective, let us \frst recall some results from [13].\n1.One can de\fne a space of functions called the Barron space. The Barron space is the\nunion of all RKHS with kernels de\fned by\nk(x;x0) =Eb\u0018\u0019[\u001b(bTx)\u001b(bTx0)]\nwith respect to all probability distributions \u0019.\n24\n2.For regularized models with a suitably crafted regularization term, optimal generalization\nerror estimates (i.e. rates that are comparable to the Monte Carlo rates) can be\nestablished for all target functions in the Barron space.\nIn the present paper, we have shown that for over-parametrized two-layer neural networks\nwithout explicit regularization, the gradient descent algorithm is su\u000ecient for the purpose of\noptimization. 
But to obtain dimension-independent error rates for generalization, one has to\nrequire that the target function be in the RKHS with a kernel de\fned by the initialization. In\nother words, given a target function in the Barron space, in order for implicit regularization\nto work, one has to know beforehand the kernel function for that target function and use that\nkernel function to initialize the GD algorithm. This requirement is certainly impractical. In\nthe absence of such a knowledge, one should expect to encounter the curse of dimensionality\nfor general target functions in Barron space, as is proved in this paper.\nWe have also studied the case with general network width. Our results point to the same\ndirection as for the over-parametrized regime although in the general case, one has to rely\non early stopping to obtain good generalization error bounds. Our analysis does not rule\nout completely the possibility that in some scaling regimes of n;m;t , the GD algorithm for\ntwo-layer neural network models may have better generalization properties than that of the\nrelated kernel method.\nOur analysis was carried out under special circumstances, e.g. with a particular choice of\n\u00190and a very special domain Sd\u00001for the input. While it is certainly possible to extend this\nanalysis to more general settings, we feel that the value of such an analysis is limited since our\nmain message is a negative one: Without explicit regularization, the generalization properties\nof two-layer neural networks are likely to be no better than that of the kernel method.\nFrom a technical viewpoint, our analysis was facilitated greatly by the fact that the\ndynamics of the b's is much slower than that of the a's, as a consequence of the smallness\nof\f. As a result, the b's are e\u000bectively frozen in the GD dynamics. While this is the same\nsetup as the ones used in practice, one can also imagine putting out an explicit scaling factor\nto account for the smallness of \f, e.g.\nfm(x;\u0002) =1\nmmX\nk=1ak\u001b(bT\nkx) (87)\nas in [ 28,25,27]. In this case, the separation of time scales is no longer valid and one can\npotentially obtain a very di\u000berent picture. While this is certainly an interesting avenue to\npursue, so far there are no results concerning the e\u000bect of implicit regularization in such a\nsetting.\nAcknowledgement: The work presented here is supported in part by a gift to Princeton\nUniversity from iFlytek and the ONR grant N00014-13-1-0338.\nReferences\n[1]Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overpa-\nrameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918 ,\n2018.\n[2]Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning\nvia over-parameterization. arXiv preprint arXiv:1811.03962 , 2018.\n25\n[3]Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American\nmathematical society , 68(3):337{404, 1950.\n[4]Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained anal-\nysis of optimization and generalization for overparameterized two-layer neural networks.\narXiv preprint arXiv:1901.08584 , 2019.\n[5]Andrew R. Barron. Universal approximation bounds for superpositions of a sigmoidal\nfunction. IEEE Transactions on Information theory , 39(3):930{945, 1993.\n[6]Mikio L Braun. Accurate error bounds for the eigenvalues of the kernel matrix. Journal\nof Machine Learning Research , 7(Nov):2303{2328, 2006.\n[7]Leo Breiman. 
Hinging hyperplanes for regression, classi\fcation, and function approxima-\ntion. IEEE Transactions on Information Theory , 39(3):999{1013, 1993.\n[8]Yuan Cao and Quanquan Gu. A generalization theory of gradient descent for learning\nover-parameterized deep ReLU networks. arXiv preprint arXiv:1902.01384 , 2019.\n[9]Lenaic Chizat and Francis Bach. A note on lazy training in supervised di\u000berentiable\nprogramming. arXiv preprint arXiv:1812.07956 , 2018.\n[10]Amit Daniely. SGD learns the conjugate kernel class of the network. In Advances in\nNeural Information Processing Systems , pages 2422{2430, 2017.\n[11]Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent\n\fnds global minima of deep neural networks. arXiv preprint arXiv:1811.03804 , 2018.\n[12]Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably\noptimizes over-parameterized neural networks. In International Conference on Learning\nRepresentations , 2019.\n[13]Weinan E, Chao Ma, and Lei Wu. A priori estimates for two-layer neural networks. arXiv\npreprint arXiv:1810.06397 , 2018.\n[14]Arthur Jacot, Franck Gabriel, and Cl\u0013 ement Hongler. Neural tangent kernel: Convergence\nand generalization in neural networks. In Advances in neural information processing\nsystems , pages 8580{8589, 2018.\n[15]Kenji Kawaguchi. Deep learning without poor local minima. In Advances in neural\ninformation processing systems , pages 586{594, 2016.\n[16]Nitish S. Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping T.P.\nTang. On large-batch training for deep learning: Generalization gap and sharp minima.\nInIn International Conference on Learning Representations (ICLR) , 2017.\n[17]Jason M Klusowski and Andrew R Barron. Risk bounds for high-dimensional ridge\nfunction combinations including neural networks. arXiv preprint arXiv:1607.01434 , 2016.\n[18]Alex Krizhevsky, Ilya Sutskever, and Geo\u000brey E. Hinton. Imagenet classi\fcation with\ndeep convolutional neural networks. In Advances in neural information processing systems ,\npages 1097{1105, 2012.\n26\n[19] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature , 521(7553):436{444, 2015.\n[20]Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic\ngradient descent on structured data. In Advances in Neural Information Processing\nSystems , 2018.\n[21]Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real in-\nductive bias: On the role of implicit regularization in deep learning. arXiv preprint\narXiv:1412.6614 , 2014.\n[22]Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In\nAdvances in neural information processing systems , pages 1177{1184, 2008.\n[23]Ali Rahimi and Benjamin Recht. Uniform approximation of functions with random bases.\nIn2008 46th Annual Allerton Conference on Communication, Control, and Computing ,\npages 555{561. IEEE, 2008.\n[24]Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing min-\nimization with randomization in learning. In Advances in neural information processing\nsystems , pages 1313{1320, 2009.\n[25]Grant Rotsko\u000b and Eric Vanden-Eijnden. Parameters as interacting particles: long time\nconvergence and asymptotic error scaling of neural networks. In Advances in neural\ninformation processing systems , pages 7146{7155, 2018.\n[26]Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory\nto algorithms . 
Cambridge university press, 2014.\n[27]Justin Sirignano and Konstantinos Spiliopoulos. Mean \feld analysis of neural networks:\nA central limit theorem. arXiv preprint arXiv:1808.09372 , 2018.\n[28]Mei Song, A Montanari, and P Nguyen. A mean \feld view of the landscape of two-layers\nneural networks. In Proceedings of the National Academy of Sciences , volume 115, pages\nE7665{E7671, 2018.\n[29]Bo Xie, Yingyu Liang, and Le Song. Diverse neural network learns true target functions.\nInArti\fcial Intelligence and Statistics , pages 1216{1224, 2017.\n[30]Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Under-\nstanding deep learning requires rethinking generalization. In International Conference\non Learning Representations , 2017.\n[31]Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent\noptimizes over-parameterized deep ReLU networks. arXiv preprint arXiv:1811.08888 ,\n2018.\nA Proof of Lemma 6\nProof. For anyB0, leta\u0003(B0) =fa\u0003(b0\nk)=mgm\nk=1, wherea\u0003is the function de\fned in Assump-\ntion 2. Let\nf(x;A\u0003(B0);B0) =mX\nk=1a\u0003(b0\nk)\u001b(xTb0\nk): (88)\n27\nThen we have EB0f(x;A\u0003(B0);B0) =f\u0003(x). Now, consider\nZ(B0) =p\nEx(f(x;A\u0003(B0);B0)\u0000f\u0003(x))2; (89)\nthen if ~B0is di\u000berent from B0at only one b0\nk, we have\njZ(B0)\u0000Z(~B0)j\u00142\r\nm: (90)\nHence, by McDiarmid's inequality, for any \u000e>0, with probability no less than 1 \u0000\u000e, we have\nZ(B0)\u0014EZ(B0) +\rr\n2 ln(1=\u000e)\nm: (91)\nOn the other hand,\nEZ(B0)\u0014p\nEZ2(B0) =p\nExVar(f(x;A\u0003(B0);B0))\u0014\rpm: (92)\nTherefore, we have\nR(a\u0003;B0) =Z2(B0)\u0014\r2\nm \n1 +r\n2 ln(1\n\u000e)!2\n: (93)\nFinally, by Assumption 2, ka\u0003k\u0014\rpm.\nB Proof of Lemma 7\nFor anyQ> 0, letFQ=ff(\u0001;a;B0) :kak\u0014Qg. We can bound the Rademacher complexity\nofFQas follows.\nRad(FQ) =1\nnE\u0018[ sup\nkak\u0014QnX\ni=1\u0018imX\nk=1ak\u001b(xT\nib0\nk)]\n\u00141\nnE\u0018[ sup\nkak\u0014Q;kb0\nkk\u00141mX\nk=1aknX\ni=1\u0018i\u001b(xT\nib0\nk)]\n= sup\nkak\u0014QmX\nk=1ak1\nnE\u0018[ sup\nkb0\nkk\u00141nX\ni=1\u0018i\u001b(xT\nib0\nk)]\n\u0014Qvuutm\nnE\u0018sup\nkb0\nkk\u00141nX\ni=1\u0018i\u001b(xT\nib0\nk)\n=Qq\nmRad(f\u001b(bTx) :kbk\u00141g)\n\u0014pmQ: (94)\nNext, letHQ=f(f(\u0001;a;B0)\u0000f\u0003)2:kak\u0014Qg. Sincejf\u0003(x)j\u00141 for any x, by the\nCauchy-Schwartz inequality, jf(x;a;B0)j\u0014pmQ. Hence we can bound the Rademacher\ncomplexity ofHQby\nRad(HQ)\u00142(pmQ+ 1)Rad(FQ)\u00142mQ2+ 2pmQ; (95)\n28\nusing that ( f(\u0001;a;B0)\u0000f\u0003)2is Lipschitz continuous with Lipschitz constant bounded by\n2pmQ+ 1. Therefore, for any \u000e>0, with probability larger than 1 \u0000\u000e, we have\n\f\f\fR(a;B0)\u0000^Rn(a;B0)\f\f\f\u00142mQ2+ 2pmQpn+ (pmQ+ 1)2r\n2 ln(1=\u000e)\nn; (96)\nfor any awithkak\u0014Q.\nFinally, for any integer k, letQk= 2kand\u000ek= 2\u0000jkj\u000e. Then, with probability larger than\n1\u00001X\nk=\u00001\u000ek\u00151\u00003\u000e; (97)\nwe have that (96) holds for all Q=Qk. Given any a2Rm, we can \fnd a Qksuch that\nQk\u00142kak, which means\n\f\f\fR(a;B0)\u0000^Rn(a;B0)\f\f\f\u00148mkak2+ 4pmkakpn+ (2pmkak+ 1)2r\n2 ln(1=\u000ek)\nn\n\u00142(2pmkak+ 1)2\npn \n1 +s\n2 ln(2\n\u000e(kak+1\nkak))!\n:\nThis completes the proof.\nC Proof of Lemma 1\nProof. De\fneF=fh(a;b) =a\u001b(bTx) :kxk\u00141g. 
By the standard Rademacher complexity bound (see Theorem 26.5 of [26]), we have, with probability at least 1−δ,
sup_{‖x‖≤1} | (1/m) Σ_{k=1}^m a_k σ(b_k^T x) − 0 | ≤ 2 Rad_m(F) + β √(ln(1/δ)/m).
Moreover, since φ_k(·) := a_k σ(·) is β-Lipschitz continuous, by applying the contraction property of Rademacher complexity (see Lemma 26.9 of [26]) we have
Rad_m(F) = (1/m) E_ε[ sup_{‖x‖≤1} Σ_{k=1}^m ε_k a_k σ(b_k^T x) ]
≤ (β/m) E_ε[ sup_{‖x‖≤1} Σ_{k=1}^m ε_k b_k^T x ]
≤ β/√m,
where the last inequality follows from Lemma 26.10 of [26]. Thus with probability 1−δ we have, for any ‖x‖ = 1,
|f(x; Θ_0)| = m | (1/m) Σ_{k=1}^m a_k σ(b_k^T x) | ≤ √m β (2 + √(ln(1/δ))).
Thus R̂_n(Θ_0) ≤ (1/(2n)) Σ_{i=1}^n (1 + |f(x_i; Θ_0)|)^2 ≤ (1/2) (1 + √m β (2 + √(ln(1/δ))))^2.
D Proof of Lemma 2
Proof. For a given ε ≥ 0, define the events
S^a_{i,j} = {Θ_0 : |G^a_{i,j}(Θ_0) − (1/n) k_a(x_i, x_j)| ≤ ε/n},
S^b_{i,j} = {Θ_0 : |G^b_{i,j}(Θ_0) − (1/n) k_b(x_i, x_j)| ≤ ε/n}.
Hoeffding's inequality gives us that
P[S^a_{i,j}] ≥ 1 − e^{−2mε^2}, P[S^b_{i,j}] ≥ 1 − e^{−2mε^2}.
Thus with probability at least (1 − e^{−2mε^2})^{2n^2} ≥ 1 − 2n^2 e^{−2mε^2}, we have
max{ ‖G^a − K^a‖_F, ‖G^b − K^b‖_F } ≤ ε.
Using Weyl's theorem, we have
λ_min(G(Θ_0)) ≥ λ_min(G^a) + β^2 λ_min(G^b)
≥ λ^a_n − ‖G^a − K^a‖_F + β^2 ( λ^b_n − ‖G^b − K^b‖_F )
≥ λ^a_n + β^2 λ^b_n − (1 + β^2) ε.
Taking ε = λ_n/4, we complete the proof.",
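As a small numerical illustration of the concentration statement just proved (our own sketch, not part of the paper): for fixed data, λ_min(G^(a)(Θ_0)) computed from a finite random initialization stabilizes as m grows, as the Hoeffding and Weyl arguments above predict.

```python
# Minimal sketch (not part of the paper): stabilization of lambda_min(G^(a)(Theta_0)) in m.
import numpy as np

def sphere(k, d, rng):
    V = rng.standard_normal((k, d))
    return V / np.linalg.norm(V, axis=1, keepdims=True)

def lambda_min_Ga(B, X):
    S = np.maximum(X @ B.T, 0.0)                  # sigma(b_k^T x_i), ReLU
    G_a = S @ S.T / (X.shape[0] * B.shape[0])     # (1/(nm)) sum_k sigma(.) sigma(.)
    return np.linalg.eigvalsh(G_a).min()

rng = np.random.default_rng(0)
n, d = 50, 50
X = sphere(n, d, rng)
for m in [100, 1_000, 10_000, 100_000]:
    print(f"m = {m:>7}: lambda_min(G^(a)(Theta_0)) = {lambda_min_Ga(sphere(m, d, rng), X):.6f}")
```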
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KduYEnDAuc",
"year": null,
"venue": "CoRR 2021",
"pdf_link": "http://arxiv.org/pdf/2109.02094v1",
"forum_link": "https://openreview.net/forum?id=KduYEnDAuc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "TagPick: A System for Bridging Micro-Video Hashtags and E-commerce Categories",
"authors": [
"Li He",
"Dingxian Wang",
"Hanzhang Wang",
"Hongxu Chen",
"Guandong Xu"
],
"abstract": "Hashtag, a product of user tagging behavior, which can well describe the semantics of the user-generated content personally over social network applications, e.g., the recently popular micro-videos. Hashtags have been widely used to facilitate various micro-video retrieval scenarios, such as search engine and categorization. In order to leverage hashtags on micro-media platform for effective e-commerce marketing campaign, there is a demand from e-commerce industry to develop a mapping algorithm bridging its categories and micro-video hashtags. In this demo paper, we therefore proposed a novel solution called TagPick that incorporates clues from all user behavior metadata (hashtags, interactions, multimedia information) as well as relational data (graph-based network) into a unified system to reveal the correlation between e-commerce categories and hashtags in industrial scenarios. In particular, we provide a tag-level popularity strategy to recommend the relevant hashtags for e-Commerce platform (e.g., eBay).",
"keywords": [],
"raw_extracted_content": "TagPick: A System for Bridging Micro-Video Hashtags and\nE-commerce Categories\nLi He\nUniversity of Technology Sydney\nSydney, Australia\[email protected] Wang\neBay Research America\nSeattle, United States\[email protected] Wang\neBay Research America\nSeattle, United States\[email protected]\nHongxu Chen\nUniversity of Technology Sydney\nSydney, Australia\[email protected] Xu\nUniversity of Technology Sydney\nSydney, Australia\[email protected]\nABSTRACT\nHashtag, a product of user tagging behavior, which can well de-\nscribe the semantics of the user-generated content personally over\nsocial network applications, e.g., the recently popular micro-videos.\nHashtags have been widely used to facilitate various micro-video\nretrieval scenarios, such as search engine and categorization. In\norder to leverage hashtags on micro-media platform for effective e-\ncommerce marketing campaign, there is a demand from e-commerce\nindustry to develop a mapping algorithm bridging its categories\nand micro-video hashtags. In this demo paper, we therefore pro-\nposed a novel solution called TagPick that incorporates clues from\nall user behavior metadata (hashtags, interactions, multimedia in-\nformation) as well as relational data (graph-based network) into\na unified system to reveal the correlation between e-commerce\ncategories and hashtags in industrial scenarios. In particular, we\nprovide a tag-level popularity strategy to recommend the relevant\nhashtags for e-Commerce platform (e.g., eBay).\nCCS CONCEPTS\n•Information systems →Data mining .\nKEYWORDS\nHashtags; Micro-Video; E-commerce; Deep Learning; Graph Repre-\nsentation\nACM Reference Format:\nLi He, Dingxian Wang, Hanzhang Wang, Hongxu Chen, and Guandong\nXu. 2021. TagPick: A System for Bridging Micro-Video Hashtags and E-\ncommerce Categories. In Proceedings of the 30th ACM International Con-\nference on Information and Knowledge Management (CIKM ’21), November\n1–5, 2021, Virtual Event, QLD, Australia. ACM, New York, NY, USA, 4 pages.\nhttps://doi.org/10.1145/3459637.3481979\nPermission to make digital or hard copies of all or part of this work for personal or\nclassroom use is granted without fee provided that copies are not made or distributed\nfor profit or commercial advantage and that copies bear this notice and the full citation\non the first page. Copyrights for components of this work owned by others than ACM\nmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,\nto post on servers or to redistribute to lists, requires prior specific permission and/or a\nfee. Request permissions from [email protected].\nCIKM ’21, November 1–5, 2021, Virtual Event, QLD, Australia\n©2021 Association for Computing Machinery.\nACM ISBN 978-1-4503-8446-9/21/11. . . $15.00\nhttps://doi.org/10.1145/3459637.34819791 INTRODUCTION\nThe increasing penetration and rapid development of social media\nhave significantly shaped human’s lifestyle nowadays. Various on-\nline multimedia contents occupy most of our spare time through\nthe wide-spreading micro-video sharing platforms, e.g., Tiktok and\nKuaishou1, for socialising, sharing and advertising as micro-video.\nThese videos are compact and rich in multimedia content with\nmulti-modalities, i.e., textual, visual, as well as acoustic informa-\ntion [ 9]. 
Under such circumstances, the so-called social e-commerce,\nwhich allows us to make a purchase from a third-party merchant\nwithin the dedicated software, becomes one of the most popular\nmeans for online shopping. Many e-commerce providers (e.g., eBay\nand Taobao) start investing in such collaborated social platforms,\nand explore the drainage strategies on multimedia social platforms\nto increase the traffic for advertising efficiently [2].\nMicro-videos on these social platforms are usually associated\nwith hashtags, which are commonly used to annotate the content\naspect of the micro-videos and attract use attention [ 1]. Hashtag can\nbe expressed by any arbitrary combination of characters led by a\nhash symbol ‘#’ (e.g., #beautifullife and #heathystyle). Hashtags are\ncreated by users, they hence can be treated as the self-expression\nof users, conveying the users’ preferences on posts and their usage\nstyles of hashtags. With these hashtags, users can easily search and\nmanage their historical posts and track others’ posts. The wide and\nfast dissemination of above tagging behavior has made social media\nan ideal source for reflecting individual preference and collective\nintelligence. Therefore, e-commerce companies (e.g., eBay) have\nrealized the marketing values of hashtags for advertisements and\nutilized the search volumes for the drainage of social e-commerce.\nDespite the recent advancement in hashtag generation for vari-\nous multimedia content (e.g., rich textual, microblog, micro-video),\nexisting methods mainly focus on personalized hashtag recom-\nmendation [ 10]. However, little of them models the characteris-\ntics of user behavior in social media platforms. Moreover, many\ne-commerce companies (e.g.,eBay) urgently desire a hashtag selec-\ntion module of leveraging the activity analysis of user hash-tagging\non micro-video platforms (e.g., Tiktok), which would be incorpo-\nrated into their current advertising system when they intend to post\nthe video advertisements on those platforms, and provide highly\n1https://en.wikipedia.org/wiki/KuaishouarXiv:2109.02094v1 [cs.SI] 5 Sep 2021\nCIKM ’21, November 1–5, 2021, Virtual Event, QLD, Australia Li and Dingxian, et al.\nrelevant hashtags effectively so that the advertisements placed on\nthe micro-video platform can attract more attention of users.\nThe majority of existing studies on generating hashtags are\nmainly based on representation learning for the tagged objects on\nsemantic level, such as collaborative filtering, generative models\nand deep neural networks [ 3,6]. However, picking hashtags for\ne-commerce advertisements (e.g., micro-video ads), diverting to\nbit different purpose is non-trivial due to the following challenges:\nFirst, the hashtag used by individual often contains ambiguity. For\nexample, the hashtag “#fitness” may be annotated to totally different\ncategories of “dietary” or “sports” (weight loss). Users indeed al-\nways have their own preferences on hashtag usages. Second, in the\ncontext of e-commerce, the hashtag to be selected needs to reflect\nthe marketing trend in social networks, serving for the drainage\npurpose. Therefore, how to find accurate and trendy hashtags for\ne-commerce advertising is the key problem. Inspired by the rich\ninformation contained in user post and tagging behaviors on social\nnetworks, we propose to derive the correlation between hashtags\nand e-commerce categories by graph-based learning. 
The benefits\nof this are three-fold: 1) User-annotated hashtags may contain lex-\nical clues linking to categorical information of the products. As\nhashtags are generated to annotate, categorize, and describe a post\n(e.g., micro-video), they well describe the textual characteristics\nof targets. 2) The scenarios of user tagging have rich contextual\ninformation, e.g., textual comments, multimedia contents and differ-\nent interactions (e.g., “click”, “follow” or “like”), which can provide\ncomplementary information. 3) The post hashtags are inherently\nrelated to the individual preferences, and such collective folkson-\nomy of user preference does reveal the popularity and trend of user\ninterest.\nTo address the aforementioned problems, our system is built\nupon a graph-based learning model for bridging micro-video hash-\ntags and e-commerce categories, which can jointly learn hashtag\nsemantics and user behavior information from social networks.\nThen, the learned representations are used to measure the similar-\nity between hashtags and e-commerce categories. In order to well\nillustrate the mapping results, we provide a web-based interface\nthat enables users to input keywords of interested e-commerce cat-\negories and the corresponding output hashtags will be visualized\nin dashboard interactively for business use. The backend mapping\nalgorithm not only works out the mapping results but also serves\nas an efficient search engine with interactive database system that\nsupports a mass graph data processing from social media platforms\nand e-commerce platforms.\nThe main contributions of the demonstration are summarised as\nfollows: 1) We develop a novel hashtag mapping system, namely\nTagPick , which considers the implicitly linked data between social\nnetworks and e-commerce platform, and pick out the appropriate\nhashtags that match the category. 2) The backend of TagPick is\nbuilt upon on a graph-based learning framework that exploits both\nuser behavior and hashtag semantics for modeling the correlation\nbetween micro-video hashtags and e-commerce categories. We\nconduct extensive model training experiments on eBay and Tiktok\ndatasets and the proposed framework is successfully embedded in\neBay’s cross-platform advertising system. 3) To make it easy to\nuse and showcase how the hashtags are mapped to categories, we\nFigure 1: TagPick System Overview\ndesign an interactive web-based dashboard to visualize the results\nbased on the open source Elastic Search (ES)2.\n2 SYSTEM OVERVIEW\nThe TagPick system consists of two major components as shown\nin Figure 1: a web-based user interface and a backend which inte-\ngrates a search engine and our hashtag bridging model. Specially,\nthe web frontend consists of two components: 1) A user input in-\nterface to capture the entered keywords or trigger conditions. 2) A\nmulti-modal dashboard to illustrate basic data statistics and map-\nping results. The backend mainly includes multiple modules: 1) A\ndatabase to store the e-Commerce categories information and the\npre-trained hashtags and its metadata. 2) The TagPick algorithm\nmodule based on graph-based deep learning. 3) An exploratory\nsearch engine that manages hashtag corpus, trending and related\ndata distribution. The platform frontend, is comprised of a dash-\nboards and a search interface, which are implemented using HTML,\nJavascript, and React, whereas the platform backend, including the\nalgorithms, are written in Tensorflow with Python.\nFrontend. 
This part covers the implementation and development of the TagPick web platform (corresponding to the left part of Figure 1). In Figure 1, we can see a sample query and the data visualization module running on the TagPick platform. The three primary components in the figure are: 1) user control modules, 2) dashboards, and 3) the search application. User Control Modules: with respect to the control module, the top part represents a user query (the drop-down menu provides several reference options); the request from the control panel applies only to the current dashboard. Dashboards: as mentioned above, the dashboard takes the input from the user, and it can overlay and clear previous requests. In our system, we set up three sub-dashboards to illustrate the top-N hashtags, query results and time-aware trending. Search Application: to facilitate global search, this application provides an interface and several functionalities for the search engine, and sorts the results by default based on the ranking algorithm. In addition, the ranking tables contain additional information compared to the dashboard visualization.
Backend. In this section, we describe the database, search engine and algorithmic design behind the TagPick algorithm (corresponding to the right part of Figure 1). After getting the eBay categories and Tiktok hashtags datasets as input, the backend database stores these two data corpora separately and feeds the data to the TagPick algorithm. The algorithm module is the core of our platform, consisting of the following stages: 1) learning the local graph structure around three types of entities - <hashtags, users, contents> - using a variation of DeepWalk with convolutional operations [5]; 2) the Tag2Vec model, which exploits several hierarchical relations such as hashtag-user, hashtag-content, hashtag-word, hashtag-category, and word-word to semantically understand the posted hashtags; 3) calculating the similarity score between hashtags and keywords (e.g., category or product names), and ranking the results according to several statistical metrics (e.g., search volumes, post counts and trending).
3 TECHNICAL DETAILS
The proposed TagPick algorithm encompasses three key algorithmic components: 1) the construction of a user behavior analysis network based on a graph model; 2) the semantic analysis module; 3) the similarity score calculation over a multi-layered network.
3.1 Graph Representation Learning
Given multiple relations and entities in social networks, the goal of this module is to learn the user behavior that captures individual preferences and the relevant categories. Our graph-based model builds on Graph Convolutional Networks (GCNs), a widely used class of graph models. Each GCN layer generates intermediate embeddings by aggregating the information of a node's neighbors. After stacking several GCN layers, we obtain the final embeddings, which integrate the entire receptive field of the targeted node [4]. Specifically, on the constructed graph, we adopt the message-passing schema [7] to learn a user u_i's preference on hashtags, u^h_i, and on contents, u^c_i, from the user's hashtag neighbor set H_i and content neighbor set V_i, respectively.
After that, u^h_i and u^c_i are aggregated to represent the user preference u_i.
User preference on hashtags In our model, a user u_i's preference on hashtags, u^h_i, is modeled by aggregating the incoming messages from all the neighbor hashtags H_i. Following the idea of message passing, we transfer the message from a hashtag h_j ∈ H_i to the user u_i. As above, u^h_i can be defined as
u^h_i = φ( (1/|H_i|) Σ_{h_j ∈ H_i} W^u_h h_j ), (1)
where W^u_h is the weight matrix that maps the hashtag vector into the user embedding space, φ(·) is the activation function, and |H_i| is the number of neighbor hashtags.
User preference on content The user preference on the post content, u^c_i, can be learned in the same way by aggregating the messages from all the neighbor contents C_i. The passed message c_k from one of the user's posts represents all the contents in that post. Note that a user's post content contains rich information, including text, a sequence of video clips and images. For example, the hashtags of a micro-video are usually provided based on its content, so the hashtags can better characterize the user preference on the post content. Similar to Eq. (1), the user preference on contents is the aggregation of the messages from all the neighbor contents.
3.2 Semantic Encoder
The e-commerce category and the social hashtag contain lexical clues at different levels, such as the word level and the sentence level, which provide different degrees of explainability of why these documents are inherently related.
Word Encoder We learn the sentence representation via a bidirectional Recurrent Neural Network (RNN) with Gated Recurrent Units (GRU). The bidirectional GRU contains a forward GRU, which reads sentence s_i from word w^i_1 to w^i_{M_i}, and a backward GRU, which reads sentence s_i from w^i_{M_i} to w^i_1. The sentence vector d_i is obtained from this bidirectional processing.
Sentence Encoder Similar to the word encoder, we use an RNN with GRU to encode each sentence of the input text. Through the sentence encoder, we learn the sentence representations. The annotation of sentence s_i is obtained by concatenating the forward and backward states, which captures the context from neighboring sentences.
3.3 Ranking on Multi-layered Network
As discussed above, the user preference is obtained by combining the user preference on hashtags and on post contents. Many combination methods can be applied here, such as concatenation and deep fusion models [8]. In this work, we adopt a neural-network-based fusion.
Fusion Module In this module, u^h_i and u^c_i are first concatenated and then fed into a fully connected layer to obtain the final representation of the user preference. Formally, the user preference is obtained by
u_i = φ( W_nn Concat(u^c_i, u^h_i) + b_nn ), (2)
where Concat(·,·) is the concatenation operator, and W_nn and b_nn are the learnable weight matrix and bias vector of the fully connected layer.
Ranking Module Given a new e-commerce category keyword k_j, the hashtags in H are arranged in descending order of their similarity scores with respect to the category, based on u_i's preference. Specifically, the similarity score is computed from the dot products of u^h_i, u^c_i and the category keyword k_j.
4 DEMONSTRATION
In this section, we show two scenarios of how TagPick can be used for mapping an eBay category (i.e., the category to be advertised) to Tiktok micro-video hashtags.
Local Control The frontend system consists of control panels and dynamic dashboards.
The control panel provides users with a multi-level relational menu to select eBay categories from the database, as shown in Figure 2. For example, if a user selects the eBay category keyword “beauty”, the range slider menu provides filtering over different post counts of hashtags. We also provide a multi-level category retrieval interface, which can be extended dynamically according to the contents of the database. Since our platform is to be linked to the eBay advertising system, we provide a plug-in dashboard panel and support the export of retrieved result data in different file formats (e.g., CSV3). Users can select different category keywords and related hashtags by clicking the menu, and the visual dashboard in the panel switches accordingly.
Figure 2: A Demonstration of TagPick: 1) User control panel and dynamic dashboards (left); 2) Global static dashboard and global search user interface (right).
Global Search For a user who wants to query the ranking list of hashtags associated with eBay categories, a user search interface (USI) is developed. The user first enters the keywords in the input box on the top of the USI home page, shown in the bottom right of Figure 2. The system returns several result panels which contain hashtag contents, similarity scores and indexes in the database. These result panels are sorted by score, illustrating category-level results through a filter option. The higher the score, the more likely the hashtag matches the keyword in that category. The user can click each query result in the panel to check the details of the metadata, such as post counts, timestamps and other multimedia social content.
5 CONCLUSION
In this demo paper, we present TagPick, a fully-functional and easy-to-use platform for bridging e-commerce advertising and social networks in industrial scenarios. TagPick leverages a distributed search engine as its data infrastructure, and adopts GCN-based methods as its core algorithm to train our user behavior model and to calculate similarity scores over the multi-layer encoders. TagPick provides a user-friendly dashboard panel and USI for querying the data and deploying the results into an advertising system.
3https://en.wikipedia.org/wiki/Comma-separated_values
6 ACKNOWLEDGMENTS
This work is supported in part by ARC under grants DP200101374 and LP170100891.
REFERENCES
[1] Da Cao, Lianhai Miao, Huigui Rong, Zheng Qin, and Liqiang Nie. 2020. Hashtag our stories: Hashtag recommendation for micro-videos via harnessing multiple modalities. Knowledge-Based Systems 203 (2020), 106114.
[2] Chen Gao, Tzu-heng Lin, Nian Li, Depeng Jin, and Yong Li. 2020. Cross-platform Item Recommendation for Online Social E-Commerce. (2020).
[3] Yeyun Gong, Qi Zhang, and Xuanjing Huang. 2015. Hashtag Recommendation Using Dirichlet Process Mixture Models Incorporating Types of Hashtags. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal.
[4] Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. ICLR (2017). arXiv:1609.02907
[5] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: Online Learning of Social Representations. 
In SIGKDD (2014).
[6] M. K. Su, T. A. Hoang, E. P. Lim, and F. Zhu. 2012. On Recommending Hashtags in Twitter Networks. In Proceedings of the 4th International Conference on Social Informatics.
[7] Qingqin Wang, Yun Xiong, Yao Zhang, Jiawei Zhang, and Yangyong Zhu. 2021. AutoCite: Multi-Modal Representation Fusion for Contextual Citation Generation. In WSDM. New York, NY, USA, 788–796. https://doi.org/10.1145/3437963.3441739
[8] Xiao Wang, Deyu Bo, Chuan Shi, Shaohua Fan, Yanfang Ye, and Philip S. Yu. 2020. A Survey on Heterogeneous Graph Embedding: Methods, Techniques, Applications and Sources. (2020).
[9] Yinwei Wei, Xiang Wang, Liqiang Nie, Xiangnan He, Richang Hong, and Tat-Seng Chua. 2019. MMGCN: Multi-modal graph convolution network for personalized recommendation of micro-video. In ACM MM.
[10] Yinwei Wei, Zhou Zhao, Zhiyong Cheng, Lei Zhu, Xuzheng Yu, and Liqiang Nie. 2019. Personalized hashtag recommendation for micro-videos. In ACM MM (2019).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "5Ex0AtswQd",
"year": null,
"venue": "CIKM 2021",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=5Ex0AtswQd",
"arxiv_id": null,
"doi": null
}
|
{
"title": "TagPick: A System for Bridging Micro-Video Hashtags and E-commerce Categories",
"authors": [
"Li He",
"Dingxian Wang",
"Hanzhang Wang",
"Hongxu Chen",
"Guandong Xu"
],
"abstract": "Hashtag, a product of user tagging behavior, which can well describe the semantics of the user-generated content personally over social network applications, e.g., the recently popular micro-videos. Hashtags have been widely used to facilitate various micro-video retrieval scenarios, such as search engine and categorization. In order to leverage hashtags on micro-media platform for effective e-commerce marketing campaign, there is a demand from e-commerce industry to develop a mapping algorithm bridging its categories and micro-video hashtags. In this demo paper, we therefore proposed a novel solution called TagPick that incorporates clues from all user behavior metadata (hashtags, interactions, multimedia information) as well as relational data (graph-based network) into a unified system to reveal the correlation between e-commerce categories and hashtags in industrial scenarios. In particular, we provide a tag-level popularity strategy to recommend the relevant hashtags for e-Commerce platform (e.g., eBay).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "z0awZopdEMr",
"year": null,
"venue": "EACL 2021",
"pdf_link": "https://aclanthology.org/2021.eacl-main.244.pdf",
"forum_link": "https://openreview.net/forum?id=z0awZopdEMr",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Progressively Pretrained Dense Corpus Index for Open-Domain Question Answering",
"authors": [
"Wenhan Xiong",
"Hong Wang",
"William Yang Wang"
],
"abstract": "Commonly used information retrieval methods such as TF-IDF in open-domain question answering (QA) systems are insufficient to capture deep semantic matching that goes beyond lexical overlaps. Some recent studies consider the retrieval process as maximum inner product search (MIPS) using dense question and paragraph representations, achieving promising results on several information-seeking QA datasets. However, the pretraining of the dense vector representations is highly resource-demanding, e.g., requires a very large batch size and lots of training steps. In this work, we propose a sample-efficient method to pretrain the paragraph encoder. First, instead of using heuristically created pseudo question-paragraph pairs for pretraining, we use an existing pretrained sequence-to-sequence model to build a strong question generator that creates high-quality pretraining data. Second, we propose a simple progressive pretraining algorithm to ensure the existence of effective negative samples in each batch. Across three open-domain QA datasets, our method consistently outperforms a strong dense retrieval baseline that uses 6 times more computation for training. On two of the datasets, our method achieves more than 4-point absolute improvement in terms of answer exact match.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics , pages 2803–2815\nApril 19 - 23, 2021. ©2021 Association for Computational Linguistics2803Progressively Pretrained Dense Corpus Index for\nOpen-Domain Question Answering\nWenhan Xiong\u0003, Hong Wang\u0003, William Yang Wang\nUniversity of California, Santa Barbara\nfxwhan, hongwang600, william [email protected]\nAbstract\nCommonly used information retrieval meth-\nods such as TF-IDF in open-domain question\nanswering (QA) systems are insufficient to\ncapture deep semantic matching that goes be-\nyond lexical overlaps. Some recent studies\nconsider the retrieval process as maximum in-\nner product search (MIPS) using dense ques-\ntion and paragraph representations, achiev-\ning promising results on several information-\nseeking QA datasets. However, the pretraining\nof the dense vector representations is highly\nresource-demanding, e.g., requires a very large\nbatch size and lots of training steps. In this\nwork, we propose a sample-efficient method to\npretrain the paragraph encoder. First, instead\nof using heuristically created pseudo question-\nparagraph pairs for pretraining, we use an ex-\nisting pretrained sequence-to-sequence model\nto build a strong question generator that cre-\nates high-quality pretraining data. Second, we\npropose a simple progressive pretraining algo-\nrithm to ensure the existence of effective nega-\ntive samples in each batch. Across three open-\ndomain QA datasets, our method consistently\noutperforms a strong dense retrieval baseline\nthat uses 6 times more computation for train-\ning. On two of the datasets, our method\nachieves more than 4-point absolute improve-\nment in terms of answer exact match.\n1 Introduction\nWith the promise of making the vast amount of in-\nformation buried in text easily accessible via user-\nfriendly natural language queries, the area of open-\ndomain QA has attracted lots of attention in recent\nyears. Existing open-domain QA systems are typ-\nically made of two essential components (Chen\net al., 2017). A retrieval module first retrieves a\ncompact set of paragraphs from the whole corpus\n?Equal Contribution.\nOur code is available at https://github.com/\nxwhan/ProQA.git .(such as Wikipedia) that includes millions of docu-\nments. Then a reading module is deployed to ex-\ntract an answer span from the retrieved paragraphs.\nOver the past few years, much of the progress in\nopen-domain QA has been focusing on improving\nthe reading module of the system, which only needs\nto process a small number of retrieved paragraphs.\nSpecifically, improvements include stronger read-\ning comprehension models (Wang et al., 2018b;\nYang et al., 2019; Xiong et al., 2020; Min et al.,\n2019a) and paragraph reranking models (Wang\net al., 2018a; Lin et al., 2018) that assign more ac-\ncurate relevance scores to the retrieved paragraphs.\nHowever, the performance is still bounded by the\nretrieval modules, which simply rely on traditional\nIR methods such as TF-IDF or BM25 (Robertson\nand Zaragoza, 2009). 
These methods retrieve text\nsolely based on n-gram lexical overlap and can fail\non cases when deep semantic matching is required\nand when there are no common lexicons between\nthe question and the target paragraph.\nWhile neural models have proven effective at\nlearning deep semantic matching between text\npairs (Bowman et al., 2015; Parikh et al., 2016;\nChen et al., 2018; Devlin et al., 2019), they usu-\nally require computing question-dependent para-\ngraph encodings ( i.e., the same paragraph will have\ndifferent representations when considering differ-\nent questions), which is formidable considering\nspace constraints and retrieval efficiency in practice.\nMore recent studies (Lee et al., 2019; Chang et al.,\n2020; Guu et al., 2020) show that such a dilemma\ncan be resolved with large-scale matching-oriented\npretraining. These approaches use separate en-\ncoders for questions and paragraphs and simply\nmodel the matching between the question and para-\ngraph using inner products of the output vectors.\nThus, these systems only need to encode all para-\ngraphs in a question-agnostic fashion, and the re-\nsulted dense corpus index could be fixed and reused\n2804for all possible questions. While achieving signifi-\ncant improvements over the BM25 baseline across\na set of information-seeking QA datasets, existing\npretraining strategies are highly sample-inefficient\nand typically require a large batch size (up to thou-\nsands), such that diverse and effective negative\nquestion-paragraph pairs could be included in each\nbatch. When using a small batch size in our exper-\niments, the model ceases to improve after certain\nupdates. Given that a 12G GPU can only store\naround 10 samples with the BERT-base architec-\nture at training time, the wider usage of these meth-\nods to corpora with different domains ( e.g., non-\nencyclopedic web documents or scientific publica-\ntions) is hindered given modest GPU hardware.\nIn this work, we propose a simple and sample-\nefficient method for pretraining dense corpus repre-\nsentations. We achieve stronger open-domain QA\nperformance compared to an existing method (Lee\net al., 2019) that requires 6 times more computa-\ntion at training time. Besides, our method uses a\nmuch smaller batch size and can be implemented\nwith only a small number of GPUs, i.e., we use\nat most 4 TITAN RTX GPUs for all our experi-\nments. In a nutshell, the proposed method first uses\na pretrained sequence-to-sequence model to gener-\nate high-quality pretraining data instead of relying\non heuristics to create pseudo question-paragraph\npairs; for the training algorithm, we use cluster-\ning techniques to get effective negative samples\nfor each pair and progressively update the clusters.\nOur method’s efficacy is further validated through\nablation studies, where we replicate existing meth-\nods that use the same amount of resources. For the\ndownstream QA experiments, we carefully inves-\ntigate different finetuning objectives and show the\ndifferent configurations of the retrieval and span\nprediction losses have nontrivial effects on the fi-\nnal performance. 
We hope this analysis could save\nthe efforts on trying out various finetuning strate-\ngies of future research that focus on improving the\nretrieval component of open-domain QA systems.\nThe main contributions of this work include:\n\u000fWe show the possibility of pretraining an effec-\ntive dense corpus index for open-domain QA\nwith modest computation resources.\n\u000fOur data generation strategy demonstrates that\npretrained language models are not only useful\nas plug-and-play contextual feature extractors:\nthey could also be used as high-quality data gen-\nerators for other pretraining tasks.\u000fWe propose a clustering-based progressive\ntraining paradigm that improves the sample-\nefficiency of dense retrieval pretraining and can\nbe easily incorporated into existing methods.\n2 Framework\nWe begin by introducing the network architectures\nused in our retrieval and reading comprehension\nmodels. Next, we present how to generate high-\nquality question-paragraph pairs for pretraining and\nhow we progressively train the retrieval model with\neffective negative instances. Finally, we show how\nto finetune the whole system for QA.\n2.1 Model Architectures\nNotations We introduce the following notations\nwhich will be used through our paper. The goal\nof open-domain QA is to find the answer deriva-\ntion(p;s)from a large text corpus Cgiven a\nquestionq, wherepis an evidence paragraph\nandsis a text span within p. The start and\nend token of sare denoted as START(s)and\nEND(s)respectively. We refer the retrieval mod-\nule as P\u0012(pjq), with learnable parameters \u0012. Sim-\nilarly, we refer the reading comprehension mod-\nule as P\u001e(sjp;q), which can be decomposed as\nP\u001e(START(s)jp;q)\u0002P\u001e(END(s)jp;q). We use\nDkto represent the top-k paragraphs from the re-\ntrieval module; a subset of D\u00032Dkrepresents\nthe paragraphs in Dkthat cover the correct answer;\nfor each paragraph p2D\u0003, we defineS\u0003\npas all the\nspans inpthat match the answer string.\nThe Retrieval Module We uses two isomorphic\nencoders to encode the questions and paragraphs,\nand the inner product of the output vectors is used\nas the matching score. The encoders are based on\nthe BERT-base architecture. We add linear layers\nWq2R768\u0002128andWp2R768\u0002128above the\nfinal representations of the [CLS] token to derive\nthe question and paragraph representations:\nhq=WqBERTQ(q)([CLS])\nhp=WpBERTP(p)([CLS]);\nThe matching score is modeled as h>\nqhp. Thus, the\nprobability of selecting pgivenqis calculated as:\nP\u0012(pjq) =eh>\nqhp\nP\np02Ceh>qhp0:\nIn practice, we only consider the top-k retrieved\nparagraphsCfor normalization.\n2805\nAll Wiki ParagraphsParagraph Encoder…Paragraph RepresentationsK-meansParagraphs Clusters\nIn-batch Negative SamplingQuestion Encoder Paragraph EncoderRetrieval ModuleIndependent Encoding……+---Question RepresentationsParagraph RepresentationsParagraph BatchPaired with QuestionsSample from one cluster\nNER1. Wiki Paragraphs2. Entity Spans(Question, Paragraph) pairsQuestion GenerationQuestion GenerationPretraining MethodFinetuned BARTEncode as Dense VectorsUpdateRe-clusteringFigure 1: An overview of the progressive pretraining approach.\nThe Reading Module The architecture of our\nreading comprehension model is identical to the\none in the original BERT paper (Devlin et al., 2019).\nWe use two independent linear layers to predict the\nstart and end position of the answer span. 
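To make the retrieval module of Section 2.1 concrete, the following is a minimal sketch of the bi-encoder scoring, assuming the Hugging Face transformers implementation of BERT-base. The 768-to-128 projections and the inner-product softmax follow the formulas above; the class name, the pooling of the [CLS] vector via last_hidden_state, and the toy inputs are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast


class BiEncoder(nn.Module):
    """Question/paragraph encoders with 128-d projections of the [CLS] vector."""

    def __init__(self, model_name="bert-base-uncased", proj_dim=128):
        super().__init__()
        self.q_enc = BertModel.from_pretrained(model_name)   # BERT_Q
        self.p_enc = BertModel.from_pretrained(model_name)   # BERT_P
        self.W_q = nn.Linear(768, proj_dim, bias=False)
        self.W_p = nn.Linear(768, proj_dim, bias=False)

    def encode_questions(self, enc):
        return self.W_q(self.q_enc(**enc).last_hidden_state[:, 0])   # h_q

    def encode_paragraphs(self, enc):
        return self.W_p(self.p_enc(**enc).last_hidden_state[:, 0])   # h_p


def retrieval_log_probs(h_q, h_p):
    """log P(p|q): softmax over the candidate paragraphs of the inner products h_q^T h_p."""
    return torch.log_softmax(h_q @ h_p.t(), dim=-1)


tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BiEncoder()
q = tok(["who sings does he love me with reba"], return_tensors="pt", padding=True)
p = tok(['"Does He Love You" is a song ...', "The Great Lakes are ..."],
        return_tensors="pt", padding=True)
log_p = retrieval_log_probs(model.encode_questions(q), model.encode_paragraphs(p))
```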
At train-\ning time, when calculating the span probabilities,\nwe apply the shared-normalization technique pro-\nposed by Clark and Gardner (2018), which normal-\nizes the probability across all the top-k retrieved\nparagraphs. This encourages the model to produce\nglobally comparable answer scores. We denote this\nprobability as Psn\n\u001e(sjp;q)in contrast to the original\nformulation P\u001e(sjp;q)that normalizes the proba-\nbility within each paragraph.\n2.2 The Pretrainining Method\nWe now describe how to pretrain the retrieval mod-\nule using a better data generation strategy and a\nprogressive training paradigm. Figure 1 depicts the\nwhole pretraining process.\nPretraining Data Generation Previous dense\nretrieval approaches usually rely on simple heuris-\ntics to generate synthetic matching pairs for pre-\ntraining, which do not necessarily reflect the un-\nderlying matching pattern between questions and\nparagraphs. To minimize the gap between pre-\ntraining and the end task, we learn to generate\nhigh-quality questions from the paragraphs using\na state-of-the-art pretrained seq2seq model, i.e.,\nBART (Lewis et al., 2019). More specifically,\nwe finetune BART on the original NaturalQues-\ntions dataset (Kwiatkowski et al., 2019) such that it\nlearns to generate questions given the groundtruth\nanswer string and the groundtruth paragraph (la-beled as long answer in NaturalQuestions). We\nconcatenate the paragraph and the answer string\nwith a separating token as the input to the BART\nmodel. We find this simple input scheme is ef-\nfective enough to generate high-quality questions,\nachieving a 55.6 ROGUE-L score on the dev set.\nSamples of the generated questions could be found\nin the appendix. Afterward, we use spaCy1to rec-\nognize potential answer spans (named entities or\ndates) in all paragraphs in the corpus and use the\nfinetuned BART model to generate the questions\nconditioned on the paragraph and each of the po-\ntential answers.\nIt is worth noting that the groundtruth answer\nparagraph supervision at this step could be even-\ntually dropped and we could just use weakly su-\npervised paragraphs to train the question generator;\nthus, our system becomes fully weak-supervised.\nAs the pretraining process takes a long training\ntime and lots of resources, we are unable to re-\npeat the whole pretraining process with the weakly-\nsupervised question generator. However, additional\nquestion generation experiments suggest that while\nusing weakly-supervised paragraphs, the question\ngenerator still generates high-quality questions,\nachieving a ROUGE-L score of 49.6.2\nIn-batch Negative Sampling To save compu-\ntation and improve the sample efficiency of pre-\ntraining, we choose to use in-batch negative sam-\npling (Logeswaran and Lee, 2018) instead of gath-\nering negative paragraphs for each question to pre-\n1https://spacy.io\n2For reference, a state-of-the-art QG model (Ma et al.,\n2020) trained with strong supervision achieves 49.9 ROUGE-\nL on a similar QA dataset also collected from real-user queries.\n2806train the retrieval module. Specifically, for each\npair(q;p)within a batch B, the paragraphs paired\nwith other questions are considered negative para-\ngraphs forq. Thus, the pretraining objective for\neach generated question is to minimize the negative\nlog-likelihood of selecting the correct pamong all\nparagraphs in the batch:\nLpre=\u0000log P\u0012(pjq): (1)\nA graphic illustration of this strategy is shown in\nFigure 1. 
As the batch size is usually very small\ncompared to the number of all the paragraphs in\nthe corpus, the pretraining task is much easier com-\npared to the final retrieval task at inference time. In\nthe whole corpus, there are usually lots of similar\nparagraphs and these paragraphs could act as strong\ndistractors for each other in terms of both para-\ngraph ranking and answer extraction. The desired\nretrieval model should be able to learn fine-grained\nmatching instead of just learning to distinguish ob-\nviously different paragraphs. However, since exist-\ning dense retrieval methods typically use uniform\nbatch sampling, there could be many easy negative\nsamples in each batch, and they can only provide\nweak learning signals. Thus, a large batch size is\nusually adopted to include sufficient effective neg-\native samples. Unfortunately, this is generally not\napplicable without hundreds of GPUs.\nThe Progressive Training Paradigm To pro-\nvide effective negative samples, we propose a pro-\ngressive training algorithm, as shown in the lower\npart of Figure 1. The key idea is to leverage the\nretrieval model itself to find groups of similar para-\ngraphs. At a certain training step, we use the para-\ngraph encoder at that moment to encode the whole\ncorpus and cluster all (q;p)pairs into many groups\nbased on the similarity of their paragraph encod-\nings. These groups are supposed to include similar\nparagraphs and potentially related questions. Then,\nwe continue our pretraining by sampling each batch\nfrom one of the clusters. By doing this, we can pro-\nvide challenging and effective negative paragraphs\nfor each question, even with small batch size. Ev-\nery time we recluster the whole corpus, the model\nwill be encouraged to learn finer-grained matching\nbetween questions and paragraphs. Algorithm 1\nprovides a formal description of the entire process.\nNote that our training algorithm shares spirits with\nCurriculum Learning (Bengio et al., 2009) and Self-\nPaced Learning (Kumar et al., 2010; Jiang et al.,\n2015), in which the models are trained with harderAlgorithm 1 The Clustering-based Progressive Pretraining\n1:Input:\n2: a) all (q; p)pairs from the question generation model;\n3: b) the retrieval module BERT QandBERT P;\n4:while not finished do\n5: Encode the whole corpus with BERT P;\n6: Clustering all paragraphs into Cclusters using the\ndense encodings;\n7: forupdates = 1: Kdo\n8: Random sample a paragraph cluster;\n9: Sample Bparagraphs from the cluster;\n10: Fetch the corresponding questions;\n11: Calculate gradients wrt Lpre;\n12: ifupdates % U== 0 then\n13: Update BERT QandBERT P;\n14: end if\n15: end for\n16:end while\ninstances as the training progresses. Instead of\nutilizing a predefined or dynamically generated or-\nder of all the instances according to their easiness,\nour algorithm makes use of a dynamic grouping\nof all the training instances and is specifically de-\nsigned for the efficient in-batch negative sampling\nparadigm.\n2.3 QA Finetuning\nOnce pretrained, we use the paragraph encoder to\nencode the corpus into a large set of dense vectors.\nFollowing previous practice, we only finetune the\nquestion encoder and the reading module so that\nwe can reuse the same dense index for different\ndatasets. For every training question, we obtain\nthe question representation hqfrom the question\nencoder and retrieve the top-k paragraphs Dkon\nthe fly using an existing maximum inner product\nsearch package. 
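For concreteness, the in-batch objective of Eq. (1) and the cluster-then-sample loop of Algorithm 1 above could be sketched roughly as follows. The use of scikit-learn's KMeans, the placeholder encode_* callables, and the constants are illustrative assumptions, not the authors' implementation (gradient accumulation is also omitted).

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def in_batch_nll(h_q, h_p):
    """Eq. (1): the i-th paragraph is the positive for the i-th question;
    the other paragraphs in the batch serve as negatives."""
    scores = h_q @ h_p.t()                                     # (B, B) inner products
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)                    # -log P(p | q), averaged over the batch


def progressive_pretrain(pairs, encode_q, encode_p, encode_corpus, optimizer,
                         n_clusters=1000, updates_per_stage=20000, batch_size=80, stages=5):
    """Sketch of Algorithm 1: periodically re-cluster the corpus with the current paragraph
    encoder and sample each batch from a single cluster so that in-batch negatives are hard.
    `pairs` is a list of (question, paragraph) from the question generator."""
    for _ in range(stages):
        para_vecs = encode_corpus([p for _, p in pairs])        # np.ndarray (N, d), computed without grad
        cluster_ids = KMeans(n_clusters=n_clusters).fit_predict(para_vecs)
        buckets = [np.where(cluster_ids == c)[0] for c in range(n_clusters)]
        buckets = [b for b in buckets if len(b) >= batch_size]
        for _ in range(updates_per_stage):
            bucket = buckets[np.random.randint(len(buckets))]
            idx = np.random.choice(bucket, size=batch_size, replace=False)
            loss = in_batch_nll(encode_q([pairs[i][0] for i in idx]),
                                encode_p([pairs[i][1] for i in idx]))
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```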
To train the reading module, we ap-\nply the shared-normalization trick and optimize the\nmarginal probability of all matched answer spans\nin the top-k paragraphs:\nLreader =\u0000logX\np2D\u0003X\ns2S\u0003pPsn\n\u001e(sjp;q): (2)\nIn additional to the reader loss, we also incorporate\nthe “early” loss used by Lee et al. (2019), which\nupdates the question encoder using the top-5000\ndense paragraph vectors. If we define D\u0003\n5000 as\nthose paragraphs in the top-5000 that contain the\ncorrect answer, then the “early” loss is defined as:\nLearly =\u0000logX\np2D\u0003\n5000P\u0012(pjq): (3)\nThus our total finetuning loss is Learly+Lreader .\nNote this is different from the joint formulation\n2807MethodDataset\nNaturalQuestions-Open WebQuestions CuratedTREC\nDrQA (Chen et al., 2017) - 20.7 25.7\nR3(Wang et al., 2018a) - 17.1 28.4\nDSQA (Lin et al., 2018) - 25.6 29.1\nHardEM (Min et al., 2019a) 28.1 - -\nPathRetriever (Asai et al., 2020) 32.6 - -\nWKLM (Xiong et al., 2020) - 34.6 -\nGraphRetriever (Min et al., 2019b) 34.5 36.4 -\nORQA (Lee et al., 2019) 33.3 36.4 30.1\nProQA (Ours) 37.4 37.1 34.6\nTable 1: Open-domain QA results in terms of exact answer match (EM). The first part of the table shows results\nfrom methods that use the traditional IR component. Note that these methods retrieve more paragraphs (typically\ndozens) than dense retrieval methods listed in the second part of the table, which only finds answers from the top-5.\nused by Lee et al. (2019) and Guu et al. (2020),\nwhich consider the paragraphs as latent variables\nwhen calculating P(sjq). We find the joint objec-\ntive does not bring additional improvements, es-\npecially after we use shared normalization. More\nvariants of the finetuning objectives will be dis-\ncussed inx3.5. At inference time, we use a linear\ncombination of the retrieval score and the answer\nspan score to rank the answer candidates from the\ntop-5 retrieved paragraphs. The linear combination\nweight is selected based on the validation perfor-\nmance on each tested dataset.\n3 Experiments\n3.1 Datasets\nWe center our studies on QA datasets that reflect\nreal-world information-seeking scenarios. We con-\nsider 1) NaturalQuestions-Open (Kwiatkowski\net al., 2019; Lee et al., 2019), which includes\naround 10K real-user queries (79,168/8,757/3,610\nfor train/dev/test) from Google Search; 2) We-\nbQuestions (Berant et al., 2013), which is orig-\ninally designed for knowledge base QA and\nincludes 5,810 questions (3,417/361/2,032 for\ntrain/dev/test) generated by Google Suggest API; 3)\nCuratedTREC (Baudis and Sediv ´y, 2015), which\nincludes 2,180 real-user queries (1,353/133/694\nfor train/dev/test) from MSNSearch and AskJeeves\nlogs. Compared to other datasets such as\nSQuAD (Rajpurkar et al., 2016) and Trivi-\naQA (Joshi et al., 2017), questions in these datasets\nare created without the presence of ground-truth\nanswers and the answer paragraphs, thus are less\nlikely to have lexical overlap with the paragraph.3.2 Essential Implementation Details\nFor pretraining, we use a batch size of 80 and accu-\nmulate the gradients every 8 batches. We use the\nAdam optimizer (Kingma and Ba, 2015) with learn-\ning rate 1e-5 and conduct 90K parameter updates.\nFollowing previous work (Lee et al., 2019), we use\nthe 12-20-2018 snapshot of English Wikipedia as\nour open-domain QA corpus. 
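Referring back to Eq. (2) in Section 2.3, the shared-normalization reader loss for a single question might be sketched as below. Treating shared normalization as one softmax over the start (and end) positions of all top-k paragraphs is our reading of the description; the function is illustrative, not the authors' code.

```python
import torch


def shared_norm_reader_loss(start_logits, end_logits, answer_spans):
    """Sketch of Eq. (2) for one question.
    start_logits, end_logits: (k, L) scores for the k retrieved paragraphs.
    answer_spans: list of (paragraph_idx, start_idx, end_idx) occurrences of the answer string."""
    # Shared normalization: a single softmax over all token positions of all k paragraphs.
    log_p_start = torch.log_softmax(start_logits.reshape(-1), dim=0).reshape(start_logits.shape)
    log_p_end = torch.log_softmax(end_logits.reshape(-1), dim=0).reshape(end_logits.shape)
    # Marginalize over every matched span, as in the sum over s in S*_p.
    matched = torch.stack([log_p_start[p, s] + log_p_end[p, e] for p, s, e in answer_spans])
    return -torch.logsumexp(matched, dim=0)
```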
When splitting the\ndocuments into chunks, we try to reuse the original\nparagraph boundaries and create a new chunk every\ntime the length of the current one exceeds 256 to-\nkens. Overall, we created 12,494,770 text chunks,\nwhich is on-par with the number (13M) reported in\nprevious work. These chunks are also referred to as\nparagraphs in our work. For progressive training,\nwe recluster all the chunks with k-means around\nevery 20k updates using the paragraph encodings.\nWhile finetuning the modules for QA, we fix the\nparagraph encoder in the retrieval module. For each\nquestion, we use the top-5 retrieved paragraphs for\ntraining and skip the question if the top-5 para-\ngraphs fail to cover the answer. The MIPS-based\nretrieval is implemented with FAISS (Johnson et al.,\n2019). On NaturalQuestions-Open, we finetune\nfor 4 epochs. To save the finetuning time on this\nlarge dataset, we only use a subset (2,000 out of\n8,757) of the original development set for model\nselection. For WebQuestions and CuratedTREC\n(both of them are much smaller), we finetune for\n10 epochs. The optimizer settings are consistent\nwith the pretraining. Hyperparameters and further\ndetails can be found in the appendix.\n3.3 QA Performance\nFollowing existing studies, we use the exact match\n(EM) as the evaluation metric, which indicates the\npercentage of the evaluation samples for which the\n2808Method EM model size batch size # updates\nORQA 33.3 330M 4096 100K\nT5 36.6 11318M - -\nREALM 40.4 330M 512 200K\nProQA 37.4 330M 80*8 90K\nTable 2: Resource comparison with SOTA models. EM\nscores are measured on NaturalQuestions-Open. batch\nsizeandupdates all refer to the dense index pretraining.\nNote that REALM uses ORQA to initialize its param-\neters and we only report the numbers after ORQA ini-\ntialization. “80*8” indicates that we use a batch size of\n80 and accumulate the gradients every 8 batches.\npredicted span matches the groundtruth answers. In\nTable 1, we first show that our progressive method\n(denoted as ProQA ) is superior to all of the open-\ndomain QA systems (the upper part of the table)\nthat use conventional IR methods, even though we\nonly use the top-5 paragraphs to predict the an-\nswer while these methods use dozens of retrieved\nparagraphs. For the dense retrieval methods, we\ncompare with ORQA (Lee et al., 2019), which is\nmost relevant to our study but simply uses pseudo\nquestion-paragraph pairs for pretraining and also\nrequires a larger batch size (4,096). We achieve\nmuch stronger performance than ORQA with much\nfewer updates and a limited number of GPUs. To\nthe best of our knowledge, this is the first work\nshowing that an effective dense corpus index can\nbe obtained without using highly expensive com-\nputational resources. The reduced requirement of\ncomputation also makes our method easier to repli-\ncate for corpora in different domains.\nIn Table 2, we compare our method other more\nrecently published QA systems in terms of both per-\nformance and computation cost. It is worth noting\nthat although we need to use a BART model to gen-\nerate training questions for Wikipedia documents,\nthe inference cost of the question generator is still\nmuch lower than the training cost of our system and\nis not significant for comparing the overall compu-\ntation cost: with the same GPU hardware, generat-\ning all the questions takes less than 1/6 of the train-\ning time. 
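The FAISS-based MIPS retrieval mentioned in Section 3.2 above (and specified further in the appendix as an IndexIVFFlat with 100 Voronoi cells, searching the closest 20) might look roughly as follows; the file name and the exact settings are placeholders.

```python
import numpy as np
import faiss  # pip install faiss-cpu


d = 128                                                        # index dimension used in the paper
para_vecs = np.load("paragraph_vecs.npy").astype("float32")    # hypothetical file of encoded chunks

# IVF index with the inner-product metric, roughly following the appendix settings.
quantizer = faiss.IndexFlatIP(d)
index = faiss.IndexIVFFlat(quantizer, d, 100, faiss.METRIC_INNER_PRODUCT)
index.train(para_vecs)
index.add(para_vecs)
index.nprobe = 20                                              # search only the 20 closest cells


def retrieve(question_vec, k=5):
    """Return (scores, chunk ids) of the top-k chunks by inner product."""
    scores, ids = index.search(question_vec.reshape(1, d).astype("float32"), k)
    return scores[0], ids[0]
```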
When using the batch size\u0002#of updates\nto approximate the training FLOPs, we see that our\nsystem is at least 6 times more efficient than ORQA.\nCompared to the recent proposed T5 (Roberts et al.,\n2020) approach, which converts the QA problem\ninto a sequence-to-sequence (decode answers af-\nter encoding questions) problem and relies on the\nlarge model capacity to answer questions withoutMethod R@5 R@10 R@20\nProQA (90k) 52.0 61.0 68.8\nORQA?(90k) 20.4 29.0 37.2\nProQA (no clustering, 90k) 42.9 52.6 60.8\nProQA (no clustering; 70k) 43.8 53.5 61.3\nProQA (no clustering; 50k) 38.8 48.2 56.7\nTable 3: Ablation studies on different pretraining strate-\ngies. The retrieval modules (Recall@k) are tested on\nWebQuestions.?Our reimplementation.\nretrieving documents, our system achieves better\nperformance and is also much faster at inference\ntime, due to the much smaller model size. The state-\nof-the-art REALM model (Guu et al., 2020) uses\na more complicated pretraining approach that re-\nquires asynchronously refreshing the corpus index\nat train time. As it relies on ORQA initialization\nand further pretraining updates, it is even more\ncomputational expensive at training time. Also,\nas our method directly improves the ORQA pre-\ntraining, our method could easily stack with the\nREALM pretraining approach.\nConcurrent to our work, Karpukhin et al. (2020)\nshow that it is possible to use the groundtruth an-\nswer paragraphs in the original NaturalQuestions\ndataset to train a stronger dense retriever. How-\never, they use a larger index dimension (768) while\nencoding paragraphs and also retrieve more para-\ngraphs (20\u0018100) for answer extraction. As a\nlarger index dimension naturally leads to better\nretrieval results (Luan et al., 2020) (despite sacrific-\ning search efficiency) and using more paragraphs\nincreases the recall of matched answer spans3, this\nconcurrent result is not directly comparable to ours,\nand we leave the combination effect of efficient pre-\ntraining and strong supervision ( i.e., using human-\nlabeled paragraphs) to future work.\n3.4 Ablation Studies\nTo validate the sample efficiency of our method,\nwe replicate the inverse-cloze pretraining approach\nfrom ORQA using the same amount of resource as\nwe used while training our model, i.e., the same\nbatch size and updates (90k). We also study the\neffect of the progressive training paradigm by pre-\ntraining the model with the same generated data\nbut without the clustering-based sampling. We test\nthe retrieval performance on the WebQuestions test\n3While using more paragraphs, we achieve 40.6 EM (com-\npared to 41.5 EM in (Karpukhin et al., 2020)) even with a\nmuch smaller index dimension. Also, we did not use the gold\nparagraphs (strong supervision) to train our reading module.\n2809set before any finetuning. We use Recall@k as\nthe evaluation metric, which measures how often\nthe answer paragraphs appear in the top-k retrieval.\nThe results are shown in Table 3. We can see that\nfor the non-clustering version of our method, the\nimprovements are diminishing as we reach certain\ntraining steps, while the progressive training algo-\nrithm brings around 8%improvements on different\nretrieval metrics. This suggests the importance of\nintroducing more challenging negative examples in\nthe batch when the batch size is limited. 
Compar-\ning the no-clustering version of our method against\nORQA, we see that using our data generation strat-\negy results in much better retrieval performance\n(more than 22% improvements on all metrics).\n3.5 Analysis on Finetuning Objectives\nNoting that different finetuning configurations have\nbeen used in existing studies , we conduct addi-\ntional experiments to investigate the efficacy of dif-\nferent finetuning objectives and provide insights for\nfuture research that intends to focus on improving\nthe model itself. Specifically, we study the effects\nof using a joint objective (Lee et al., 2019; Guu\net al., 2020) and adding an additional reranking\nobjective that is commonly used in sparse-retrieval\nQA systems (Min et al., 2019a; Xiong et al., 2020).\nThe joint objective treats the retrieved para-\ngraphs as latent variables and optimizes the\nmarginal probability of the matched answer spans\nin all paragraphs:\nLjoint=\u0000logX\np2D?P\u0012(pjq)X\ns2S\u0003pP\u001e(sjp;q):\n(4)\nThe reranking objective is usually implemented\nthrough a paragraph reranking module that uses\na question-dependent paragraph encoder, e.g., a\nBERT encoder that takes the concatenation of the\nquestion and paragraph as input. This kind of\nreranker has been shown to be beneficial to con-\nventional IR methods since it can usually provide\nmore accurate paragraph scores than the TF-IDF\nor BM25 based retriever while ranking the an-\nswer candidates. To implement this reranking\nmodule, we simply add another reranking scor-\ning layer to our BERT-based span prediction mod-\nuleP\u001e(sjp;q), which encodes the paragraphs in a\nquestion-dependent fashion. At inference time, we\nuse the paragraph scores predicted by this rerank-\ning component instead of the pretrained retrieval\nmodel to guide our final answer selection.idObjective SettingsEMjoint rerank shared-norm\n1 - - X 38.5\n2X - X 38.3\n3 - X X 38.2\n4X - - 36.2\n5 - - - 35.1\nTable 4: Analysis on different finetuning objectives on\nNaturalQuetions-Open. EM scores are measured on the\n2,000 validation samples we used for model selection.\nTable 4 shows the results of different objective\nsettings. Comparing the results of (4) and (5), we\ncan see that the joint objective can bring some im-\nprovements when shared-normalization is not ap-\nplied. However, it does not yield improvements\nwhen shared-normalization is applied, according\nto the results of (1) and (2). By comparing (1)\nand (3), we see that with the strong pretrained re-\ntrieval model, adding an extra reranking module\nthat uses question-dependent paragraph encodings\nis no longer beneficial. This is partially because our\npretrained retrieval model gets further improved\nduring finetuning, in contrast to a fixed TF-IDF\nbased retriever. Finally, from (1) and (5), we see\nthat the shared normalization brings much larger\nimprovements than the other factors. This aligns\nwith the findings from an existing work (Wang\net al., 2019) that only tested on SQuAD questions.\n4 Error Analysis\nTo investigate the fundamental differences of the\ndense and sparse retrieval methods in open-domain\nQA, we conduct an error analysis using both the\nproposed method and a baseline system that uses\nTF-IDF and BM25 for retrieval. This baseline uses\na similar retrieval pipeline as Min et al. (2019a) and\nis trained with the finetuning objective defined in\nEq. 2. This sparse retrieval baseline achieves 29.7\nEM on the official dev set of NaturalQuestions-\nOpen while our method achieves 36.7 EM4. 
Fig-\nure 2 shows the Venn diagram of the error sets from\nboth systems. Our key findings are summarized in\nthe following paragraphs.\nThe difference retrieval paradigms could com-\nplement each other. First, according to the error\nset differences (shown by the white regions in Fig-\nure 2), a considerable portion of the error cases\n(9:1%of the devt set) of our dense-retrieval system\n4Note that this number is different from the number in\nTable 4 as we did not use the whole dev set for model selection.\n2810Dense Errors\n(63:3%)Sparse Errors\n(70:3%)\nNQ Development SetShared Errors\n(54:2%)\nFigure 2: The error sets (from the official dev set of\nNaturalQuesitons-Open) of our dense retrieval method\nand a baseline system using sparse retrieval methods\nlike TF-IDF. We use circles to represent the error sets\nof both systems. The percentages show the relative size\nof each set in terms of the whole dev set.\ndoes not occur in the sparse-retrieval system and\nvice versa. This suggests the necessity of incor-\nporating different retrieval paradigm when build-\ning real-world applications. In fact, the hybrid ap-\nproach has already been adopted by a phrase-level\nretrieval method (Seo et al., 2019) and concurrent\nstudies (Luan et al., 2020; Karpukhin et al., 2020).\nBoth systems are underestimated. As shown in\nFigure 2, 54:2%of the questions cannot be cor-\nrectly answered by either system. However, our\nmanual inspection on 50 of the shared error cases\nsuggests that around 30% of these errors are due to\nannotation issues ( 14%) or the ambiguous nature of\nreal-user queries ( 16%). One obvious annotation\nissue is the incompleteness of the answer labels.\nExample questions include “When did Brazil lose\nto in 2014 World Cup?” , to which both “Germany”\nand“Netherlands” are correct answers. This issue\noccurs because the annotators of NaturalQuestions\nonly have a local view of the knowledge source as\nthey are only asked to label the answer span using\none document. In terms of the ambiguous ques-\ntions, many of them are due to constraints unspec-\nified by the question words, such as “What is the\npopulation of New York City?” (the time constraint\nis implicit) or “When did Justice League come\nout in Canada?” (needs entity disambiguation).\nThis kind of questions result from the information-\nseeking nature of the open-domain QA task where\nthe users usually use the minimal number of words\nfor searching and they are not aware of the potential\nambiguous factors. To solve this kind of questions,\nan interactive QA system might be necessary. In\nthe appendix, we show more ambiguous questions\nin which other kinds of constraints are missing.5 Related Work\nThe task of answering questions without specify-\ning specific domains has been intensively studied\nsince the earlier TREC QA competitions (V oorhees,\n1999). Studies in the early stage (Kwok et al., 2001;\nBrill et al., 2002; Ferrucci et al., 2010; Baudi ˇs,\n2015) mostly rely on highly sophisticated pipelines\nand heterogeneous resources. Built on the recent\nadvances in machine reading comprehension, Chen\net al. (2017) show that open-domain QA can be sim-\nply formulated as a reading comprehension prob-\nlem with the help of a standard IR component that\nprovides candidate paragraphs for answer extrac-\ntion. 
This two-stage formulation is simple yet ef-\nfective to achieve competitive performance while\nusing Wikipedia as the only knowledge resource.\nFollowing this formulation, a couple of recent\nstudies have proposed to improve the system us-\ning stronger reading comprehension models (Yang\net al., 2019; Wang et al., 2018b), more effective\nlearning objectives (Clark and Gardner, 2018; Min\net al., 2019a; Wang et al., 2019) or paragraph\nreranking models (Wang et al., 2018a; Lin et al.,\n2018; Lee et al., 2018). However, the retrieval\ncomponents in these systems are still based on tra-\nditional inverted index methods, which are efficient\nbut might fail when the target paragraph does not\nhave enough lexicon overlap with the question.\nIn contrast to the sparse term-based features\nused in TF-IDF or BM25, dense paragraph vec-\ntors learned by deep neural networks (Zhang et al.,\n2017; Conneau et al., 2017) can capture much\nricher semantics beyond the n-gram term features.\nTo build effective paragraph encoders tailed for the\nparagraph retrieval in open-domain QA, more re-\ncent studies (Lee et al., 2019; Chang et al., 2020;\nGuu et al., 2020) propose to pretrain Transformer\nencoders (Vaswani et al., 2017) with objectives that\nsimulate the semantic matching between questions\nand paragraphs. For instance, Lee et al. (2019)\nuses the inverse cloze pretraining task to train a\nbi-encoder model to match a sentence and the para-\ngraph in which the sentence belongs to. These\napproaches demonstrate promising performance\nbut require a lot of resources for pretraining. The\nfocus of this paper is to reduce the computational\nrequirements of building an effective corpus index.\n6 Conclusion\nWe propose an efficient method for pretraining the\ndense corpus index which can replace the tradi-\n2811tional IR methods in open-domain QA systems.\nThe proposed approach is powered by a better data\ngeneration strategy and a simple yet effective data\nsampling protocol for pretraining. With careful\nfinetuning, we achieve stronger QA performance\nthan ORQA that uses much more computational\nresources. We hope our method could encourage\nmore energy-efficient pretraining methods in this\ndirection such that the dense retrieval paradigm\ncould be more widely used in different domains.\nAcknowledgement\nThe research was partly supported by the J.P. Mor-\ngan AI Research Award. The views and conclu-\nsions contained in this document are those of the\nauthors and should not be interpreted as represent-\ning the official policies, either expressed or implied,\nof the funding agency.\nReferences\nAkari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi,\nRichard Socher, and Caiming Xiong. 2020. Learn-\ning to retrieve reasoning paths over wikipedia graph\nfor question answering. In 8th International Confer-\nence on Learning Representations, ICLR 2020 , Ad-\ndis Ababa, Ethiopia.\nPetr Baudi ˇs. 2015. Yodaqa: a modular question an-\nswering system pipeline. In 19th International Stu-\ndent Conference on Electrical Engineering , pages\n1156–1165.\nPetr Baudis and Jan Sediv ´y. 2015. Modeling of the\nquestion answering task in the yodaqa system. In\n6th International Conference of the CLEF Associa-\ntion, CLEF 2015 , pages 222–228, Toulouse, France.\nSpringer.\nYoshua Bengio, J ´erˆome Louradour, Ronan Collobert,\nand Jason Weston. 2009. Curriculum learning. In\nProceedings of the 26th Annual International Con-\nference on Machine Learning, ICML 2009 , pages\n41–48, Montreal, Qu ´ebec, Canada. 
ACM.\nJonathan Berant, Andrew Chou, Roy Frostig, and Percy\nLiang. 2013. Semantic parsing on freebase from\nquestion-answer pairs. In Proceedings of the 2013\nConference on Empirical Methods in Natural Lan-\nguage Processing, EMNLP 2013 , pages 1533–1544,\nSeattle, Washington, USA. ACL.\nSamuel R. Bowman, Gabor Angeli, Christopher Potts,\nand Christopher D. Manning. 2015. A large anno-\ntated corpus for learning natural language inference.\nInProceedings of the 2015 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n632–642, Lisbon, Portugal. Association for Compu-\ntational Linguistics.Eric Brill, Susan T. Dumais, and Michele Banko. 2002.\nAn analysis of the askmsr question-answering sys-\ntem. In Proceedings of the 2002 Conference on\nEmpirical Methods in Natural Language Processing,\nEMNLP 2002 , Philadelphia, Pennsylvania, USA.\nWei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yim-\ning Yang, and Sanjiv Kumar. 2020. Pre-training\ntasks for embedding-based large-scale retrieval. In\n8th International Conference on Learning Represen-\ntations, ICLR 2020 , Addis Ababa, Ethiopia.\nDanqi Chen, Adam Fisch, Jason Weston, and Antoine\nBordes. 2017. Reading Wikipedia to answer open-\ndomain questions. In Proceedings of the 55th An-\nnual Meeting of the Association for Computational\nLinguistics , pages 1870–1879, Vancouver, Canada.\nAssociation for Computational Linguistics.\nQian Chen, Zhen-Hua Ling, and Xiaodan Zhu. 2018.\nEnhancing sentence embedding with generalized\npooling. In Proceedings of the 27th International\nConference on Computational Linguistics , pages\n1815–1826, Santa Fe, New Mexico, USA. Associ-\nation for Computational Linguistics.\nChristopher Clark and Matt Gardner. 2018. Simple\nand effective multi-paragraph reading comprehen-\nsion. In Proceedings of the 56th Annual Meeting of\nthe Association for Computational Linguistics, ACL\n2018 , pages 845–855, Melbourne, Australia. Associ-\nation for Computational Linguistics.\nAlexis Conneau, Douwe Kiela, Holger Schwenk, Lo ¨ıc\nBarrault, and Antoine Bordes. 2017. Supervised\nlearning of universal sentence representations from\nnatural language inference data. In Proceedings of\nthe 2017 Conference on Empirical Methods in Nat-\nural Language Processing , pages 670–680, Copen-\nhagen, Denmark. Association for Computational\nLinguistics.\nJacob Devlin, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019. BERT: Pre-training of\ndeep bidirectional transformers for language un-\nderstanding. In Proceedings of the 2019 Confer-\nence of the North American Chapter of the As-\nsociation for Computational Linguistics: Human\nLanguage Technologies, NAACL-HLT 2019 , pages\n4171–4186, Minneapolis, Minnesota, USA. Associ-\nation for Computational Linguistics.\nDavid A. Ferrucci, Eric W. Brown, Jennifer Chu-\nCarroll, James Fan, David Gondek, Aditya Kalyan-\npur, Adam Lally, J. William Murdock, Eric Nyberg,\nJohn M. Prager, Nico Schlaefer, and Christopher A.\nWelty. 2010. Building watson: An overview of the\ndeepqa project. AI Magazine , 31(3):59–79.\nKelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu-\npat, and Ming-Wei Chang. 2020. REALM: retrieval-\naugmented language model pre-training. CoRR ,\nabs/2002.08909.\n2812Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and\nAlexander G. Hauptmann. 2015. Self-paced curricu-\nlum learning. In Proceedings of the Twenty-Ninth\nAAAI Conference on Artificial Intelligence , pages\n2694–2700, Austin, Texas, USA. AAAI Press.\nJeff Johnson, Matthijs Douze, and Herv ´e J´egou. 
2019.\nBillion-scale similarity search with gpus. IEEE\nTransactions on Big Data .\nMandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke\nZettlemoyer. 2017. Triviaqa: A large scale distantly\nsupervised challenge dataset for reading comprehen-\nsion. In Proceedings of the 55th Annual Meeting of\nthe Association for Computational Linguistics, ACL\n2017 , pages 1601–1611, Vancouver, Canada. Asso-\nciation for Computational Linguistics.\nVladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell\nWu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.\n2020. Dense passage retrieval for open-domain\nquestion answering. CoRR , abs/2004.04906.\nDiederik P. Kingma and Jimmy Ba. 2015. Adam: A\nmethod for stochastic optimization. In 3rd Inter-\nnational Conference on Learning Representations,\nICLR 2015 , San Diego, California, USA.\nM. Pawan Kumar, Benjamin Packer, and Daphne\nKoller. 2010. Self-paced learning for latent variable\nmodels. In 24th Annual Conference on Neural Infor-\nmation Processing Systems, NIPS 2010 , pages 1189–\n1197, Vancouver, British Columbia, Canada. Curran\nAssociates, Inc.\nTom Kwiatkowski, Jennimaria Palomaki, Olivia Red-\nfield, Michael Collins, Ankur P. Parikh, Chris Al-\nberti, Danielle Epstein, Illia Polosukhin, Jacob De-\nvlin, Kenton Lee, Kristina Toutanova, Llion Jones,\nMatthew Kelcey, Ming-Wei Chang, Andrew M. Dai,\nJakob Uszkoreit, Quoc Le, and Slav Petrov. 2019.\nNatural questions: a benchmark for question answer-\ning research. Transactions of the Association for\nComputational Linguistics , 7:452–466.\nCody C. T. Kwok, Oren Etzioni, and Daniel S. Weld.\n2001. Scaling question answering to the web. In\nProceedings of the Tenth International World Wide\nWeb Conference, WWW 2001 , pages 150–161, Hong\nKong, China. ACM.\nJinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung\nKo, and Jaewoo Kang. 2018. Ranking paragraphs\nfor improving answer recall in open-domain ques-\ntion answering. In Proceedings of the 2018 Con-\nference on Empirical Methods in Natural Language\nProcessing , pages 565–569, Brussels, Belgium. As-\nsociation for Computational Linguistics.\nKenton Lee, Ming-Wei Chang, and Kristina Toutanova.\n2019. Latent retrieval for weakly supervised open\ndomain question answering. In Proceedings of the\n57th Annual Meeting of the Association for Com-\nputational Linguistics , pages 6086–6096, Florence,\nItaly. Association for Computational Linguistics.Mike Lewis, Yinhan Liu, Naman Goyal, Mar-\njan Ghazvininejad, Abdelrahman Mohamed, Omer\nLevy, Veselin Stoyanov, and Luke Zettlemoyer.\n2019. BART: denoising sequence-to-sequence pre-\ntraining for natural language generation, translation,\nand comprehension. CoRR , abs/1910.13461.\nYankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong\nSun. 2018. Denoising distantly supervised open-\ndomain question answering. In Proceedings of the\n56th Annual Meeting of the Association for Compu-\ntational Linguistics , pages 1736–1745, Melbourne,\nAustralia. Association for Computational Linguis-\ntics.\nLajanugen Logeswaran and Honglak Lee. 2018. An\nefficient framework for learning sentence represen-\ntations. In 6th International Conference on Learn-\ning Representations, ICLR 2018 , Vancouver, British\nColumbia, Canada.\nYi Luan, Jacob Eisenstein, Kristina Toutanova, and\nMichael Collins. 2020. Sparse, dense, and at-\ntentional representations for text retrieval. CoRR ,\nabs/2005.00181.\nXiyao Ma, Qile Zhu, Yanlin Zhou, Xiaolin Li, and\nDapeng Wu. 2020. 
Improving question generation\nwith sentence-level semantic matching and answer\nposition inferring. In Proceedings of the Thirty-\nFourth AAAI Conference on Artificial Intelligence,\nAAAI 2020 , New York, New York, USA. AAAI\nPress.\nSewon Min, Danqi Chen, Hannaneh Hajishirzi, and\nLuke Zettlemoyer. 2019a. A discrete hard EM ap-\nproach for weakly supervised question answering.\nInProceedings of the 2019 Conference on Empirical\nMethods in Natural Language Processing and the\n9th International Joint Conference on Natural Lan-\nguage Processing (EMNLP-IJCNLP) , pages 2851–\n2864, Hong Kong, China. Association for Computa-\ntional Linguistics.\nSewon Min, Danqi Chen, Luke Zettlemoyer, and Han-\nnaneh Hajishirzi. 2019b. Knowledge guided text re-\ntrieval and reading for open domain question answer-\ning. CoRR , abs/1911.03868.\nAnkur Parikh, Oscar T ¨ackstr ¨om, Dipanjan Das, and\nJakob Uszkoreit. 2016. A decomposable attention\nmodel for natural language inference. In Proceed-\nings of the 2016 Conference on Empirical Methods\nin Natural Language Processing , pages 2249–2255,\nAustin, Texas. Association for Computational Lin-\nguistics.\nPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and\nPercy Liang. 2016. Squad: 100, 000+ questions\nfor machine comprehension of text. In Proceedings\nof the 2016 Conference on Empirical Methods in\nNatural Language Processing, EMNLP 2016 , pages\n2383–2392, Austin, Texas, USA. The Association\nfor Computational Linguistics.\n2813Adam Roberts, Colin Raffel, and Noam Shazeer. 2020.\nHow much knowledge can you pack into the param-\neters of a language model? CoRR , abs/2002.08910.\nStephen E. Robertson and Hugo Zaragoza. 2009. The\nprobabilistic relevance framework: BM25 and be-\nyond. Foundations and Trends in Information Re-\ntrieval , 3(4):333–389.\nMin Joon Seo, Jinhyuk Lee, Tom Kwiatkowski,\nAnkur P. Parikh, Ali Farhadi, and Hannaneh Ha-\njishirzi. 2019. Real-time open-domain question an-\nswering with dense-sparse phrase index. In Proceed-\nings of the 57th Conference of the Association for\nComputational Linguistics, ACL 2019 , pages 4430–\n4441, Florence, Italy. Association for Computational\nLinguistics.\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N. Gomez, Lukasz\nKaiser, and Illia Polosukhin. 2017. Attention is\nall you need. In Thirty-first Annual Conference on\nNeural Information Processing Systems, NIPS 2027 ,\npages 5998–6008, Long Beach, California, USA.\nEllen M. V oorhees. 1999. The TREC-8 question an-\nswering track report. In Proceedings of The Eighth\nText REtrieval Conference, TREC 1999 , Gaithers-\nburg, Maryland, USA. National Institute of Stan-\ndards and Technology.\nShuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang,\nTim Klinger, Wei Zhang, Shiyu Chang, Gerry\nTesauro, Bowen Zhou, and Jing Jiang. 2018a. R3:\nReinforced ranker-reader for open-domain question\nanswering. In Proceedings of the Thirty-Second\nAAAI Conference on Artificial Intelligence, AAAI\n2018 , pages 5981–5988, New Orleans, Louisiana,\nUSA. AAAI Press.\nShuohang Wang, Mo Yu, Jing Jiang, Wei Zhang,\nXiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim\nKlinger, Gerald Tesauro, and Murray Campbell.\n2018b. Evidence aggregation for answer re-ranking\nin open-domain question answering. In 6th Inter-\nnational Conference on Learning Representations,\nICLR 2018 , Vancouver, British Columbia, Canada.\nZhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nal-\nlapati, and Bing Xiang. 2019. 
Multi-passage\nBERT: A globally normalized BERT model for\nopen-domain question answering. In Proceedings\nof the 2019 Conference on Empirical Methods in\nNatural Language Processing and the 9th Interna-\ntional Joint Conference on Natural Language Pro-\ncessing, EMNLP-IJCNLP 2019 , pages 5877–5881,\nHong Kong, China. Association for Computational\nLinguistics.\nWenhan Xiong, Jingfei Du, William Yang Wang, and\nVeselin Stoyanov. 2020. Pretrained encyclope-\ndia: Weakly supervised knowledge-pretrained lan-\nguage model. In 8th International Conference on\nLearning Representations, ICLR 2020 , Addis Ababa,\nEthiopia.Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen\nTan, Kun Xiong, Ming Li, and Jimmy Lin. 2019.\nEnd-to-end open-domain question answering with\nBERTserini. In Proceedings of the 2019 Confer-\nence of the North American Chapter of the Asso-\nciation for Computational Linguistics (Demonstra-\ntions) , pages 72–77, Minneapolis, Minnesota. Asso-\nciation for Computational Linguistics.\nYizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan,\nRicardo Henao, and Lawrence Carin. 2017. De-\nconvolutional paragraph representation learning. In\nThirty-first Annual Conference on Neural Informa-\ntion Processing Systems, NIPS 2017 , pages 4169–\n4179, Long Beach, California, USA.\n2814A Appendix\nA.1 Further Implementation Details\nThe pretraining process takes 4 TITAN RTX GPUs,\neach with a 24G memory. We use the NVIDIA\nApex package for mixed-precision training. 90K\nparameter updates take around 7 days to finish. For\nQA experiments, We use the IndexIVFFlat index\nfor efficient search. We assign all the vectors to\n100 V oronoi cells and only search from the closest\n20 cells. The random seed is set as 3 for all QA\ndatasets. We use a batch size of 8 (8 questions,\neach of them are paired with 5 paragraphs) for\nNaturalQuestions-Open and 1 for the other datasets.\nWe limit the maximum answer length to 10 sub-\nword tokens. For NaturalQuestions-Open, we eval-\nuate the model every 1000 updates and save the best\ncheckpoint based on validation EM. For WebQue-\nsitons and CuratedTREC, we evaluate the model\nafter every epoch. As neither of these two small\ndatasets has an official dev set, we use a small split\nto find the best hyperparameters and then retrain\nthe model with all the training questions. To accel-\nerate training, especially for the early loss function\nwhich requires annotate the top5000 retrieved para-\ngraphs, we pre-annotate the top10000 paragraphs\nretrieved by the untuned retrieval module and build\nan answer paragraph set for each question. At fine-\ntuning time, we direct check whether a particular\nparagraph is in the precomputed paragraph set, in-\nstead of doing string matching for each of the 5000\nparagraphs. Our BERT implementations are based\non huggingface Transformers5.\nA.2 Qualitative Examples\nHere we include more examples that complement\nthe results and analysis of the paper. Table 5 shows\nthe generated questions from the finetuned BART\nmodel and Table 6 complements the error analysis.\n5https://github.com/huggingface/\ntransformers\n2815Gold Paragraph: “Does He Love You” is a song written by Sandy Knox and Billy Stritch, and recorded as a\nduet by American country music artists Reba McEntire and Linda Davis . It was released in August 1993 as the\nfirst single from Reba’s album Greatest Hits V olume Two. 
It is one of country music ’s several songs about a\nlove triangle.\nOriginal Question: Who sings does he love me with reba?\nGenerated Question: Who sings with reba mcentire on does he love you?\nGold Paragraph: Invisible Man First edition Author Ralph Ellison Country United States Language English\nGenre Bildungsroman African-American literature social commentary Publisher Random House Publication\ndate 1952 Media type Print (hardcover and paperback) Pages 581 (second edition) ...\nOriginal Question: How many pages is invisible man by ralph ellison?\nGenerated Question: How many pages in the invisible man by ralph ellison?\nGold Paragraph: The Great Lakes (French: les Grands-Lacs), also called the Laurentian Great Lakes and the\nGreat Lakes of North America, are a series of interconnected freshwater lakes located primarily in the upper\nmid-east region of North America, on the Canada–United States border, which connect to the Atlantic Ocean\nthrough the Saint Lawrence River . They consist of Lakes Superior, Michigan, Huron (or Michigan–Huron), Erie,\nand Ontario.\nOriginal Question: Where do the great lakes meet the ocean?\nGenerated Question: Where do the great lakes of north america meet the atlantic?\nGold Paragraph: My Hero Academia: Two Heroes , Hepburn:Boku no Hiro Academia THE MOVIE: Futari\nno Hiro) is a 2018 Japanese anime superhero film based on the manga My Hero Academia by Kohei Horikoshi.\nSet between the second and third seasons of the anime series, the film was directed by Kenji Nagasaki and\nproduced by Bones. Anime Expo hosted the film’s world premiere on July 5, 2018 , and it was later released to\ntheaters in Japan on August 3, 2018.\nOriginal Question: When does the new my hero academia movie come out?\nGenerated Question: When does the my hero academia two heroes movie come out?\nGold Paragraph: Victoria’s Secret Store, 722 Lexington Ave, New York, NY Type Subsidiary Industry Apparel\nFounded June 12, 1977; 40 years ago (1977-06-12 ) Stanford Shopping Center, Palo Alto, California, U.S.\nFounder Roy Raymond Headquarters Three Limited Parkway, Columbus , Ohio , U.S. Number of locations\n1,017 company - owned stores 18 independently owned stores Area served ...\nOriginal Question: Who was the creator of victoria’s secret?\nGenerated Question: Who is the founder of victoria’s secret and when was it founded?\nTable 5: Samples of the generated questions. The answer spans are underlined. Here we show the generated\nquestions for samples at the beginning of the official NaturalQuestions-Open dev data. 
We only skip the samples\nwhose gold paragraphs are not natural paragraphs ( e.g., incomplete sentences).\nQuestion: What is a ford mondeo in the usa?\nAnnotated Answers: ford contour, mercury mystique, ford fusion\nambiguous; could be asking about a particular car type (mid-sized car) instead of brand series\nQuestion: air flow in the eye of a hurricane?\nAnnotated Answers: no wind\nambiguous; question itself is hard to understand\nQuestion: Who wrote I’ll be there for you?\nAnnotated Answers: Michael Skloff, Marta Kauffman, Allee Willis, David Crane,\nPhil Solem, Danny Wilde, The Rembrandts\nambiguous; there are multiple songs having this name\nQuestion: Where do you go for phase 1 training?\nAnnotated Answers: army foundation college\nambiguous; the meaning of phase 1 is vague, could have different meanings in different context\nQuestion: When does the new spiderman series come out?\nAnnotated Answers: August 19 , 2017\nambiguous; time constraint missing\nQuestion: Where did the super bowl take place this year?\nAnnotated Answers: minneapolis, minnesota\nambiguous; the year cannot be inferred from the question words alone\nTable 6: Error cases that include ambiguous questions.",
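To make the retrieval-index settings in Appendix A.1 above concrete, here is a minimal sketch of an IVF index with 100 cells and a 20-cell search scope using the FAISS library. The vector dimensionality, the inner-product metric, and the placeholder vectors are illustrative assumptions for this sketch, not values taken from the paper.

```python
import faiss
import numpy as np

d = 128                                   # vector dimensionality (assumption)
nlist = 100                               # number of Voronoi cells, as stated in Appendix A.1
quantizer = faiss.IndexFlatIP(d)          # coarse quantizer over cell centroids
index = faiss.IndexIVFFlat(quantizer, d, nlist, faiss.METRIC_INNER_PRODUCT)

xb = np.random.rand(10000, d).astype("float32")   # placeholder paragraph vectors
index.train(xb)                           # learn the cell centroids
index.add(xb)

index.nprobe = 20                         # search only the 20 closest cells, as stated above
xq = np.random.rand(4, d).astype("float32")       # placeholder question vectors
scores, ids = index.search(xq, 5)         # top-5 paragraphs per question
```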
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Vut-FkWtMB7",
"year": null,
"venue": "ICRA 2015",
"pdf_link": "https://ieeexplore.ieee.org/iel7/7128761/7138973/07139467.pdf",
"forum_link": "https://openreview.net/forum?id=Vut-FkWtMB7",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Estimating a Mean-Path from a set of 2-D curves",
"authors": [
"Amir M. Ghalamzan E.",
"Luca Bascetta",
"Marcello Restelli",
"Paolo Rocco"
],
"abstract": "To perform many common industrial robotic tasks, e.g. deburring a work-piece, in small and medium size companies where a model of the work-piece may not be available, building a geometrical model of how to perform the task from a data set of human demonstrations is highly demanded. In many cases, however, the human demonstrations may be sub-optimal and noisy solutions to the problem of performing a task. For example, an expert may not completely remove the burrs that result in deburring residuals on the work-piece. Hence, we present an iterative algorithm to estimate a noise-free geometrical model of a work-piece from a given dataset of profiles with deburring residuals. In a case study, we compare the profiles obtained with the proposed method, nonlinear principal component analysis and Gaussian mixture model/Gaussian mixture regression. The comparison illustrates the effectiveness of the proposed method, in terms of accuracy, to compute a noise-free profile model of a task.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "mr6KtWPJEuV",
"year": null,
"venue": "ICRA 2015",
"pdf_link": "https://ieeexplore.ieee.org/iel7/7128761/7138973/07139467.pdf",
"forum_link": "https://openreview.net/forum?id=mr6KtWPJEuV",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Estimating a Mean-Path from a set of 2-D curves",
"authors": [
"Amir M. Ghalamzan E.",
"Luca Bascetta",
"Marcello Restelli",
"Paolo Rocco"
],
"abstract": "To perform many common industrial robotic tasks, e.g. deburring a work-piece, in small and medium size companies where a model of the work-piece may not be available, building a geometrical model of how to perform the task from a data set of human demonstrations is highly demanded. In many cases, however, the human demonstrations may be sub-optimal and noisy solutions to the problem of performing a task. For example, an expert may not completely remove the burrs that result in deburring residuals on the work-piece. Hence, we present an iterative algorithm to estimate a noise-free geometrical model of a work-piece from a given dataset of profiles with deburring residuals. In a case study, we compare the profiles obtained with the proposed method, nonlinear principal component analysis and Gaussian mixture model/Gaussian mixture regression. The comparison illustrates the effectiveness of the proposed method, in terms of accuracy, to compute a noise-free profile model of a task.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ar91SmGOeh",
"year": null,
"venue": "EUROSPEECH 1999",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=ar91SmGOeh",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The IBM conversational telephony system for financial applications",
"authors": [
"K. Davies",
"Robert E. Donovan",
"Mark Epstein",
"Martin Franz",
"Abraham Ittycheriah",
"Ea-Ee Jan",
"Jean-Michel LeRoux",
"David M. Lubensky",
"Chalapathy Neti",
"Mukund Padmanabhan",
"Kishore Papineni",
"Salim Roukos",
"Andrej Sakrajda",
"Jeffrey S. Sorensen",
"Borivoj Tydlitát",
"Todd Ward"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "_cUFagajfzZ",
"year": null,
"venue": "CCS 2022",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=_cUFagajfzZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Kryvos: Publicly Tally-Hiding Verifiable E-Voting",
"authors": [
"Nicolas Huber",
"Ralf Küsters",
"Toomas Krips",
"Julian Liedtke",
"Johannes Müller",
"Daniel Rausch",
"Pascal Reisert",
"Andreas Vogt"
],
"abstract": "Elections are an important corner stone of democratic processes. In addition to publishing the final result (e.g., the overall winner), elections typically publish the full tally consisting of all (aggregated) individual votes. This causes several issues, including loss of privacy for both voters and election candidates as well as so-called Italian attacks that allow for easily coercing voters. Several e-voting systems have been proposed to address these issues by hiding (parts of) the tally. This property is called tally-hiding. Existing tally-hiding e-voting systems in the literature aim at hiding (part of) the tally from everyone, including voting authorities, while at the same time offering verifiability, an important and standard feature of modern e-voting systems which allows voters and external observers to check that the published election result indeed corresponds to how voters actually voted. In contrast, real elections often follow a different common practice for hiding the tally: the voting authorities internally compute (and learn) the full tally but publish only the final result (e.g., the winner). This practice, which we coin publicly tally-hiding, indeed solves the aforementioned issues for the public, but currently has to sacrifice verifiability due to a lack of practical systems. In this paper, we close this gap. We formalize the common notion of publicly tally-hiding and propose the first provably secure verifiable e-voting system, called Kryvos, which directly targets publicly tally-hiding elections. We instantiate our system for a wide range of both simple and complex voting methods and various result functions. We provide an extensive evaluation which shows that Kryvos is practical and able to handle a large number of candidates, complex voting methods and result functions. Altogether, Kryvos shows that the concept of publicly tally-hiding offers a new trade-off between privacy and efficiency that is different from all previous tally-hiding systems and which allows for a radically new protocol design resulting in a practical e-voting system.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "IdK4o4H5gUs",
"year": null,
"venue": "IEEE Trans. Parallel Distributed Syst. 2001",
"pdf_link": "https://ieeexplore.ieee.org/iel5/71/19653/00910869.pdf",
"forum_link": "https://openreview.net/forum?id=IdK4o4H5gUs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Implementing E-Transactions with Asynchronous Replication",
"authors": [
"Svend Frølund",
"Rachid Guerraoui"
],
"abstract": "This paper describes a distributed algorithm that implements the abstraction of e-Transaction: a transaction that executes exactly-once despite failures. Our algorithm is based on an asynchronous replication scheme that generalizes well-known active-replication and primary-backup schemes. We devised the algorithm with a three-tier architecture in mind: the end-user interacts with front-end clients (e.g., browsers) that invoke middle-tier application servers (e.g., web servers) to access back-end databases. The algorithm preserves the three-tier nature of the architecture and introduces a very acceptable overhead with respect to unreliable solutions.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "XVzcOSSimHb",
"year": null,
"venue": "undefined 2012",
"pdf_link": "https://repositorio.ufmg.br/bitstream/1843/ESBF-8XFJT6/1/ericksonrangel.pdf",
"forum_link": "https://openreview.net/forum?id=XVzcOSSimHb",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Um descritor robusto e eficiente de pontos de interesse: desenvolvimento e aplicações",
"authors": [
"Erickson Rangel do Nascimento"
],
"abstract": "Diferentes metodologias para reconhecimento de objetos, reconstrução e alinhamento tridimensional, possuem no cerne de seu desenvolvimento o problema de correspondência. Devido à ambiguidade em nosso mundo e à presença de ruídos nos processos de aquisições de dados, obter correspondências de qualidade é umdos maiores desafios em Robótica e Visão Computacional. Dessa maneira, a criação de descritores que identifiquem os elementos a serem correspondidos e que sejam capazes de gerar pares correspondentes corretamente é de grande importância. Nesta tese, introduzimos três novos descritores que combinam de maneira eficiente aparência e informação geométrica de images RGB-D. Os descritores apresentados neste trabalho são largamente invariantes a rotação, mudanças de iluminação e escala. Além disso, para aplicações cujo principal requisito é o baixo consumo computacional em detrimento de alta precisão na correspondência, a invariância a rotação e escala podem ser facilmente desabilitadas sem grande perda na qualidadede discriminância dos descritores. Os resultados dos experimentos realizados nesta tese demonstram que nossos descritores, quando comparados a três descritores padrões da literatura, SIFT, SURF(para images com texturas) e Spin-Images (para dados geométricos) e ao estado da arte CSHOT, foram mais robustos e precisos. Foram também realizados experimentos com os descritores em duas aplicações distintas. Nós os utilizamos para a detecção e reconhecimento de objetos sob diferentes condições de iluminação para a construção de mapas com informações semânticas e para o registro de múltiplos mapas com profundidade e textura. Em ambas as aplicações, nossos descritores demonstraram-se mais adequados do que outras abordagens, tendo sido superiores em tempo de processamento, consumo de memória, taxa de reconhecimento e qualidade do registro.",
"keywords": [],
"raw_extracted_content": "UM DESCRITOR ROBUSTO E EFICIENTE DE\nPONTOS DE INTERESSE: DESENVOLVIMENTO E\nAPLICAÇÕES\n\nERICKSON RANGEL DO NASCIMENTO\nUM DESCRITOR ROBUSTO E EFICIENTE DE\nPONTOS DE INTERESSE: DESENVOLVIMENTO E\nAPLICAÇÕES\nTese apresentada ao Programa de Pós-\n-Graduação em Ciência da Computação\ndo Instituto de Ciências Exatas da Univer-\nsidade Federal de Minas Gerais como req-\nuisito parcial para a obtenção do grau de\nDoutor em Ciência da Computação.\nORIENTADOR : M ARIO FERNANDO MONTENEGRO CAMPOS\nBelo Horizonte\nAgosto de 2012\n\nERICKSON RANGEL DO NASCIMENTO\nON THE DEVELOPMENT OF A ROBUST, FAST AND\nLIGHTWEIGHT KEYPOINT DESCRIPTOR AND ITS\nAPPLICATIONS\nThesis presented to the Graduate Pro-\ngram in Computer Science of the Federal\nUniversity of Minas Gerais in partial ful-\nfillment of the requirements for the degree\nof Doctor in Computer Science.\nADVISOR : M ARIO FERNANDO MONTENEGRO CAMPOS\nBelo Horizonte\nAugust 2012\nc\r 2012, Erickson Rangel do Nascimento.\nTodos os direitos reservados.\nNascimento, Erickson Rangel do\nN244d Um descritor robusto e eficiente de pontos de\ninteresse: desenvolvimento e aplicações / Erickson\nRangel do Nascimento. — Belo Horizonte, 2012\nxxvi, 86 f. : il. ; 29cm\nTese (doutorado) — Universidade Federal de\nMinas Gerais — Departamento de Ciência da\nComputação.\nOrientador: Mario Fernando Montenegro\nCampos.\n1. Computação - Teses. 2. Visão Computacional -\nTeses. I. Orientador. II. Título.\nCDU 519.6*84(043)\n\n\nTo my parents, Pekison and Maria Geralda, my sisters, Tamarakajalgina and Kellyuri,\nmy beloved Marcela and my tutor and dear friend Cleber Resende (in memoriam), who\nbelieved in me and taught me to face the challenges with passion and humility.\nix\n\nAcknowledgments\nA doctorate is a work of numerous individuals providing indispensable support,\ncollaborating with their skills and goodwill. In fact, it is not a work of only one\nperson. Thus, I would like to seize this chance to express my gratitude towards all\nthat helped me in my research work.\nI would like to express my very great appreciation to my advisor Prof. Mario F.\nM. Campos for his valuable and constructive suggestions during the development\nof this thesis. His great advice on how to do high quality research significantly\ncontributed for the results of this work and I will always take with me.\nI would also like thank the thesis committee members, professors Flavio L. C.\nPádua, Renato C. Mesquita, Thomas M. Lewiner and William R. Schwartz for their\nsuggestions to the improvement of this research work. Special thanks should be\ngiven to Prof. William for his assistance in the object recognition experiments.\nMany thanks go to all VeRLab members for creating a pleasant working atmo-\nsphere. I would like to specially acknowledge the collaboration and prolific discus-\nsions with Gabriel L. Oliveira, Antônio W. Vieira, Vilar C. Neto and Armando A.\nNeto. My thanks are also extended to James Milligan and Emily LeBlanc for their\nvery careful review of this text.\nMy grateful thanks to colleagues, professors and staff of the Computer Science\nDepartment of the UFMG, specially Renata, Sonia, Sheila, Linda and Tulia, who\nalways were available to help me with the paperwork with a minimum of obstacles.\nI wish to thank my parents, Pekison and Maria Geralda, and my sisters Tama-\nrakajalgina and Kellyuri, for always believing in me. 
To my family, my deepest\ngratitude.\nFinally, I want to thank my beloved Marcela, for her continuous support and\naffection wich help me to maintain my sanity. This work would not be possible\nwithout your support.\nThis research was supported by CNPq and CAPES.\nxi\n\n“Olhe o mundo.”\n(Cleber Gonçalves Resende)\nxiii\n\nResumo\nDiferentes metodologias para reconhecimento de objetos, reconstrução e alin-\nhamento tridimensional, possuem no cerne de seu desenvolvimento o problema de\ncorrespondência. Devido à ambiguidade em nosso mundo e à presença de ruídos\nnos processos de aquisições de dados, obter correspondências de qualidade é um\ndos maiores desafios em Robótica e Visão Computacional. Dessa maneira, a criação\nde descritores que identifiquem os elementos a serem correspondidos e que sejam\ncapazes de gerar pares correspondentes corretamente é de grande importância.\nNesta tese, introduzimos três novos descritores que combinam de maneira efi-\nciente aparência e informação geométrica de images RGB-D. Os descritores apresen-\ntados neste trabalho são largamente invariantes a rotação, mudanças de iluminação\ne escala. Além disso, para aplicações cujo principal requisito é o baixo consumo\ncomputacional em detrimento de alta precisão na correspondência, a invariância a\nrotação e escala podem ser facilmente desabilitadas sem grande perda na qualidade\nde discriminância dos descritores.\nOs resultados dos experimentos realizados nesta tese demonstram que nossos\ndescritores, quando comparados a três descritores padrões da literatura, SIFT, SURF\n(para images com texturas) e Spin-Images (para dados geométricos) e ao estado da\narte CSHOT, foram mais robustos e precisos.\nForam também realizados experimentos com os descritores em duas apli-\ncações distintas. Nós os utilizamos para a detecção e reconhecimento de objetos sob\ndiferentes condições de iluminação para a construção de mapas com informações\nsemânticas e para o registro de múltiplos mapas com profundidade e textura. Em\nambas as aplicações, nossos descritores demonstraram-se mais adequados do que\noutras abordagens, tendo sido superiores em tempo de processamento, consumo de\nmemória, taxa de reconhecimento e qualidade do registro.\nPalavras-chave: Visão Computacional, Descritores, Pontos de Interesse, Imagens\nRGB-D.\nxv\n\nAbstract\nAt the core of a myriad of tasks such as object recognition, tridimensional recon-\nstruction and alignment resides the critical problem of correspondence. Due to the\nambiguity in our world and the presence of noise in the data aquisition process, per-\nforming high quality correspondence is one of the most challenging tasks in robotics\nand computer vision. Hence, devising descriptors, which identify the entities to be\nmatched and that are able to correctly and reliably establish pairs of corresponding\npoints is of central importance.\nIn this thesis, we introduce three novel descriptors that efficiently combine ap-\npearance and geometrical shape information from RGB-D images, and are largely\ninvariant to rotation, illumination changes and scale transformations. For applica-\ntions that demand speed performance in lieu of a sophisticated and more precise\nmatching process, scale and rotation invariance may be easily disabled. 
Results of\nseveral experiments described here demonstrate that as far as precision and robust-\nness are concerned, our descriptors compare favorably to three standard descrip-\ntors in the literature, namely: SIFT, SURF (for textured images) and Spin-Images\n(for geometrical shape information). In addition, they outperfom the state-of-the-\nart CSHOT, which, as well as our descriptors, combines texture and geometry.\nWe use these new descriptors to detect and recognize objects under different\nillumination conditions to provide semantic information in a mapping task. Fur-\nthermore, we apply our descriptors for registering multiple indoor textured depth\nmaps, and demonstrate that they are robust and provide reliable results even for\nsparsely textured and poorly illuminated scenes. In these two applications we com-\npare the performance of our descriptors against the standard ones in the literature\nand the state-of-the-art. Experimental results show that our descriptors are supe-\nrior to the others in processing time, memory consumption, recognition rate and\nalignment quality.\nKeywords: Computer Vision, Descriptors, Keypoints, RGB-D Images.\nxvii\n\nList of Figures\n1.1 Matching feature descriptors diagram. . . . . . . . . . . . . . . . . . . . . 2\n1.2 Examples from works that use geometrical and intensity information . . 4\n1.3 Examples of domestic tridimensional sensors . . . . . . . . . . . . . . . . 5\n2.1 Scale-space and image pyramid . . . . . . . . . . . . . . . . . . . . . . . . 10\n2.2 Steps to compute a SIFT descriptor . . . . . . . . . . . . . . . . . . . . . . 12\n2.3 Steps to compute a SURF descriptor . . . . . . . . . . . . . . . . . . . . . 13\n2.4 Spatial arrangement used by BRIEF to region analysis . . . . . . . . . . . 14\n2.5 Cylindrical coordinate system of Spin-Image . . . . . . . . . . . . . . . . 17\n2.6 Spin-Image accumulation process . . . . . . . . . . . . . . . . . . . . . . . 18\n2.7 Isotropic Spherical Grid used by CSHOT descriptor . . . . . . . . . . . . 19\n3.1 Methodology Diagram for Descriptor Creation . . . . . . . . . . . . . . . 24\n3.2 Computing SURF canonical orientation . . . . . . . . . . . . . . . . . . . 26\n3.3 Assembly diagram of EDVD descriptor . . . . . . . . . . . . . . . . . . . 28\n3.4 Representation of normals in sphere coordinates . . . . . . . . . . . . . . 28\n3.5 Binary descriptor diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 30\n3.6 Example of the normal displacement test . . . . . . . . . . . . . . . . . . 31\n3.7 Example of ambiguity in the dot product . . . . . . . . . . . . . . . . . . 31\n3.8 Assembly diagram of BASE descriptor . . . . . . . . . . . . . . . . . . . . 32\n4.1 Frame samples from Freiburg RGB-D dataset . . . . . . . . . . . . . . . . 38\n4.2 Distribution used to select pixels . . . . . . . . . . . . . . . . . . . . . . . 40\n4.3 Canonical orientation and selection pattern for EDVD . . . . . . . . . . . 42\n4.4 Accurate versus Fast normal estimation for EDVD . . . . . . . . . . . . . 42\n4.5 The best angular threshold and descriptor size for BRAND . . . . . . . . 43\n4.6 Accurate versus Fast normal estimation for BRAND . . . . . . . . . . . . 44\n4.7 Best binary operator and fusion process for BRAND . . . . . . . . . . . . 44\n4.8 Canonical orientation and selection pattern for BRAND . . . . . . . . . . 46\nxix\n4.9 Histogram of the descriptor bit mean values for BRIEF, BASE and\nBRAND over 50k keypoints. . . . . . . . . . . . . . . . . . . . . . . . . . . 
47\n4.10 PCA decomposition over 50k keypoints of BRIEF, BASE and BRAND. . 47\n4.11 Matching comparison results . . . . . . . . . . . . . . . . . . . . . . . . . 48\n4.12 Three-dimensional matching example for two scenes using BRAND de-\nscriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49\n4.13 Rotation Invariance Results . . . . . . . . . . . . . . . . . . . . . . . . . . 50\n4.14 Processing time and memory consumption . . . . . . . . . . . . . . . . . 51\n4.15 Descriptors accuracy with different keypoint detectors . . . . . . . . . . 52\n5.1 Objects from Intel Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . 57\n5.2 Confusion matrices for experiments in RGB-D Object Dataset . . . . . . 59\n5.3 Dataset used for semantic mapping experiments . . . . . . . . . . . . . . 61\n5.4 Semantic mapping experimental setup . . . . . . . . . . . . . . . . . . . . 62\n5.5 ROC curve of matching using BASE descriptor . . . . . . . . . . . . . . . 63\n5.6 CPU time for Learning and Classification steps . . . . . . . . . . . . . . . 64\n5.7 Confusion matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65\n5.8 Semantic Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66\n5.9 Dataset used in the alignment tests . . . . . . . . . . . . . . . . . . . . . . 71\n5.10 Three-dimensional point clouds alignment of VeRLab laboratory . . . . 72\n5.11 Registration of a partially illuminated lab . . . . . . . . . . . . . . . . . . 73\n5.12 Relative Pose Error (RPE) for rotational error in degrees. . . . . . . . . . 74\n5.13 Relative Pose Error (RPE) for translational error in meters. . . . . . . . . 74\n5.14 Translational error (RPE) for several differents distance between the frames. 75\n5.15 Rotational error (RPE) for several differents distance among the frames. 75\nxx\nList of Tables\n2.1 Descriptors rating based on the properties of an robust descriptor. . . . . 21\n3.1 Properties of descriptors EDVD, BRAND and BASE. . . . . . . . . . . . . 35\n4.1 Average processing time (over 300 point clouds) to compute normal sur-\nfaces from point cloud with 640\u0002480points. . . . . . . . . . . . . . . . . 41\n4.2 Ambiguity cases with the ORoperator . . . . . . . . . . . . . . . . . . . . 45\n5.1 Table with the mean values of the ICP error. . . . . . . . . . . . . . . . . . 70\n5.2 Table with the mean values of time spent to register two clouds. . . . . . 70\n5.3 Table with the average number of inliers retained by SAC in the coarse. . 70\nxxi\n\nAcronym List\nEDVD Enhanced Descriptor for Visual and Depth Data\nBASE Binary Appearance and Shape Elements\nBRAND Binary Robust Appearance and Normal Descriptor\nCSHOT Color-SHOT\nSHOT Signature of Histograms of Orientations\nVOSCH Voxelized Shape and Color Histograms\nSLAM Simultaneous Localization And Mapping\nSIFT Scale Invariant Feature Transform\nSURF Speeded-Up Robust Features\nBRIEF Binary Robust Independent Elementary Features\nROC Relative Operating Characteristic\nICP Iterative Closest Point\nLIDAR Light Detection And Ranging\nPCA Principal Component Analysis\nLBP Local Binary Patterns\nAUC Area Under Curve\nBoF Bag of Features\nPLS Partial Least Squares\nSAC Sampled Consensus-Initial Alignment\nxxiii\nFOV Field of View\nIMU Inertial Measurement Unit\nxxiv\nContents\nAcknowledgments xi\nResumo xv\nAbstract xvii\nList of Figures xix\nList of Tables xxi\nAcronym List xxiii\n1 Introduction 1\n1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
4\n1.2 Thesis Goals and Contributions . . . . . . . . . . . . . . . . . . . . . . 5\n1.3 Organization of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . . 7\n2 Related Work 9\n2.1 Keypoint Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9\n2.2 Descriptor Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11\n2.2.1 SIFT Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . 11\n2.2.2 SURF Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . 13\n2.2.3 BRIEF Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . 14\n2.3 Geometrical Descriptor Extraction . . . . . . . . . . . . . . . . . . . . 15\n2.3.1 Spin-Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16\n2.4 Fusing Image and Geometrical Information . . . . . . . . . . . . . . . 17\n2.4.1 CSHOT Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . 19\n2.5 Descriptors Rating based on the \u0005Set . . . . . . . . . . . . . . . . . . 20\n3 A Computational Approach to Creation of Keypoint Descriptors 23\nxxv\n3.1 General Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23\n3.1.1 Scale Assignment . . . . . . . . . . . . . . . . . . . . . . . . . . 24\n3.1.2 Canonical Orientation Estimation . . . . . . . . . . . . . . . . . 25\n3.1.3 Appearance and Geometry Fusion . . . . . . . . . . . . . . . . 26\n3.2 EDVD Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27\n3.3 BRAND Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29\n3.4 BASE descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32\n3.5 Invariant Measurements of BRAND and BASE . . . . . . . . . . . . . 33\n3.6 Rating EDVD, BRAND and BASE based on the \u0005set . . . . . . . . . . 35\n4 Experiments 37\n4.1 Parameter Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39\n4.1.1 EDVD descriptors . . . . . . . . . . . . . . . . . . . . . . . . . . 41\n4.1.2 BRAND and BASE descriptors . . . . . . . . . . . . . . . . . . 43\n4.2 Matching Performance Evaluation . . . . . . . . . . . . . . . . . . . . 48\n4.3 Rotation Invariance and Robustness to Noise Experiments . . . . . . 49\n4.4 Processing time and Memory Consumption . . . . . . . . . . . . . . . 50\n4.5 Keypoint Detector versus Accuracy . . . . . . . . . . . . . . . . . . . . 51\n4.6 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52\n5 Applications 55\n5.1 Semantic Mapping and Object Recognition . . . . . . . . . . . . . . . 55\n5.1.1 Object Recognition . . . . . . . . . . . . . . . . . . . . . . . . . 56\n5.1.2 Object Recognition Using the BASE descriptor . . . . . . . . . 58\n5.1.3 Mapping System . . . . . . . . . . . . . . . . . . . . . . . . . . 60\n5.1.4 Recognition Results . . . . . . . . . . . . . . . . . . . . . . . . . 60\n5.1.5 Mapping Results . . . . . . . . . . . . . . . . . . . . . . . . . . 64\n5.2 Three-dimensional Alignment . . . . . . . . . . . . . . . . . . . . . . . 65\n5.2.1 RGB-D Point Cloud Alignment Approach . . . . . . . . . . . . 67\n5.2.2 Alignment Results . . . . . . . . . . . . . . . . . . . . . . . . . 69\n6 Conclusions and Future Work 77\n6.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77\n6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79\nBibliography 81\nxxvi\nChapter 1\nIntroduction\nAT THE HEART OF NUMEROUS TASKS both in robotics and computer vision re-\nsides the crucial problem of correspondence. 
Methodologies for building accurate tridimensional models of scenes, Simultaneous Localization And Mapping (SLAM), tracking, and object recognition and detection algorithms are some examples of techniques in which correspondence plays a central role in the processing pipeline. Among these methodologies, we are particularly interested in 3D model building and object recognition. Methodologies for 3D model building usually have to handle alignment and registration issues by finding a set of corresponding points in two different views. The learning algorithms used in object detection and recognition rely on selecting corresponding points to reduce data dimensionality, which makes data-intensive model building a manageable task.
The correspondence problem consists of organizing pairs of entities, e.g. pixels in images X and Y, according to a similarity function. The aim is to find a relation fc which determines for every element x ∈ X a corresponding element y ∈ Y. Formally, for every x ∈ X there exists y ∈ Y such that g(y, fc(x)) = 0, where g is a similarity function and y is the corresponding element of x.
Due to ambiguity in our world and the presence of noise in the data acquisition process, finding fc is one of the most challenging tasks in robotics and computer vision. For example, in the correspondence of pixels in two images, the same world point may appear different under distinct imaging conditions, and different points may present identical appearance when observed from different viewpoints. Furthermore, image noise may severely interfere with the correspondence process.
Figure 1.1. Matching feature descriptors. For each detected keypoint, a signature called a descriptor is computed. These descriptors are matched against another set of keypoint descriptors in a different image.
The correspondence task, or matching process, can be broken down into three main procedures (Figure 1.1):
• Detect and select a set of interest points, which we will call keypoints: here we use a keypoint detector. Keypoint detectors look for points in images with properties such as repeatability, which create less ambiguity and lie in discriminative regions. There is a vast literature on keypoint detectors [Harris and Stephens, 1988; Lowe., 2004; Bay et al., 2008; Rosten et al., 2010; Agrawal et al., 2008], and the development of detection algorithms is not the focus of this work;
• Compute a signature, commonly called a descriptor, for each keypoint: this step computes an identification for the keypoints detected in the previous step. Such identification is generated from a local analysis of the region around the keypoint and is represented by an n-dimensional vector. This descriptor is then used to compute the similarity distance between keypoints;
• Find the nearest neighbor in descriptor space: this step is accomplished by comparing the descriptors using some similarity distance, e.g. Euclidean distance, Manhattan distance or Hamming distance (a brute-force matching sketch is given at the end of this passage).
It is clear that even a perfect similarity distance combined with the best keypoint detector will not compensate for a descriptor with poor discriminative characteristics. Hence, devising descriptors that are able to correctly and reliably establish pairs of corresponding points is of central importance.
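As a complement to the matching step above, here is a minimal brute-force nearest-neighbour sketch for packed binary signatures under the Hamming distance; the array shapes and the function name are illustrative assumptions for this sketch, not part of the thesis.

```python
import numpy as np

def hamming_nearest_neighbors(desc_a, desc_b):
    """Brute-force matching of packed binary descriptors via the Hamming distance.

    desc_a: (n, B) uint8 array and desc_b: (m, B) uint8 array, B bytes per signature.
    Returns, for each descriptor in desc_a, the index of and distance to its closest match in desc_b.
    """
    xor = np.bitwise_xor(desc_a[:, None, :], desc_b[None, :, :])   # differing bits
    dist = np.unpackbits(xor, axis=2).sum(axis=2)                  # popcount = Hamming distance
    return dist.argmin(axis=1), dist.min(axis=1)

# Usage with two random sets of 32-byte (256-bit) signatures:
a = np.random.randint(0, 256, (500, 32), dtype=np.uint8)
b = np.random.randint(0, 256, (800, 32), dtype=np.uint8)
matches, distances = hamming_nearest_neighbors(a, b)
```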
In this thesis, we focus on the creation and analysis of feature descriptors for keypoints in color images and range images. The proposed methodology is driven by the question of how to reach the best possible descriptor. Thus, the main problem of this work can be stated as:
Problem 1.1 (Thesis Problem). How to design and build a robust descriptor for keypoints in range and visual data?
A robust feature descriptor, or any approximation of such a descriptor, shares a common set of properties. These properties yield the requirements for an ideal descriptor, which must compute strongly discriminative signatures, providing a unique identification for keypoints independent of viewpoint and illumination conditions. In this work we elected the set Π of nine properties:
• π0: Robustness to noise;
• π1: Scale invariance;
• π2: Rotation invariance;
• π3: Illumination invariance;
• π4: Robustness to textureless scenes;
• π5: Low processing time to compute;
• π6: Low processing time to compare;
• π7: Low memory consumption;
• π8: Keypoint detection independence.
The properties in the set Π have been used as a design guide and in the evaluation and development of the descriptors in this work. These properties have a strong relation with each step depicted in Figure 1.1. In particular, the property π8 is linked to two important steps of the correspondence problem: keypoint detection and keypoint description. Although this thesis addresses the keypoint description step, it is crucial for descriptor algorithms to be highly independent of keypoint detection, which avoids propagating noise from the detection step into the description and allows the use of any of the detector algorithms proposed every year. The reasons we chose the other properties will be detailed in the next section.
Figure 1.2. Examples of works using geometrical and intensity information for (a) indoor environment reconstruction [Henry et al., 2010] and (b) object detection and recognition [Lai et al., 2011a].
1.1 Motivation
The matching of descriptors is at the core of a myriad of applications in computer vision and robotics. Three-dimensional alignment (Figure 1.2 (a)), SLAM, tracking, detection and recognition of objects (Figure 1.2 (b)), and structure from motion are some of the applications that rely on feature point matching methods. Hence, developing robust, invariant and discriminative descriptors is of great importance to the success of these systems, since rotation, illumination and viewpoint are not fixed. Moreover, the data handled by these systems is noisy.
Additionally, the requirements of online applications running on limited hardware, such as mobile phones and embedded systems, are not met at an acceptable level by the state-of-the-art descriptors. This leads to the demand for descriptors that are robust, fast to compute and match, and memory efficient.
Although 3-D sensing techniques have long been available, such as techniques based on Light Detection And Ranging (LIDAR), time-of-flight (Canesta), and projected texture stereo (PR2 robot), they are still very expensive and demand a substantial engineering effort. With the recent introduction of fast and inexpensive RGB-D sensors (where RGB implies trichromatic intensity information and D stands for depth), the acquisition of synchronized intensity (color) and depth has become easier (Figure 1.3).
Figure 1.3.
Examples of recently released domestic tridimensional sensors: (a)\nMicrosoft Kinect; (b) ASUS WAVI Xtion; (c) Minoru Webcam 3D; (d) 3D LG\nOptimus cell phone.\nRGB-D systems output color images and the corresponding pixel depth infor-\nmation enabling the acquisition of both depth and visual cues in real-time. These\nsystems have opened the way to obtain 3D information with unprecedented trade-\noff of richness and cost. One such system is the Kinect [Microsoft, 2011], a low cost\ncommercially available system that produces RGB-D data in real-time for gaming\napplications.\nRobust, fast and low memory consumption descriptors that efficiently use the\navailable information, like color and depth, will play a central role in the search of\nthe optimal descriptor.\n1.2 Thesis Goals and Contributions\nIn this thesis, we present three novel descriptors, which efficiently combine inten-\nsity and shape information to substantially improve discriminative power enabling\nenhanced and faster matching. We aim to advance in the task of building robust and\nefficient descriptors suitable for online applications. Experimental results presented\nlater show that our approach is both robust and computationally efficient.\nMoreover, we tested the descriptor capabilities in indoor environment align-\n6 C HAPTER 1. I NTRODUCTION\nment and object recognition and obtained better results when comparing with three\nother descriptors well known in literature.\nThe main contributions of this thesis can be summarized by the three devel-\noped RGB-D descriptors:\n1. The Enhanced Descriptor for Visual and Depth Data (EDVD), which effi-\nciently combines visual and shape information to substantially improve dis-\ncriminative power, enabling high matching performance. Unlike most current\nmethodologies, our approach includes in its design scale and rotation trans-\nforms in both image and geometrical domains;\n2. Binary Appearance and Shape Elements (BASE) that, like EDVD, efficiently\nfuses visual and shape information to improve the discriminative power, but\nprovides faster matching and lower memory consumption;\n3. A fast, lightweight and robust feature point descriptor, called Binary Robust\nAppearance and Normal Descriptor (BRAND), which presents all the invari-\nant properties of EDVD and is as fast as BASE with the same memory con-\nsumption.\nIn summary, our main contribution is to exploit the techniques to build a ro-\nbust, fast and low memory consumption descriptor suitable for online 3D mapping\nand object recognition applications, which is able to work in modest hardware con-\nfigurations with limited memory and processor use.\nPortions of this work have been published in the following international peer\nreviewed journal, conference proceedings and workshop:\n1. Nascimento, E. R.; Oliveira, G. L.; Vieira, A. W.; Campos, M. F. M.. On The\nDevelopment of a Robust, Fast and Lightweight Keypoint Descriptor . Neurocom-\nputing;\n2. Nascimento, E. R; Schwartz W. R; Campos, M. F. M.. EDVD - Enhanced De-\nscriptor for Visual and Depth Data . IAPR International Conference on Pattern\nRecognition (ICPR), 2012, Tsukuba - Japan;\n3. Nascimento, E. R.; Oliveira, G. L.; Campos, M. F. M.; Vieira, A. W. and\nSchwartz, W. R. BRAND: A Robust Appearance and Depth Descriptor for RGB-\nD Images , in IEEE Intl. Proc. on Intelligent Robots and Systems (IROS), 2012,\nVilamoura - Algarve - Portugal;\n1.3. O RGANIZATION OF THE THESIS 7\n4. Nascimento, E. R.; Schwartz, W. R.; Oliveira, G. L.; Vieira, A. W.; Campos, M.\nF. 
M.; Mesquita, D. B.. Appearance and Geometry Fusion for Enhanced Dense 3D\nAlignment , in XXV Conference on Graphics, Patterns and Images (SIBGRAPI),\n2012, Ouro Preto - Minas Gerais - Brazil;\n5. Nascimento, E. R.; Oliveira, G. L.; Vieira, A. W.; Campos, M. F. M.. Improving\nObject Detection and Recognition for Semantic Mapping with an Extended Intensity\nand Shape based Descriptor . In: IROS 2011 workshop - Active Semantic Percep-\ntion and Object Search in the Real World (ASP-AVS-11), 2011, San Francisco -\nCalifornia - USA.\n1.3 Organization of the Thesis\nThis thesis is organized as follows. The next chapter is intended to provide the\nreader with an overview of the main and more recent techniques in the creation of\ndescriptors; it reviews methods to build image and geometrical descriptors where\nthe most relevant algorithms are discussed in detail. In Chapter 3, we describe the\nmethodology to build approximations of a robust descritor according to the prop-\nerties in Set \u0005and present an analysis of its capabilities in comparison to other de-\nscriptors in Chapter 4. Following in Chapter 5, we present the use of our descriptor\nin indoor environment reconstruction and semantic mapping applications. Chapter\n6concludes with a discussion about the limitations and contributions of this work,\nand highlights future research directions.\n\nChapter 2\nRelated Work\nIn spite of all the adversities in the correspondence of pixels in images, like noise\nand ambiguity, much progress towards estimating an aproximated correspondence\nrelationfc(defined in Chapter 1) has been made in the last decade for color images\nand range images.\nMethodologies for keypoint detectors, descriptor creation and matching have\nbeen proposed in the last decade with great success. In this chapter we review the\nrelated literature on the creation of descriptors. Due to the enormous quantity of\nwork that has been published on the subject, we will focus on those with a direct\nrelation with ours.\n2.1 Keypoint Detection\nAs seen in Figure 1.1, the first step in the matching process is keypoint detection.\nThus, we present in this section a discussion of the main concepts present in detector\nalgorithms.\nThe main goal of detectors is to assign a saliency score to each pixel of an\nimage. This score is used to select a small subset of pixels that present as properties\n[Tuytelaars and Mikolajczyk, 2008]:\n\u000fRepeatability: The selected pixels should be stable under several image pertu-\nbations;\n\u000fDistinctiveness: The neighborhood around each keypoint should have intesity\npattern with strong variantions;\n\u000fLocality: The features should be a function of local information;\n9\n10 C HAPTER 2. R ELATED WORK\nFigure 2.1. The original image L0is repeatedly subsampled and smoothed gen-\nerating a sequence of reduced resolution images L1,L2andL3in different scale\nlevels.\n\u000fAccurately localizable: The localization process should be less error-prone\nwith respect to scale and shape;\n\u000fEfficient: Low processing time.\nCorner detection was used in earlier techniques to detect keypoints [Zhang\net al., 1995; Schmid and Mohr, 1997], however, corner detection approaches have a\nlimited performance since they generally examine the image at only a single scale.\nThe more recent detector algorithms are designed to detect the same key-\npoints in different scalings of an image. 
Using the scale-space representation [Lindeberg, 1994], these algorithms are able to provide scale invariance for image features. Furthermore, the scale-space representation is useful in reducing noise.
The scale-space representation L of an image I is defined as
L(x, y, σ) = G(x, y, σ) ∗ I(x, y), (2.1)
where ∗ is the 2D convolution operator and
G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) (2.2)
is a Gaussian kernel.
Generally, detector methodologies implement the scale-space as an image pyramid. In each level of the pyramid, a smoothed and sub-sampled representation of the image is stored (see Figure 2.1). Keypoint detection is then performed by comparing the maxima and minima of a pixel's response in the scale-space function across the pyramid; a minimal sketch of such a pyramid is given below. In addition to detection, this process determines the scale of each keypoint found.
In order to provide invariance to rotation transformations, detector algorithms estimate the characteristic direction of the region around each keypoint. This direction is called the Canonical Orientation, and it is defined by the pattern of gradients in the keypoint's neighborhood.
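The following sketch illustrates the smooth-and-subsample pyramid described above, using the Gaussian smoothing of Eq. (2.1); the number of octaves and the value of sigma are illustrative assumptions rather than parameters of any particular detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, n_octaves=4, sigma=1.6):
    """Build a simple scale-space pyramid: smooth with G(x, y, sigma) as in Eq. (2.1),
    then subsample by a factor of 2 to produce the next octave."""
    levels = [gaussian_filter(image.astype(np.float32), sigma)]
    for _ in range(1, n_octaves):
        levels.append(gaussian_filter(levels[-1][::2, ::2], sigma))
    return levels  # L0 (full resolution) ... L3, as in Figure 2.1
```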
2.2 Descriptor Extraction
The approaches for assembling keypoint descriptors can be categorized based on the type of data acquired from the scene, that is, textured images or depth images.
In recent years, textured images have been the main choice. They provide a rich source of information, which naturally ushered in the use of texture-based descriptors in several methods despite their inherent complexity. The computer vision literature presents numerous works using different texture-based cues for correspondence [Lowe., 2004; Bay et al., 2008; Calonder et al., 2010; Leutenegger et al., 2011; Rublee et al., 2011]. Virtually all of these techniques are based on the analysis of the distribution of local gradients.
Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) are the most popular image descriptor algorithms. Thanks to their discriminative power and speed, they became standard for several tasks such as keypoint correspondence and object recognition. For these reasons we chose both as competitors for the methods proposed in this thesis, and we detail them in the next sections. We also present the Binary Robust Independent Elementary Features (BRIEF) descriptor, since our methodology was partially inspired by it.
2.2.1 SIFT Descriptor
Lowe, in his landmark paper [Lowe., 2004], presents the keypoint descriptor called SIFT. Although Lowe proposed SIFT for object recognition applications, due to its high discriminative power and stability the algorithm became the most used keypoint descriptor in a myriad of other tasks. The whole creation process of SIFT is illustrated in Figure 2.2.
Figure 2.2. Computing a SIFT descriptor. First, the magnitudes and orientations of local gradients are computed around the keypoint. The magnitudes are weighted by a Gaussian window (blue circle) and accumulated in 4 orientation histograms. Each histogram has 8 orientation bins corresponding to a subregion. Then, the bins of all histograms are concatenated to form the descriptor. Unlike the example in this image, the standard implementation of SIFT uses a 16×16 sample on the left and 4×4 histograms on the right. Illustration taken from [Lowe., 2004].
The first step is to compute, for each pixel around the keypoint location, the gradient magnitude and orientation. The magnitude m(x, y) of the pixel (x, y) is given by:
m(x, y) = √([I(x+1, y) − I(x−1, y)]² + [I(x, y+1) − I(x, y−1)]²), (2.3)
and its orientation θ(x, y) is estimated as:
θ(x, y) = arctan([I(x, y+1) − I(x, y−1)] / [I(x+1, y) − I(x−1, y)]), (2.4)
where I(x, y) is the intensity of pixel (x, y) of the smoothed and subsampled image at the level of the scale-space pyramid where the keypoint was detected (a minimal sketch of this gradient computation follows below).
A region of 16×16 pixels, centred at the keypoint location, is subdivided into 4×4 subregions. These 16 subregions are rotated relative to the canonical orientation computed for the keypoint. For each subregion, a histogram with 8 orientation bins is computed. The magnitude values of all gradients inside the region are weighted by a Gaussian window and accumulated into the orientation histograms.
The 8 bins of all 16 histograms are concatenated, forming the 128-vector which, after normalization, represents the SIFT descriptor. The whole procedure makes the descriptor scale and rotation invariant thanks to the scale-space and the canonical orientation, and due to the normalization the descriptor is partially robust to illumination changes.
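A minimal sketch of the per-pixel gradient magnitude and orientation of Eqs. (2.3)–(2.4), computed with central differences on a smoothed image; the function name and the use of arctan2 (to resolve the quadrant) are choices made for this illustration.

```python
import numpy as np

def gradient_magnitude_orientation(L):
    """Gradient magnitude and orientation of a smoothed image L, as in Eqs. (2.3)-(2.4)."""
    Lf = L.astype(np.float32)
    dx = np.zeros_like(Lf)
    dy = np.zeros_like(Lf)
    dx[:, 1:-1] = Lf[:, 2:] - Lf[:, :-2]   # I(x+1, y) - I(x-1, y), x along columns
    dy[1:-1, :] = Lf[2:, :] - Lf[:-2, :]   # I(x, y+1) - I(x, y-1), y along rows
    m = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx)             # quadrant-aware version of Eq. (2.4)
    return m, theta
```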
Figure 2.3. Computing a SURF descriptor. Two Haar wavelet filters (left) are used to estimate the local gradients inside an oriented quadratic grid centred at the keypoint position (middle). The wavelet responses are weighted with a Gaussian (blue circle) and, for each 2×2 sub-region (right), the sums of dx, |dx|, dy and |dy| are computed relative to the canonical orientation (red arrow). The final descriptor is composed of the 4×4 vectors v = (Σdx, Σdy, Σ|dx|, Σ|dy|) concatenated. Illustration taken from [Bay et al., 2008].
2.2.2 SURF Descriptor
Although SIFT produces discriminative descriptors, it has a high processing cost. To overcome this issue, Bay et al. [2008] proposed the faster algorithm SURF. This algorithm can be seen as an approximation of SIFT and shares the idea of using histograms based on local gradients. Despite the approximations in the descriptor creation, there is no significant loss in robustness or in rotation and scale invariance.
Like SIFT, the creation of the SURF descriptor consists of centering a square region at the keypoint location. The keypoint scale in scale-space determines the size of this region, and the canonical orientation determines its direction. However, differently from SIFT, which uses pixel differences to compute the gradients, SURF builds the local gradient distribution from Haar wavelet responses in the horizontal and vertical directions (the Haar wavelet filters used are shown in Figure 2.3).
SURF computes, in both the x and y directions, four sums: i) the sum of gradients Σdx; ii) the sum of gradients Σdy; iii) the sum of absolute gradients Σ|dx|; and iv) the sum of absolute gradients Σ|dy| (see Figure 2.3). These sums are computed for each of the 4×4 sub-regions, forming 16 vectors v = (Σdx, Σdy, Σ|dx|, Σ|dy|). The final SURF descriptor is produced by concatenating all 16 vectors, creating a 64-dimensional vector.
Figure 2.4. Patch P with 48×48 pixels indicating the 256 sampled pairs of pixel locations used to construct the binary feature.
2.2.3 BRIEF Descriptor
Despite the high discriminative power of SIFT and SURF, they suffer from slow matching and high processing time and memory consumption (vectors of 128 and 64 floats, respectively). Hence, these algorithms are not feasible in applications where it is necessary to store millions of descriptors or that have real-time constraints.
Dimensionality reduction methodologies, such as Principal Component Analysis (PCA) [Ke and Sukthankar, 2004], Linear Discriminant Embedding (LDE) [Hua et al., 2007], algorithms based on the L1-norm [Pang and Yuan, 2010; Pang et al., 2010], and quantization techniques that convert floating-point coordinates into integers coded on fewer bits are used by some approaches to address the descriptor dimensionality problem. However, those techniques involve further post-processing, usually with a high computational cost, of a long descriptor which is already costly to compute. Furthermore, PCA and LDE methodologies may lead to overfitting and reduce performance.
More recently, several compact descriptors, such as Calonder et al. [2010], Leutenegger et al. [2011], Rublee et al. [2011], Ambai and Yoshida [2011], Kembhavi et al. [2011] and Choi et al. [2012], have been proposed employing ideas similar to those used by Local Binary Patterns (LBP) [Ojala et al., 1996]. These descriptors are computed using simple intensity difference tests, which yield small memory consumption and modest processing time in both the creation and matching processes.
The use of binary strings as descriptors has shown promising results, and one successful example of this methodology is the BRIEF descriptor [Calonder et al., 2010]. As this work is inspired by BRIEF, we detail its methodology.
Assembling binary descriptors. In order to generate a string of bits, BRIEF's approach consists of computing individual bits by comparing the intensities of pairs of points in a neighborhood around each detected keypoint. Similar to the SIFT and SURF descriptors, the BRIEF methodology estimates for every keypoint a gradient field.
The pairs are selected within a patch P of size S×S, centred at the keypoint position. This patch is smoothed to reduce sensitivity and to increase stability and repeatability. Calonder et al. [2010] tested five configurations for the spatial arrangement of the patch, and the best results were reached using an isotropic Gaussian distribution (X, Y) i.i.d. ∼ N(0, S²/25). Figure 2.4 illustrates this arrangement; each pair of pixels is indicated with a line segment. This pair distribution is generated randomly; however, it is fixed for all keypoints and centred at the keypoint's location.
For all positions in the set of (x, y)-locations defined by the distribution, the following function is evaluated:
f(P; x, y) = 1 if p(x) < p(y), and 0 otherwise, (2.5)
where p(x) is the pixel intensity at position x = (u, v)^T in the patch P.
The final descriptor is encoded as a binary string computed by:
b(P) = Σ_{i=1}^{256} 2^{i−1} f(P; x_i, y_i). (2.6)
A minimal sketch of this test-and-pack scheme is given below.
One of the main disadvantages of BRIEF is the lack of invariance to scaling and rotation transforms; differently from SIFT and SURF, the BRIEF algorithm does not compute a canonical orientation. Nevertheless, according to the authors, BRIEF has been shown to be invariant to small rotations.
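The sketch referenced above illustrates the binary test of Eqs. (2.5)–(2.6) with a fixed Gaussian sampling pattern; the patch size, border handling and function names are illustrative assumptions, not the reference BRIEF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 48  # patch size, matching the 48x48 illustration in Figure 2.4 (assumption for this sketch)
# Fixed sampling pattern: 256 pixel-pair offsets drawn i.i.d. from N(0, S^2/25), shared by all keypoints
offsets = np.clip(np.round(rng.normal(0.0, S / 5.0, size=(256, 4))), -(S // 2), S // 2 - 1).astype(int)

def binary_test_descriptor(smoothed, kp):
    """Bit i is 1 iff p(x_i) < p(y_i) in the smoothed patch centred at keypoint kp (Eqs. 2.5-2.6).

    Assumes kp = (row, col) lies at least S/2 pixels away from the image border."""
    r, c = kp
    bits = np.zeros(256, dtype=np.uint8)
    for i, (du1, dv1, du2, dv2) in enumerate(offsets):
        if smoothed[r + dv1, c + du1] < smoothed[r + dv2, c + du2]:
            bits[i] = 1
    return np.packbits(bits)  # 32-byte signature, compared with the Hamming distance
```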
Also, there are cases that the\nimage orientation can be estimated using other sensors, such as a mobile phone\nequipped with Inertial Measurement Unit (IMU) and a robot that knows its attitude\n[Calonder et al., 2010].\n2.3 Geometrical Descriptor Extraction\nIn nearly all approaches mentioned in previous section, feature descriptors are esti-\nmated from images alone, and they seldomly use other information such as geom-\netry. As a consequence, common issues concerning real scenes such as variation in\nillumination and textureless objects may dramatically decrease the performance of\ntechniques that are based only on texture.\n16 C HAPTER 2. R ELATED WORK\nWith the growing availability of inexpensive, real time depth sensors, depth\nimages are becoming increasingly popular and many new geometrical-based de-\nscriptors are proposed each year. The methodologies presented by Rusu et al.\n[2008a], Rusu et al. [2008b], Rusu et al. [2008c], Rusu et al. [2008d], Rusu et al. [2009],\nSteder et al. [2010], Tombari et al. [2010] and Steder et al. [2011] are some of the most\nrecent approaches.\nAs in the case of textured images, region matching on geometrical data is most\nadvantageous. However, due to the geometrical nature of the data, effective descrip-\ntors tend to present higher complexity, and large ambiguous regions may become a\nhinderance to the correspondence process.\nGeometrical descriptors take advantage of the matrix like structures of depth\nimages which have low discriminative power and are less useful. Nevertheless,\ninformation of such descriptors is most relevant for textureless scene regions where\ntexture based descriptors are doomed to fail.\nIn order to define robust descriptors for depth data, large amounts of data\nare necessary to encompass sufficient information and to avoid ambiguities. Spin-\nImage [Johnson and Hebert, 1999] is the most popular and used algorithm of such\ndescriptors.\n2.3.1 Spin-Image\nJohnson proposed in his paper [Johnson and Hebert, 1999] to represent a surface\nwith a set of images enconding global properties. He created a view-independent\ndescriptor, where an object-oriented coordinate system is fixed on the surface and\ndoes not change when viewpoint changes.\nThe object-oriented coordinate system is defined by a three-dimensional point\npand its normal n. It is usually trivial to estimate the direction of the normal for\neach point on the surface as the vector perpendicular to the surface in that point.\nThe origin of object-oriented coordinate systems is defined by a keypoint loca-\ntionp. A tangent plane Pto the point pis oriented perpendicularly to the surface\nnormal n. Using the point pand its normal n, the algorithm defines the line L, which\ntogether with the plane Pdetermine a cylindrical coordinate system Owithout the\npolar angle coordinate, since it is not possible to determine this coordinate using\njust the point and its normal surface. Thus, in this cylindrical coordinate system,\na point xis represented using \u000band\fcoordinates, where \u000bis the (non-negative)\nperpendicular distance to line Land\fis the signed perpendicular distance to plane\nP. Figure 2.5 depicts the creation of this cylindrical coordinate system.\n2.4. F USING IMAGE AND GEOMETRICAL INFORMATION 17\nFigure 2.5. Creation of a cylindrical coordinate system based on point pand the\nnormal surface non this point. A point xcan be represented in this cylindrical\ncoordinate system using coordinates \u000band\f. 
The plane P defines the object-oriented coordinate system. Illustration taken from [Johnson and Hebert, 1999].
After the creation of the coordinate system, the next step is to generate a 2D image called a spin map. First, every point x ∈ R³ in the point cloud is projected onto the cylindrical coordinate system O using the projection function S_O : R³ → R²,
S_O(x) → (α, β) = (√(‖x − p‖² − ⟨n, x − p⟩²), ⟨n, x − p⟩), (2.7)
where ⟨·, ·⟩ is the dot product.
Then, the spin map is assembled by an accumulation scheme. Using a 2D histogram with discrete bins for the α and β values, the methodology updates these bins according to the points projected into the cylindrical coordinate system. This accumulation process is shown in Figure 2.6, and a minimal sketch of it is given below.
By using a local coordinate system attached to the object surface and oriented along the keypoint normal to build the spin maps, the Spin-Image algorithm provides robustness and invariance to rotation transformations, since the signature is independent of the viewpoint. It is well suited for depth maps and meshes in general.
Even though geometrical descriptors such as Spin-Image are accurate, they present high computational cost and memory consumption, and constructing a single descriptor for general raw point clouds or range images involves complex geometric operations.
Figure 2.6. Spin-Image creation using a 2D histogram. After the projection of a point x onto the coordinate system O, the resulting 2D point is accumulated into a discrete bin. Illustration based on [Johnson and Hebert, 1999].
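A minimal sketch of the projection and accumulation steps of Eq. (2.7); the bin size, the number of bins and the absence of bilinear weighting are simplifications made for this illustration, not the parameters of the original Spin-Image.

```python
import numpy as np

def spin_map(points, p, n, bin_size=0.01, n_bins=64):
    """Accumulate the (alpha, beta) projections of Eq. (2.7) into a 2D histogram.

    points: (N, 3) surface points; p: keypoint position; n: unit surface normal at p."""
    d = points - p
    beta = d @ n                                                        # signed distance to the tangent plane
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta ** 2, 0.0))   # distance to the line through p along n
    img = np.zeros((n_bins, n_bins))
    i = np.clip((beta / bin_size + n_bins // 2).astype(int), 0, n_bins - 1)
    j = np.clip((alpha / bin_size).astype(int), 0, n_bins - 1)
    np.add.at(img, (i, j), 1.0)                                         # simple (non-interpolated) accumulation
    return img
```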
2.4 Fusing Image and Geometrical Information

The combination of visual cues (from texture images) and geometrical shape cues (from depth information) has been adopted by some recent works as an alternative to improve object detection and recognition rates. The fusion of appearance and geometry information, which has shown itself to be a very promising approach for object recognition, is still in its early stages. For example, Lai et al. [2011a,b] and Henry et al. [2010] combine both sources of information, but they apply well-known descriptors to each type of data, such as SIFT for texture and Spin-Image for shape, and then concatenate both to form a new signature. As far as efficacy is concerned, Lai et al. [2011b] have shown that the combination of intensity and depth information outperforms approaches using either intensity or depth alone.

Most likely, the main reason many descriptors have not used shape information can be partially explained by the fact that, until recently, object geometry could not be easily or quickly obtained so as to be combined with image feature data.

In the last few years, the combination of multiple cues has become a popular approach in the design of descriptors. Zaharescu et al. [2009] proposed the MeshHOG descriptor, using texture information of 3D models as scalar functions defined over 2D manifolds. Tombari et al. [2011] presented the Color-SHOT (CSHOT) descriptor, an extension of their shape-only descriptor Signature of Histograms of Orientations (SHOT) [Tombari et al., 2010] that incorporates texture. The authors compared CSHOT against MeshHOG and reported that CSHOT outperforms MeshHOG in processing time and accuracy. In the case of global descriptors, Kanezaki et al. [2011] presented the Voxelized Shape and Color Histograms (VOSCH) descriptor, which, by combining depth and texture, was able to increase the recognition rate in cluttered scenes with occlusion. Since CSHOT is the state of the art among shape-and-texture descriptors, we choose it as the main competitor of our methodology and detail its construction process in the next section.

2.4.1 CSHOT Descriptor

CSHOT signatures are composed of two concatenated histograms, one containing the geometric features and the other encoding the texture information. An isotropic spherical grid is overlaid onto each keypoint location. This grid has 32 sectors, dividing the space into 8 azimuth divisions, 2 elevation divisions and 2 radial divisions (Figure 2.7 illustrates this spherical grid with 4 azimuth divisions).

Figure 2.7. Isotropic spherical grid used by the CSHOT descriptor. The space is partitioned into 32 sectors: 4 azimuth divisions (the standard implementation uses 8 divisions), 2 elevation divisions and 2 radial divisions. Illustration based on [Tombari et al., 2011].

For each sector, two local histograms are computed, one based on geometrical features and one on texture information. In the former, the algorithm accumulates values into histogram bins according to the geometric metric

f(K, P) = ⟨N_K, N_P⟩,    (2.8)

where K is the keypoint, P represents a generic vertex belonging to the spherical support around K, N_K and N_P are the normals of the keypoint and of the generic vertex, respectively, and ⟨·,·⟩ is the dot product.

The accumulation in the texture histograms is performed using color triplets in CIELab space, and the metric is based on the L1 norm, given by

l(R_K, R_P) = Σ_{i=1}^{3} |R_K(i) − R_P(i)|,    (2.9)

where R_K and R_P are the CIELab representations of the RGB triplets of the keypoint K and of a generic vertex P in the spherical support.

The standard implementation of CSHOT uses 11 bins for the geometrical histograms and 31 bins for the texture histograms. Since 32 histograms are computed for each cue, the resulting signature is a vector of length 1344.

In this work we take a similar approach to the problem. Our technique builds a descriptor which takes both sources of information into account at the same time, creating a unique representation of a region that simultaneously considers texture and shape.

Our method aims at fusing visual (texture) and shape (geometric) information to enrich the discriminative power of the matching process used for registration. On one hand, image texture information can usually provide a better perception of object features; on the other hand, depth information produced by 3D sensors is less sensitive to lighting conditions. Our descriptor brings forth the advantages of both texture and depth information. Moreover, it uses less memory space, since it was designed as a bit string, and less processing and matching time due to the low-cost computations needed.
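For reference, the two per-vertex measures that CSHOT accumulates (Equations 2.8 and 2.9) are straightforward to express; the sector bookkeeping, interpolation and normalization of the full descriptor are omitted in this sketch.

```python
import numpy as np

def cshot_geometric_metric(n_k, n_p):
    """Equation 2.8: dot product between the keypoint normal and a support-vertex normal."""
    return float(np.dot(n_k, n_p))

def cshot_color_metric(lab_k, lab_p):
    """Equation 2.9: L1 distance between the CIELab triplets of the keypoint and a vertex."""
    return float(np.sum(np.abs(np.asarray(lab_k, dtype=float) -
                               np.asarray(lab_p, dtype=float))))
```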
2.5 Descriptor Rating Based on the Π Set

Table 2.1 summarizes the properties of the descriptors described in this section. We give an individual rating with respect to the properties of the Π set. Recall the properties described in Chapter 1; the Π set is composed of:

1. Robustness to noise;
2. Scale invariance;
3. Rotation invariance;
4. Illumination invariance;
5. Robustness to textureless scenes;
6. Low processing time to compute;
7. Low processing time to compare;
8. Low memory consumption;
9. Keypoint detection independence.

We rate each descriptor of this chapter using the following criteria:

−: the descriptor does not implement any algorithm to cover the property;
◦: the property is covered using an approximation;
•: the descriptor has a robust implementation of the property.

Table 2.1. Descriptor rating based on the properties of a robust descriptor.

                                   SURF   SIFT   Spin-Image   CSHOT   BRIEF
Robustness to noise                  •      •        −           •      ◦
Scale invariance                     •      •        −           •      −
Rotation invariance                  •      •        •           •      −
Illumination invariance              ◦      ◦        •           •      −
Robustness to textureless scenes     −      −        •           •      −
Low time to compute                  •      −        −           −      •
Low time to compare                  ◦      ◦        −           −      •
Low memory consumption               −      −        −           −      •
Detection independence               ◦      ◦        −           −      ◦

Table 2.1 shows a clear trade-off between computational efficiency and discriminative power. This trade-off is highlighted mainly in the comparison between the fast descriptor BRIEF and the robust descriptor SIFT. The rating of keypoint detection independence is based on the results shown in Chapter 4.

Chapter 3
A Computational Approach to Creation of Keypoint Descriptors

As observed in the preceding chapter, on the one hand, the use of texture information results in highly discriminative descriptors; on the other hand, binary strings and depth information from range images can reduce the cost of the descriptor creation and matching steps and provide descriptors robust to lack of texture and to illumination changes.

Unlike traditional descriptors, such as SIFT, SURF and Spin-Image, which use only texture or geometry information, this chapter presents three novel descriptors that encode visual and shape information. However, differently from the work of Tombari et al. [Tombari et al., 2011], our algorithms provide robust, fast and lightweight signatures for keypoints. The methodology used by our descriptors exploits the best information of both worlds in an efficient and low-cost way.

All algorithms presented in this thesis receive as input a data pair (I, D), which denotes the output of an RGB-D sensor, and a list K of detected keypoints. For each pixel x, I(x) provides the intensity and D(x) the depth information. Furthermore, we estimate a surface normal for every x as a map N, where N(x) is efficiently computed by Principal Component Analysis (PCA) over the surface defined by the depth map.

3.1 General Methodology

In this section we detail the methodology used to design three new descriptors. The stages of this methodology are illustrated in Figure 3.1.

Our methodology is composed of three main steps.

Figure 3.1. Methodology diagram. After computing the scale factor s using depth information from an RGB-D image, our methodology extracts a patch of the image in the RGB domain to estimate the canonical orientation θ of the keypoint. Finally, appearance and geometric information are fused together based on the features selected with a pattern analysis.
In the first step, we compute the scale factor using the depth information from the RGB-D image. The scale factor is used in the next step (canonical orientation estimation) and in the feature analysis of the keypoint's vicinity. In the canonical orientation estimation step, a patch in the RGB domain is extracted and used to estimate the characteristic angular direction of the keypoint's neighborhood. At last, we combine both appearance and geometric information to create keypoint descriptors that are robust, fast and lightweight.

3.1.1 Scale Assignment

Due to the lack of depth information in the images, approaches such as Lowe [2004], Bay et al. [2008] and Leutenegger et al. [2011] use a scale-space representation to localize keypoints at different scales. The image is represented by a multilevel, multiscale pyramid in which, for each level, the image is smoothed and sub-sampled.

Since RGB-D images are composed of color as well as depth information, instead of computing a pyramid and representing the keypoints in scale-space, we use the depth information of each keypoint to define the scale factor s of the patch to be used in the neighborhood analysis. In this way, patches associated with keypoints farther away from the camera will have smaller sizes.

The scale factor s is computed by the function

s = max( 0.2, (3.8 − 0.4 max(d_min, d)) / 3 ),    (3.1)

which linearly scales the radius of a circular patch P from 9 to 24, and filters out depths with values less than d_min (in this work we use d_min = 2 meters).

3.1.2 Canonical Orientation Estimation

There are several algorithms to determine the canonical orientation of a keypoint. We tested three methods to be used in our descriptors: Intensity Centroid (IC), SURF-like (HAAR) and SIFT-like (BIN). Our choice was based on the stability and simplicity of these techniques, since they are robust and have small processing time.

Intensity Centroid (IC)  The canonical orientation of a keypoint K can be estimated by a fast and simple method using geometric moments. The idea is to build a vector from the keypoint's location to the centroid of the patch, defined by the moments of the region around the keypoint.

Rosin [1999] defines the moments of a patch of an image I as

m_pq = Σ_{x,y} x^p y^q I(x, y).    (3.2)

Similar to Rublee et al. [2011], we compute the moments m using only the pixels (x, y) that remain within a circular region of radius equal to the patch size.

The patch centroid C is determined by

C = ( m_10 / m_00, m_01 / m_00 ),    (3.3)

and the canonical orientation θ is given by the angle of the vector from K to C:

θ = atan2(m_01, m_10),    (3.4)

where atan2 is an implementation of the quadrant-aware version of the arctangent function.

SURF-like (HAAR)  This algorithm identifies the direction of keypoints using a process more robust to noise, based on Haar wavelet responses and a sliding orientation window. In order to estimate the canonical orientation θ, a circular neighbourhood of radius 6s, where s is the scale at which the keypoint was detected, is centered around each keypoint.

Then, for both the x and y directions the Haar wavelet responses, with the wavelet size set to 4s, are calculated. These responses are weighted with a Gaussian function centred at the keypoint.
These values are plotted in a graph with he xdirection\nresponse strength along the abscissa and the ydirection response strength along the\nordinate axis. Finally, a sliding orientation window of size \u0019=3is used to produce\n26CHAPTER 3. A C OMPUTATIONAL APPROACH TO CREATION OF KEYPOINT\nDESCRIPTORS\nFigure 3.2. Computing SURF canonical orientation \u0012. In a circular neighbour-\nhood of radius 6s, two Haar wavelets filters with size 4sare used to compute the\nresponses in xandydirection (left image). The responses are plotted in a graph\n(blue points) and summed. The largest vector (red vector) defines the canonical\norientation (right image).\nthe keypoint’s orientation. The responses in the xandyaxes are added, yielding\nan orientation vector within the window. The canonical orientation is chosen as the\nvector with the largest magnitude. Figure 3.2 depicts this procedure.\nSIFT-like (BIN) The third method is similar to the one used in the SIFT algorithm.\nA histogram with 36directions is formed by taking values within a region around\nthe keypoint’s location. An accumulation is performed adding the values of m(x;y)\ncomputed using Equation 2.3 and weighted by a Gaussian function around the key-\npoint. The orientation of each pixel \u0012(x;y)is computed by Equation 2.4. The highest\npeak of the histogram determines the canonical orientation of the local gradients.\nHowever, differently from SIFT’s algorithm which includes more keypoints when\nthere are other peaks within 80% of the highest, we pick only a single orientation.\n3.1.3 Appearance and Geometry Fusion\nThe importance of combining shape and visual information comes from the possi-\nbility of creating descriptors robust to textureless objects, lack of illumination and\nscenes with ambiguous geometry.\nOur fusion process is divided into three main steps: In the first step, to exploit\nthe appearance information, we extract the visual features based on the direction of\nthe gradient around a keypoint. The idea behind this step is similar to the one used\nby the Local Binary Patterns (LBP) [Ojala et al., 1996]. Then, we build a point cloud\nwith the depth information and extract the features based on its normal surfaces.\n3.2. EDVD D ESCRIPTOR 27\nFinally, we combine the result of this analysis in a unique vector which represents\nthe signature of the keypoint.\nIn the next sections we will detail these steps and assemble three novel descrip-\ntors using our methodology as a design guide. The texture analysis step is shared\nby all the descriptors, therefore we will present it as follows.\nAppearance Analysis The gradient directions are computed using simple inten-\nsity difference tests, which have small memory consumption and modest process-\ning time. Given an image keypoint k2K, assume a circular image patch Pof size\nW\u0002W(in this work we consider 9\u0014W\u001448) centered at k. We use a fixed pattern\nwith locations given by distribution function D(k)for sampling pixel pairs around\nthe keypoint k. 
We also smooth the patch with a Gaussian kernel with σ = 2 and a window of 9×9 pixels to decrease the sensitivity to noise and increase the stability of the pixel comparisons.

Let the fixed set of sampled pairs from P be S = {(x_i, y_i), i = 1, …, 256}. Before constructing the visual feature descriptor, the patch P is translated to the origin and then rotated and scaled by the transformation T_{θ,s}, which produces a set P, where

P = { (T_{θ,s}(x_i), T_{θ,s}(y_i)) | (x_i, y_i) ∈ S }.    (3.5)

This transformation normalizes the patch to allow comparisons between patches. Then, for each pair (x_i, y_i) ∈ P, we evaluate

τ_a(x_i, y_i) = { 1 if p(x_i) < p(y_i); 0 otherwise },    (3.6)

where the comparison term captures gradient changes in the keypoint neighborhood.

3.2 EDVD Descriptor

In this section, we present our descriptor called Enhanced Descriptor for Visual and Depth Data (EDVD). After extracting the gradient features according to our methodology, we group the results of eight tests and represent them as a floating-point number. Therefore, we can use a vector V_a with 32 elements to store the results of all 256 comparisons performed by the function τ_a (Equation 3.6).

The EDVD approach builds a rotation-invariant representation based on the direction of the normals, using an extended Gaussian image followed by the application of the Fourier transform. This process is illustrated in Figure 3.3.

Figure 3.3. The proposed descriptor combines shape and visual information based on invariant measurements in both domains.

Geometrical Feature Analysis  We use orientation histograms to capture the geometric characteristics of the patch P in the 3D domain. Since orientation histograms are approximations of Extended Gaussian Images (EGI) [Horn, 1984], they constitute a powerful representation invariant to translational shift transformations.

The first step in the creation of the orientation histograms is to represent each normal p_n(x) in spherical coordinates (φ, ω) (Figure 3.4). These angles are computed as [Hetzel et al., 2001]:

φ = arctan( n_z / n_y ),    ω = arctan( √(n_y² + n_z²) / n_x ).    (3.7)

Figure 3.4. Representation of normals in the φ and ω spherical coordinates.

Then, the coordinates φ and ω are discretized into 8 values each, and the number of normals falling inside each discretized orientation is accumulated. Figure 3.3 depicts the accumulation of normal directions on the sphere. Dark spots represent a large number of normals accumulated in that orientation.

Since rotations of the normal orientations become translations in the EGI domain, we apply the Fourier transform in the EGI domain to obtain a translation-invariant Fourier spectrum. Finally, the Fourier spectrum is linearized and converted into a 64-dimensional vector V_s. In addition to the rotation invariance, the use of spectral information emphasizes differences among different descriptors.

Fusion Process  Once the visual and geometrical features have been extracted, we concatenate the geometrical vector V_g and the appearance vector V_a, creating a 96-dimensional vector which captures both appearance and geometrical information.

Despite the high quality of matching and the invariance to rotation transforms, the EDVD algorithm has drawbacks in the processing time needed to create the EGI histograms and in the vector size. Furthermore, EDVD vectors are compared using a correlation function, which is slower than other approaches such as the Hamming distance.
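The geometric half of EDVD can be sketched as follows. The bin ranges are illustrative choices, and arctan2 is used instead of the plain arctangent of Equation 3.7 to keep the angles quadrant-aware; this is a design choice of the sketch, not of the thesis.

```python
import numpy as np

def edvd_geometric_vector(normals):
    """Sketch of the geometric part of EDVD (Section 3.2).

    `normals` is an (N, 3) array of unit surface normals taken from the patch.
    The angles follow Equation 3.7, the 8x8 discretisation approximates the
    Extended Gaussian Image, and the Fourier magnitude removes the effect of
    shifts of the EGI (i.e. rotations of the normals).
    """
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    phi = np.arctan2(nz, ny)                       # quadrant-aware form of Equation 3.7
    omega = np.arctan2(np.sqrt(ny**2 + nz**2), nx)

    # Orientation histogram: an 8x8 approximation of the EGI.
    egi, _, _ = np.histogram2d(phi, omega, bins=8,
                               range=[[-np.pi, np.pi], [0.0, np.pi]])

    # The magnitude of the Fourier spectrum is invariant to shifts of the EGI.
    spectrum = np.abs(np.fft.fft2(egi))
    return spectrum.ravel()                        # 64-dimensional vector V_s
```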
3.3 BRAND Descriptor

In the following paragraphs we detail the design of our second descriptor, which we call BRAND, from Binary Robust Appearance and Normal Descriptor. Throughout this section we show the descriptor's characteristics, how they cover all the properties of the Π set, and how BRAND overcomes the EDVD deficiencies.

Geometrical Feature Analysis and Fusion of Information  There are several choices available to compose a descriptor, and bit strings are among the best approaches, mainly due to the reduction in dimensionality and the efficiency in computation achieved with their use. One of the greatest advantages of using a binary string as a descriptor, besides its simplicity, is its low computational cost and memory consumption, since each descriptor comparison can be performed using a small number of instructions on modern processors. For instance, modern architectures have a single instruction (POPCNT) to count the number of set bits in a bit vector [Intel, 2007].

Although our descriptor encodes point information as a binary string, similar to the approaches described in [Calonder et al., 2010; Leutenegger et al., 2011; Rublee et al., 2011; Ambai and Yoshida, 2011], we embed geometric cues into our descriptor to increase robustness to changes in illumination and to the lack of texture in scenes.

Figure 3.5. Binary descriptor diagram. The patch of size S×S is centered at the location of the keypoint. For all positions in a set of (x, y)-locations, the intensity changes in the image and the displacement of the normals inside the projected patch in the point cloud are evaluated.

Following the steps of our methodology, unlike EDVD, which builds an EGI histogram and uses a concatenation operator to form the final vector, BRAND evaluates the function (3.8) for each pair (x_i, y_i) ∈ P:

f(x_i, y_i) = { 1 if τ_a(x_i, y_i) ∨ τ_g(x_i, y_i); 0 otherwise },    (3.8)

where the function τ_a(·) (Equation 3.6) captures the characteristic gradient changes in the keypoint neighborhood and the function τ_g(·) evaluates the geometric pattern on its surface. Figure 3.5 illustrates the construction process of the bit string.

The analysis of the geometric pattern using τ_g(·) is based on two invariant geometric measurements: i) the normal displacement (Figure 3.6 illustrates two possible cases of normal displacement for a pair (x, y)) and ii) the surface's convexity. While the normal displacement test checks whether the dot product between the normals p_n(x_i) and p_n(y_i) is smaller than a displacement threshold ρ, the convexity test is accomplished through the sign of the local curvature, κ, estimated as

κ(x_i, y_i) = ⟨ p_s(x_i) − p_s(y_i), p_n(x_i) − p_n(y_i) ⟩,    (3.9)

where ⟨·,·⟩ is the dot product and p_s(x) is the 3D spatial point associated with the pixel x and the depth D(x).
Figure 3.7 illustrates an example where the dot product between surface normals is ambiguous, since θ₁ = θ₂, but the differently signed curvatures, κ₁ < 0 and κ₂ > 0, are used to unambiguously characterize these different shapes, besides capturing convexity as an additional geometric feature.

The final geometric test is given by

τ_g(x_i, y_i) = ( ⟨p_n(x_i), p_n(y_i)⟩ < ρ ) ∧ ( κ(x_i, y_i) < 0 ).    (3.10)

Figure 3.6. Image (a) shows a surface where the normal displacement of the points x′ and y′ is greater than 90 degrees, leading to a bit value of 1. Image (b) shows the normals of the points x and y that lead to bit 0, due to a displacement of less than 90 degrees.

Figure 3.7. Example of ambiguity in the dot product. Despite the fact that the points p_s(x) and p_s(y) define a concave surface patch and p_s(y) and p_s(z) define a convex surface patch, the dot products ⟨p_n(x), p_n(y)⟩ = ⟨p_n(y), p_n(z)⟩. In such cases, the curvature signs κ₁ < 0 and κ₂ > 0 are used to unambiguously characterize the patch shape.

Finally, the descriptor extracted from a patch p associated with a keypoint k is encoded as a binary string computed by

b(k) = Σ_{i=1}^{256} 2^{i−1} f(x_i, y_i).    (3.11)

Once the descriptors b(k₁) and b(k₂) have been estimated for two keypoints k₁ and k₂, they are compared using the Hamming distance

h(b(k₁), b(k₂)) = Σ_{i=1}^{256} ( 2^{−(i−1)} ( b(k₁) ⊕ b(k₂) ) ) ∧ 1.    (3.12)

3.4 BASE Descriptor

Not all applications require scale and rotation invariance. For these applications our BRAND descriptor can turn off the invariance properties by removing the orientation and scale transformation estimation phases. The new simplified descriptor, called Binary Appearance and Shape Elements (BASE), uses a circular patch with a fixed radius of 24 to select pairs of pixels and normals in the point cloud. In contrast to BRAND and EDVD, BASE does not compute the canonical orientation. Figure 3.8 shows the BASE diagram. Similar to BRAND, the gradient information and the geometrical features (based on the normal displacements) are combined using the function (3.8).

Figure 3.8. BASE diagram. The appearance and geometric information are fused based on the features selected with a pattern analysis.

One of the benefits of this version is that it requires modest computational cost, since the steps to compute the canonical orientation and the keypoint scale are not performed. In spite of the simplicity of this descriptor, our experiments have shown robustness against small rotation and scale changes.
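A minimal sketch of the BRAND/BASE bit-string construction and comparison (Equations 3.6 and 3.8–3.12) is given below. The data structures are simplified: descriptors are plain Python integers, `intensity`, `normals` and `points` are assumed to map a pixel location to the smoothed intensity, the unit surface normal and the 3D point, and the sampled pairs are assumed to be already transformed by T_{θ,s}. Converting the angular displacement threshold into a dot-product bound via the cosine is our interpretation of Equation 3.10.

```python
import numpy as np

def brand_bitstring(intensity, normals, points, pairs, rho_deg=15.0):
    """Build a 256-bit BRAND-style descriptor from 256 sampled pixel pairs."""
    rho = np.cos(np.deg2rad(rho_deg))   # dot-product bound for the displacement test
    bits = 0
    for i, (x, y) in enumerate(pairs):
        tau_a = intensity[x] < intensity[y]                               # Equation 3.6
        kappa = np.dot(points[x] - points[y], normals[x] - normals[y])    # Equation 3.9
        tau_g = (np.dot(normals[x], normals[y]) < rho) and (kappa < 0)    # Equation 3.10
        if tau_a or tau_g:                                                # Equation 3.8
            bits |= 1 << i                                                # Equation 3.11
    return bits

def hamming(b1, b2):
    """Hamming distance between two bit-string descriptors (Equation 3.12)."""
    return bin(b1 ^ b2).count("1")
```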
3.5 Invariant Measurements of BRAND and BASE

An important characteristic of the approach that we adopted to use geometry from RGB-D images is the relation between the normals' displacement and the transformations of rotation, scale and translation.

To prove the invariance properties of our approach, we first present some important definitions of invariant measurements in geometry [Andrade and Lewiner, 2011].

Let S be a geometric object and A a transformation.

Definition 3.1 (Invariant Measurement). A geometric measurement m is invariant if ∀S, ∀A: m(A(S)) = m(S), e.g. surface curvature.

Definition 3.2 (Covariant Measurement). A geometric measurement m is covariant if ∀S, ∀A: m(A(S)) = A(m(S)), e.g. a tangent vector.

Definition 3.3 (Contravariant Measurement). A geometric measurement m is contravariant if ∀S, ∀A: m(A(S)) = A⁻¹(m(S)), e.g. a normal vector.

Lemma 3.1. Orthogonal transformations preserve the dot product.

Proof. Let A be an orthogonal transformation and x, y ∈ Rⁿ:

⟨Ax, Ay⟩ = (Ax)ᵀ(Ay) = (xᵀAᵀ)(Ay) = xᵀ(AᵀA)y = xᵀI y = xᵀy.

Lemma 3.2. The length of a vector is preserved under orthogonal transformations.

Proof. Let A be an orthogonal transformation and x ∈ Rⁿ:

‖Ax‖² = (Ax)ᵀ(Ax) = xᵀAᵀAx = xᵀI x = xᵀx = ‖x‖².

Lemma 3.3. The angle between two vectors is preserved under orthogonal transformations.

Proof. Let α be the angle between the vectors x and y, and let β be the angle between the transformed vectors Ax and Ay. According to Lemma 3.1, ⟨Ax, Ay⟩ = ⟨x, y⟩, thus

‖Ax‖ ‖Ay‖ cos(β) = ‖x‖ ‖y‖ cos(α).

Also, according to Lemma 3.2, ‖Ax‖ = ‖x‖ and ‖Ay‖ = ‖y‖; consequently cos(β) = cos(α). Let V be the plane spanned by x and y, and let φ be the angle of a rotation of the vector x in the plane V. For all φ, since cos(β) = cos(α), we have cos(β + φ) = cos(α + φ). Differentiating with respect to φ, we obtain

−sin(β + φ) = −sin(α + φ);

for φ = 0 this gives sin(β) = sin(α), which together with cos(β) = cos(α) implies β = α.

Theorem 3.1. The BRAND and BASE measurement m_b is invariant under rigid transformations in the depth space.

Proof. The group of transformations considered is composed of rotation, translation and uniform scaling. We show that the BRAND and BASE measurement is invariant to all of these transformations.

• Rotation: Let m_b be the geometric measurement used in the BRAND descriptor and let A be a rotation matrix. We show that m_b(x, y) = m_b(Ax, Ay), where x, y ∈ R³. We have m_b(x, y) = ⟨x, y⟩ and, according to Lemma 3.1, orthogonal matrices preserve the dot product; since every rotation matrix is orthogonal, ⟨Ax, Ay⟩ = ⟨x, y⟩.

• Translation: Let x be a normal vector of a surface S, and let p, q ∈ S be two points that define the normal x, x = p − q. Applying a translation A by a vector t to the surface S, p and q can be rewritten as

p′ = p + t,    q′ = q + t.

The normal x′ after applying A is

x′ = p′ − q′ = (p + t) − (q + t) = p − q = x.

• Scale: Finally, to provide invariance to scale transformations, all normals used by BRAND are normalized. Indeed, if A is a uniform scale transformation, A(x) = sx, and therefore

sx / ‖sx‖ = sx / (s · 1) = x,

since x is a unit vector.

Theorem 3.1 shows that our approach provides a way to extract features from an object's geometry that do not suffer interference from rotation, scale and translation transformations.
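As a sanity check, the invariance stated by Lemma 3.1 and Theorem 3.1 can also be verified numerically; the snippet below is only an illustration with an arbitrary rotation about the z-axis and arbitrary normals, not part of the descriptor pipeline.

```python
import numpy as np

# The dot product between two unit normals -- the BRAND/BASE geometric
# measurement -- does not change when both normals undergo the same rotation.
rng = np.random.default_rng(42)
n1, n2 = rng.normal(size=3), rng.normal(size=3)
n1, n2 = n1 / np.linalg.norm(n1), n2 / np.linalg.norm(n2)

angle = rng.uniform(0.0, 2.0 * np.pi)
c, s = np.cos(angle), np.sin(angle)
A = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])   # rotation about the z-axis (orthogonal matrix)

assert np.isclose(np.dot(A @ n1, A @ n2), np.dot(n1, n2))      # Lemma 3.1
assert np.isclose(np.linalg.norm(A @ n1), np.linalg.norm(n1))  # Lemma 3.2
```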
3.6 Rating EDVD, BRAND and BASE Based on the Π Set

Table 3.1 shows the classification of the EDVD, BRAND and BASE descriptors according to the properties of the Π set. Note that all the properties are covered by BRAND, and BASE presents a clear improvement over the BRIEF approach for textureless scenarios. For the detection independence property, the descriptors were rated according to the results presented in Chapter 4.

Table 3.1. Properties of the descriptors EDVD, BRAND and BASE.

                          SURF   SIFT   Spin-Image   CSHOT   EDVD   BRAND   BASE
Robustness to noise         •      •        −           •      •       •      •
Scale invariance            •      •        −           •      •       •      −
Rotation invariance         •      •        •           •      •       •      −
Illumination invariance     ◦      ◦        •           •      •       •      •
Texture independence        −      −        •           •      •       •      •
Low time to compute         •      −        −           −      ◦       •      •
Low time to compare         ◦      ◦        −           −      −       •      •
Low memory consumption      −      −        −           −      −       •      •
Detection independence      ◦      ◦        −           −      ◦       •      •

Chapter 4
Experiments

In this chapter we describe a set of experiments to analyze the behavior of our descriptors in matching tasks. Comparisons are performed with the standard two-dimensional image descriptors, SIFT [Lowe, 2004] and SURF [Bay et al., 2008], with the geometric descriptor Spin-Image [Johnson and Hebert, 1999], and with CSHOT [Tombari et al., 2011], the state-of-the-art approach in fusing texture and shape information.

For the experiments we use the dataset presented in [Sturm et al., 2011]. This dataset is publicly available¹ and contains several real-world sequences of RGB-D data captured with a Kinect. Images were acquired at a frame rate of 30 Hz and a resolution of 640×480 pixels. Each sequence in the dataset provides the ground truth for the camera pose, estimated by a motion capture system. We selected four sequences from the dataset to use in our experiments:

• freiburg2_xyz: Kinect sequentially moved along the x/y/z axes;
• freiburg2_rpy: Kinect sequentially rotated around the three axes (roll, pitch and yaw rotations);
• freiburg2_desk: a handheld SLAM sequence with a Kinect;
• freiburg2_pioneer_slam2: a SLAM sequence with a Kinect mounted on top of a Pioneer mobile robot.

Figure 4.1 shows a frame sample from each sequence.

¹ https://cvpr.in.tum.de/data/datasets/rgbd-dataset

Figure 4.1. Frame samples from (a) freiburg2_xyz, (b) freiburg2_rpy, (c) freiburg2_desk and (d) freiburg2_pioneer_slam2.

To evaluate the performance of our descriptors and to compare them with other approaches, we applied the same criterion used by Ke and Sukthankar [2004] and Mikolajczyk and Schmid [2005].

Using a brute-force algorithm, we matched all pairs of keypoints from two different images. If the Euclidean (for SURF and SIFT), correlation (for Spin-Image and EDVD), cosine (for CSHOT) or Hamming (for BRAND and BASE) distance computed between the descriptors dropped below a threshold t, the pair was considered a valid match. The number of valid matches in which the two keypoints correspond to the same physical location (as determined by the ground truth) defines the number of true positives. On the other hand, if the keypoints in a valid match come from different physical locations, we increment the number of false positives. From these values, we compute the recall and 1−precision.

The recall values were determined by

recall = #true positives / #total positives,

where the total number of positives is given by the dataset. The 1−precision values were computed as

1−precision = #false positives / (#true positives + #false positives)

when the number of valid matches (true positives + false positives) is greater than zero; otherwise we assign zero to 1−precision. Using this information, we plotted the recall versus 1−precision values obtained by varying the value of t.

We also use the Area Under the Curve (AUC) of the recall vs. 1−precision curves in the parameter settings analysis, where it shows our design decisions more clearly. For a fair comparison, the AUC values were computed for curves with their intervals extrapolated using the point with the highest recall value. Furthermore, the AUC measure was computed only for the well-behaved curves, e.g. the red and blue curves shown in Figure 4.4 (b). For curves like the green one in Figure 4.4 (b), which are clearly worse than the others and misbehave, we did not compute AUC values.
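A minimal sketch of the threshold sweep described above is given below. The set of candidate pairs, their descriptor distances and the ground-truth flags are assumed to be available; the total number of positives, which in our experiments is given by the dataset, is approximated here by the number of correct candidate pairs to keep the sketch self-contained.

```python
import numpy as np

def recall_one_minus_precision(distances, is_correct, thresholds):
    """Sweep the match threshold t and return (1-precision, recall) pairs.

    `distances[i]` is the descriptor distance of the i-th candidate pair and
    `is_correct[i]` is True when the pair links the same physical location.
    """
    distances = np.asarray(distances, dtype=float)
    is_correct = np.asarray(is_correct, dtype=bool)
    total_positives = int(is_correct.sum())   # assumption: stands in for the dataset count

    curve = []
    for t in thresholds:
        valid = distances < t                       # valid matches at threshold t
        tp = int(np.sum(valid & is_correct))        # true positives
        fp = int(np.sum(valid & ~is_correct))       # false positives
        recall = tp / total_positives if total_positives else 0.0
        one_minus_precision = fp / (tp + fp) if (tp + fp) else 0.0
        curve.append((one_minus_precision, recall))
    return curve
```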
In the match experiments, for each sequence, given an RGB-D frame i, we computed a set of keypoints K_i using the STAR detector². Using the ground-truth camera trajectory provided by the dataset, we transformed all keypoints k ∈ K_i to frame i + Δ, creating the second set K_{i+Δ}. We computed a descriptor for each keypoint in both sets and then performed the matching.

In the following sections, we evaluate and compare the computation time, memory consumption and accuracy of EDVD, BRAND and BASE against the other descriptors.

² The STAR detector is an implementation of the Center Surrounded Extrema [Agrawal et al., 2008] in OpenCV 2.3.

4.1 Parameter Settings

In this section we analyze the best parameter values to be used by our three descriptors. All of the following experiments were performed using the freiburg2_xyz sequence from the RGB-D SLAM dataset.

Pairs distribution inside the patch  Our algorithms perform an analysis of the neighborhood around the keypoint (in the image and depth domains). This analysis is based on a set of pixels selected by a distribution function D. We tested three different distributions, whose patterns are illustrated in Figure 4.2.

Figure 4.2. (a) Uniform distribution; (b) isotropic Gaussian distribution and (c) learned distribution.

Assuming the origin of the patch coordinate system is located at the keypoint, we selected 256 pairs of pixels using the following distributions:

• a uniform distribution U(−24, 24);
• an isotropic Gaussian distribution N(0, 24²/25);
• a distribution created by Rublee et al. [2011].

The latter distribution was built using a learning method to reduce the correlation among pairs of pixels.
Also, all pixels (xi;yi)outside the circle with radius\nr= 24 are removed in the uniform and Gaussian distributions to guarantee that all\npixels within the circle are preserved independently of patch rotation.\n4.1. P ARAMETER SETTINGS 41\nAccurate Fast\nTime in seconds 14:85 0:11\nTable 4.1. Average processing time (over 300 point clouds) to compute normal\nsurfaces from point cloud with 640\u0002480points.\nCanonical Orientation We tested three algorithms to compute the canonical ori-\nentation and have provided a more detailed description of these in Chapter 3. The\nfirst of these algorithms, called Intensity Centroid (IC) [Rosin, 1999], computes the\n\u0012orientation using the orientation of a vector defined by the patch’s center and its\ncentroid. The second algorithm, which we call HAAR, is based on the fast estima-\ntor presented in [Bay et al., 2008]. The orientation assignment for each keypoint is\nachieved by computing the Haar wavelet responses in both the xandydirections.\nThe third algorithm, which we call BIN, is a modified version of the SIFT algorithm.\nIt creates a histogram of gradient directions, but unlike SIFT, it chooses only the\nmaximum bin as the canonical orientation.\nNormal Surface Estimation All of the geometric descriptors used for compari-\nson in the experiments require that the point clouds have normals. There are sev-\neral methods to estimate these normals from a point cloud [Klasing et al., 2009].\nOne accurate approach consists of estimating the surface normal by PCA from a\ncovariance matrix created using the nearest neighbors of the keypoint [Berkmann\nand Caelli, 1994]. This was the method used to estimate normals in all of the match\nexperiments.\nA less accurate, but faster approach, is to use the pixel neighborhoods defined\nby the structure from RGB-D images [Holz et al., 2011]. Using two vectors, e.g.the\nleft and right neighboring pixel and upper and lower neighboring pixel, the algo-\nrithm computes the cross product to estimate the normal surface. Table 4.1 shows\nthe processing time to compute all normal surfaces from a typical point cloud with\n640\u0002480points. The less accurate approach is more 100times faster than accurate\none.\n4.1.1 EDVD descriptors\nExperimentally, we found that the combination of a Gaussian distribution and the\nHAAR algorithm is the best configuration for EDVD. We can readily see in Figure\n4.3 that the HAAR algorithm provides a more stable invariance to rotation and the\nhighest AUC values when combined with the Gaussian distribution. Therefore, we\n42 C HAPTER 4. E XPERIMENTS\nchose to use the HAAR algorithm in the canonical orientation and a Gaussian dis-\ntribution to select the pairs of pixels.\nAdditionally, we carried out several experiments to verify the influence of the\nnormal surface algorithm in the accuracy of EDVD. Figure 4.4 (a) shows that EDVD\nprovides the same accuracy independent of the algoritm used. In Figure 4.4 (b)\nwe can also see that, after fusing texture and geometrical information, the accuracy\nincreases.\nBIN HAAR IC00.10.20.30.40.50.60.7\nCanonical Orientation EstimatorArea Under Curve (AUC)\n \n0.560.550.530.55 0.55\n0.520.51 0.510.49UNIFORM GAUSSIAN LEARNED\n 20 40 60 80 100\n30\n21060\n24090\n270120\n300150\n330180 0\nDegreeInliers (%)\n HAAR BIN IC\n(a) (b)\nFigure 4.3. Parameter analysis of EDVD descriptor. 
(a) The match performance\nusing 9combinations of 3distributions and 3algorithms to estimate the canon-\nical orientation; (b) Invariance to orientation with the 3algorithms used to esti-\nmate the canonical orientation (using Gaussian distribution).\n0 0.1 0.2 0.3 0.4 0.500.10.20.30.40.50.6\n1−PrecisionRecall\n \nACCURATE\nFAST\n00.10.20.30.40.50.60.70.80.9100.10.20.30.40.50.6\n1−PrecisionRecall\n \nFUSION\nIntensity Only\nGeometric Only\n(a) (b)\nFigure 4.4. (a) Accurate versus Fast normal estimation; (b) Combining texture\nand geometrical information to increase the accuracy.\n4.1. P ARAMETER SETTINGS 43\n0 0.1 0.2 0.3 0.4 0.500.10.20.30.40.50.60.7\n1−PrecisionRecall\n \nρ = 0o\nρ = 15o\nρ = 30o\nρ = 45o\nρ = 60o\nρ = 75o\nρ = 90o\nρ = 105o\n0 0.1 0.2 0.3 0.4 0.500.10.20.30.40.50.60.7\n1−PrecisionRecall\n \nBRAND − 16 bytes\nBRAND − 32 bytes\nBRAND − 64 bytes\n(a) (b)\nFigure 4.5. (a) Angular threshold for the dot product test. On average, the best\nchoice is to use 15degrees. (b) Different sizes for the BRAND descriptor.\n4.1.2 BRAND and BASE descriptors\nFor BRAND and BASE, we tested different configurations of the angular displace-\nment threshold \u001aas well as the size and the best binary operator to be used in the\ninformation fusion step. Since BASE is a special case of BRAND, we perfomed all\ntest in this section using BRAND only.\nExperimentally, we found that a threshold \u001a, corresponding to 15degrees for\nthe maximum angular displacement of normals results in a larger number of inliers\n(Figure 4.5 (a)). The plot shown in Figure 4.5 (b) depicts the accuracy versus the\nnumber of bytes used for the BRAND descriptor. Moreover, the results show that\nthe accuracy for 32bytes is similar to the accuracy for 64bytes. Therefore, in order\nto obtain a more compact representation, we have chosen to use 32bytes in the\nexperiments.\nFigure 4.6 shows the matching accuracy and the time spent by BRAND using\nboth normal estimation techniques. We can see that, even with a less precise normal\nestimation, BRAND presents high accuracy in the correspondences. This shows that\nBRAND can be optimized if necessary for a given application without significantly\npenalizing its accuracy.\nBinary Operator We chose to use a bit operator to combine appearance with ge-\nometry in order to maintain the simplicity and computational efficiency of the de-\nscriptor. To fuse the required information, we evaluated different operators such\nasXOR ,AND , andOR, and the best result was obtained using the ORoperator\n(Figure 4.7).\n44 C HAPTER 4. E XPERIMENTS\n0 0.1 0.2 0.3 0.4 0.500.10.20.30.40.50.60.7\n1−PrecisionRecall\n \nACCURATE\nFAST\nFigure 4.6. Accurate versus Fast normal estimation. Even with the less pre-\ncise normal estimation, BRAND still had high accuracy in keypoint correspon-\ndences.\nWe also performed experiments with larger signatures to separately handle\nintensity and normal by concatenating them in order to avoid ambiguity. It can\nsee clearly in Figure 4.7 (a), that fusing both texture and geometrical information\nprovides a signature with better discriminative power than concatenating these fea-\ntures. The use of information from two different domains has the disadvantage of\nbeing exposed to two different sources of noise. 
However, using a binary operator\nrather than concatenation, our descriptors are able to balance noise in one domain\nusing other kinds of information.\n0 0.2 0.4 0.6 0.8 100.10.20.30.40.50.60.7\n1−precisionrecall\n \nBRAND\nTEXTURE ONLY\nGEOMETRY ONLY\nCONCATENATION\n0 0.2 0.4 0.6 0.8 100.10.20.30.40.50.60.7\n1−PrecisionRecall\n \nOR OPERATOR\nAND OPERATOR\nXOR OPERATOR\n(a) (b)\nFigure 4.7. (a) Accuracy with ORoperator, only intensity, only geometrical in-\nformation and concatenating intensity and geometrical features; (b) The best\nbinary operator to be used for fusing appearance and geometric was the OR\noperator.\n4.1. P ARAMETER SETTINGS 45\n1 2 3 4 5 6 7 8 9\nD1Normal 000111111\nIntensity 111000111\nD2Normal 011011011\nIntensity 101101101\nTable 4.2. This table shows all 9cases that can produce D1= 1andD2= 1. For\nall theses cases only 2can be ambiguous (columns 2and 4in bold). Changes in\nnormal or intensity are represented with bit equal to 1.\nBinary Operator versus Concatenation One of the problems with using binary\noperators to define bits of the descriptors is its ambiguity. We do not know if a bit\nwas set to 1due to a variation in the normal or intensity.\nLetD1andD2be two descriptors each of one bit size and the operator OR. For\nthese two descriptors there are four possible cases, outlined as follows:\n\u000fD1= 0,D2= 0: There is no normal variation in the surface or intensity varia-\ntion in the image determines a bit equals to zero;\n\u000fD1= 0,D2= 1: There is no normal or intensity variation in surface reported\nby descriptor D1, but some variation was detected by D2(normal or intensity);\n\u000fD1= 1,D2= 0: Similar case as the previous except with variation detected by\nD1;\n\u000fD1= 1,D2= 1: Both descriptors reported some variation.\nIn the latter case, the source of variation can be different and thus the descrip-\ntors should be different. This is the case that may have ambiguity. Table 4.2 shows\nall9cases that can produce D1= 1andD2= 1and2of them set the bit to 1when\nthey should be set to 0. These cases are shown in bold in the Table 4.2. This occurs\nwhen there are changes in normal direction but none in intensity of the surface that\ngenerated descriptor D1and the surface that produce D2does not have variation\nin the normal directions but has changes in intensity. Thus, the probability of com-\nparing ambiguous bits is1\n4\u00022\n9= 5:6%. In practice, the ambiguity is smaller. We\ncomputed for 420keypoints in 300pairs of images the number of ambiguities. We\nhave found the rate to be equal to 0:7%.\nThe probability of ambiguity with the XOR operator is higher than with OR.\nWhen using the XOR operator,D1andD2will be set to 1in4cases and two of\nthem will produce ambiguity: i) There is a variation only in intensity for D1, and for\n46 C HAPTER 4. E XPERIMENTS\nBIN HAAR IC00.10.20.30.40.50.60.7\nCanonical Orientation EstimatorArea Under Curve (AUC)\n \n0.600.620.64\n0.600.620.63\n0.580.590.60LEARNED GAUSSIAN UNIFORM\n 20 40 60 80 100\n30\n21060\n24090\n270120\n300150\n330180 0\nDegreeInliers (%)\n HAAR BIN IC\n(a) (b)\nFigure 4.8. (a) The match performance using 9combinations of 3distributions\nand 3algorithms to estimate the canonical orientation; (b) Invariance to orienta-\ntion with the 3algorithms used to estimate the canonical orientation (using the\nuniform distribution).\nD2there is a variation only in normal displacement, or ii) D1has variation only in\nnormal and D2has variation only in intensity. 
Thus, the probability of ambiguity is\n1\n4\u00022\n4= 12:5%. Although the probability of the AND operator generating ambiguity\nis null, the use of the AND operator is too restrictive since it requires a detection of\nvariation in normal and intensity to set a bit. Noise in either image or in depth map\ncan produce different descriptors for the same surface.\nFinally, Figure 4.8 shows that the combination which provides the largest AUC\nresult and a more stable invariance to rotation is that using the HAAR algorithm\nwith a uniform distribution.\nAnalysis of Correlation and Variance In this section we analyze the discrimina-\ntive power of BRAND and BASE. We also evaluate the variance of each bit in the\ndescriptor vector and examine their correlation.\nTo evaluate the discriminative power, we computed the bit variance and tested\nthe correlation between each pair of points in the distribution patch. The reason for\nthese experiments is the fact that when bits have high variance, there are different\nresponses to inputs, which leads to more discriminative descriptors. Additionally, a\nset of uncorrelated pairs is also desirable, since each pair being tested will contribute\nto the final result.\nFigure 4.9 shows the distribution of the averages for a descriptor with 256bits\nover 50k keypoints as computed by BRIEF, BASE and BRAND. Note that each bit\nfeature of the BRIEF descriptor has a large variance and a mean close to 0:5. This is\n4.1. P ARAMETER SETTINGS 47\n0.45 0.5 0.55 0.6 0.65 0.7 0.75050100150200250\nBit Response MeanNumber of Bits\n \nBRIEF\nBASE\nBRAND\nFigure 4.9. Histogram of the descriptor bit mean values for BRIEF, BASE and\nBRAND over 50k keypoints.\n0 5 10 15 20 25 30012345678910\nComponentEigenvalues\n \nBRIEF\nBASE\nBRAND\nFigure 4.10. PCA decomposition over 50k keypoints of BRIEF, BASE and\nBRAND.\nthe best case. Although BASE and BRAND do not show the same spread of means,\nthey do not present a uniform distribution pattern, which is the worst pattern in\nregard to variance measure.\n48 C HAPTER 4. E XPERIMENTS\nTo estimate the correlation among test pairs, we used PCA on the data and\nselected the highest 30eigenvalues. In Figure 4.10 we can see these values. In spite\nof the fact that BRIEF exhibits larger variance, it also has large initial eigenvalues,\nwhich indicates correlation among the pairs. In this test, our descriptors present less\ncorrelation among the bits, and BRAND is more discriminative than BASE, given\nthat BASE has smaller eigenvalues.\n4.2 Matching Performance Evaluation\n00.10.20.30.40.50.60.70.80.9100.10.20.30.40.50.60.7\n1−PrecisionRecall\n00.10.20.30.40.50.60.70.80.9100.10.20.30.40.50.60.7\n1−PrecisionRecall\n(a) (b)\n00.10.20.30.40.50.60.70.80.9100.10.20.30.40.50.60.7\n1−PrecisionRecall\n00.10.20.30.40.50.60.70.80.9100.020.040.060.080.10.120.14\n1−PrecisionRecall\n(c) (d)\nFigure 4.11. Precision-Recall curves for (a) freiburg2_xyz, (b) freiburg2_rpy, (c)\nfreiburg2_desk and (d) freiburg2_pioneer_slam2. The keypoints were detected\nusing the STAR detector. Our descriptors outperform all other approaches, in-\ncluding CSHOT, which like EDVD, BRAND and BASE, combines texture and\ngeometric information. Among our descriptors, BRAND stands out as the best.\nFigure 4.11 shows the results of the threshold-based similarity matching tests.\nAs illustrated in the precision-recall curves, the BRAND, BASE and EDVD descrip-\n4.3. R OTATION INVARIANCE AND ROBUSTNESS TO NOISE EXPERIMENTS 49\nFigure 4.12. 
Three-dimensional matching example for two scenes using BRAND\ndescriptor. Mismatches are shown with red lines and correct matches with green\nlines.\ntors demonstrated a significantly better performance than other approaches for\neach sequence. Even for the two more challenging sequences, freiburg2_desk and\nfreiburg2_pioneer_slam2 , which contain high speed camera motion, and in the case of\nthefreiburg2_pioneer_slam2 sequence, data that was acquired with a robot manually\ncontrolled by joystick along a long textureless hall.\nAmong the descriptors developed in this thesis, on the average, BRAND pre-\nsented the best results of accuracy followed by the BASE descriptor.\n4.3 Rotation Invariance and Robustness to Noise\nExperiments\nEach of the descriptor’s invariance to rotation was also evaluated as well as their\nrobustness to noise. For these tests we used synthetic in-plane rotation and added\nGaussian noise with several standard deviation values (Figure 4.13). After applying\nrotation and adding noise, we computed keypoint descriptors using BRAND, EDVD\nand SURF, followed by brute-force matching to find correspondences.\nFigure 4.13 (a) shows the results for the synthetic test when using standard de-\nviation of 15. We can see that both BRAND and EDVD outperform SURF descriptor\nin rotation invariance. The results are given by the percentage of inliers as a function\nof the rotation angle.\nIn Figure 4.13 (b), the results for the synthetic test for noise with standard de-\nviation of 15;30;45;60and75are shown. Notice that BRAND and EDVD are more\n50 C HAPTER 4. E XPERIMENTS\n 20 40 60 80 100\n30\n21060\n24090\n270120\n300150\n330180 0\nDegreeInliers (%)\n 20 40 60 80 100\n30\n21060\n24090\n270120\n300150\n330180 0\nDegreeInliers (%)\n(a) (b)\nFigure 4.13. Percentage of inliers as a function of rotation angle for BRAND,\nEDVD and SURF algorithms. (a) Matching performance for synthetic rotations\nwith Gaussian noise with a standard deviation of 15; (b) Matching sensitivity\nfor an additive noise with standard deviations of 0,15,30,45,60and 75. The\nnoise was applied in the image and depth domain for BRAND and EDVD ex-\nperiments.\nstable and outperform SURF in all scenarios and BRAND and EDVD are largely un-\naffected by noise. Figure 4.12 shows an example of a three-dimensional matching\nfor two scenes with a rotation transform.\n4.4 Processing time and Memory Consumption\nAnother important property for a descriptor is the processing time to create a sig-\nnature and to compare two vectors. We performed several experiments to measure\nthese times for our descriptors. Descriptor creation and matching times have been\nmeasured and the experiments executed on an Intel Core i5 2.53GHz (using only one\ncore) processor running Ubuntu 11.04 ( 64bits). Time measurements were averaged\nover 300runs and all keypoints (about 420) were detected by the STAR detector.\nFigure 4.14 (b) clearly shows that BRAND is faster than the other descriptors in\nthe matching step, and that it spends slightly more time than SURF for the creation\nstep (Figure 4.14 (a)). This is due to the scale and canonical orientation estimation, a\nnecessary step to rotate and scale the distribution pattern.\nAdditionally, Figure 4.14 (c) shows that BRAND and BASE present the lowest\nmemory consumption with 32bytes for the keypoint descriptors, while CSHOT,\nwhich also combines appearance and geometry, has descriptors of 5:25kBytes in\n4.5. 
K EYPOINT DETECTOR VERSUS ACCURACY 51\nEDVDBRAND BASESURFSIFTCSHOT SPIN0123456\n0.38\n0.03 0.030.250.505.25\n0.25Memory (kB)\n(a)\nEDVDBRAND BASESURFSIFTCSHOT SPIN00.511.522.53\n0.68\n0.070.030.050.452.532.88Time (ms)\nEDVDBRAND BASESURFSIFTCSHOT SPIN00.10.20.30.40.50.60.7\n0.18\n0.050.050.120.230.530.55Time (ms)\n(b) (c)\nFigure 4.14. Comparison among descriptors using: (a) memory consumption in\nKbytes; (b) processing time to compute a single keypoint descriptor and (c) to\nperform the matching between a pair of points.\nsize.\n4.5 Keypoint Detector versus Accuracy\nIn this section we evaluate the influence of keypoint detector algorithms in the\nmatching quality. For all descriptors, we match keypoints detected with four\ndifferents methologies: STAR [Agrawal et al., 2008], FAST [Rosten et al., 2010],\nSIFT [Lowe., 2004] and SURF [Bay et al., 2008]. All experiments were executed using\nthefreiburg2_xyz sequence.\nThe independence of a descriptor from keypoint dectector algorithms is highly\ndesirable. With this descriptor independence it is possible to take advantage of the\nvast number of methodologies that are proposed every year.\n52 C HAPTER 4. E XPERIMENTS\nEDVDBRAND BASESURFSIFTCSHOT SPIN00.10.20.30.40.50.60.7Area Under Curve (AUC)\nDescriptor \nSTAR\nFAST\nSIFT\nSURF\nEDVDBRAND BASESURFSIFTCSHOT SPIN−0.100.10.20.30.40.50.60.7\nDescriptorArea Under Curve (AUC)\n \nMean (STD error bars)\n(a) (b)\nFigure 4.15. Comparison between descriptors using four different keypoint de-\ntectors: (a) Respective AUC of the recall vs 1-precision curves for each combina-\ntion descriptor and detector; (b) The standard variation for each descriptor.\nFigure 4.15 (a) shows AUC values for all experiments. We can see, in Figure\n4.15 (b), that among all methologies, EDVD, BRAND and BASE stand out as de-\nscriptors with the smallest standard variation and highest average in the accuracy.\nThe plots also show that our main competitor, CSHOT, is the least stable method\nhaving the highest standard variation equals to 0:32for an average accuracy of 0:30.\nBRAND has a standard variation of 0:03and an average accuracy of 0:64.\n4.6 Remarks\nThis chapter presented several experiments that we performed to show the be-\nhaviour of our three descriptors. A comparative analysis in terms of robustness to\naffine transformations, processing time and memory consumption was conducted\nagainst the standard descriptors in the literature for appearance and geometric in-\nformation. In these experiments, EDVD, BRAND and BASE outperformed the other\napproaches, including the state-of-the-art CSHOT, which also fuses appearance and\ngeometry information.\nThanks to the strategy of combining different cues, our descriptors were more\nstable in matching experiments as well as in the invariance to rotation tests. As\nshown in the experiments, the combination of appearance and geometry informa-\ntion indeed enables better performance than using either information alone. More-\nover, our binary descriptors, BRAND and BASE, had superior performance in time\nand memory consumption and presented high accuracy in matching, achiving the\n4.6. R EMARKS 53\nproperties of being fast and lightweight. 
Finally, the three descriptors presented in\nthis thesis showed a small dependence on the keypoint detector.\n\nChapter 5\nApplications\nIN THIS CHAPTER WE APPLY OUR DESCRIPTORS to two important tasks in Com-\nputer Vision and Robotics: Semantic Mapping and Tridimensional Alignment.\nIn order for robots to achieve higher levels of abstraction, they must be able to build\nstructured representations of their environment by categorizing spatial information.\nThe building of accurate 3D models of a scene, however, is a fundamental problem\nin Computer Vision.\nAfter demonstrating good performance of our descriptors in the experiments\nshown in the previous chapter, the following sections will evaluated the behaviour\nof our descriptors in less controlled data acquisition.\n5.1 Semantic Mapping and Object Recognition\nThe use of categorization in mapping tasks can be used to generate semantic infor-\nmation which would enable robots to distinguish objects, to identify events and to\nexecute high-level tasks. The importance of including semantic information in un-\nderstanding the environment has been advocated in several works, some examples\nof which include [Chatila and Laumond, 1985] and [Kuipers and Byun, 1991].\nVisual classification tasks are typically tackled with the extraction of image\nfeatures which are then used to represent individual characteristics of objects and\nclasses. The high dimensionality of data is greatly reduced by using image features,\nwhich enables increased performance in the matching process and a reduction in\nmemory usage during the training and the recognition steps. Therefore, feature\npoint descriptors are a part of the underlying structure of a large number of state-\nof-the-art classification approaches.\nAs discussed in the previous chapters, the features of these other approaches\n55\n56 C HAPTER 5. A PPLICATIONS\nare estimated from images alone and they rarely use other information such as ge-\nometry. Consequently, variation in scene illumination and textureless objects, com-\nmon issues with real scenes, may dramatically decrease performance of classifiers\nbased solely on the image.\nThe combination of visual and shape cues is a very promising approach for\nobject recognition but is still in its prelude. As far as efficacy is concerned, however,\nLai et al. [2011b] have already shown that the combined use of intensity and depth\ninformation outperforms view-based distance learning using only one of the two.\nThe reason that many descriptors have not used shape information can be partially\nexplained by the fact that until recently object geometry was not easy to obtain, nor\nquick, so as to be combined with image feature data in a timely manner.\n5.1.1 Object Recognition\nIn this section we show the performance of our three descriptor algorithms in an\nobject recognition task. Our experiments were performed using the RGB-D Object\nDataset presented by Lai et al. [2011a]. This dataset is availabe from the Computer\nScience Department from Washington University1and contains 51categories for a\ntotal of 300objects. The images were acquired with a prototype RGB-D camera from\nPrime-Sense and a firewire camera from Point Grey Research. The images have 640\u0002\n480resolution, the color and depth informations were simultaneously recorded. The\ndata was recorded at three differents viewing heights at approximately 30,45and\n60degrees above the horizon. 
Figure 5.1 shows some samples of the objects used in\nour experiments.\nRecognition System To test the discriminative power of our descriptors, we built\na recognition system using the Bag of Features (BoF) approach [Csurka et al., 2004]\ncombined with Partial Least Squares (PLS) technique [Rosipal and Krämer, 2006].\nThe main reason for using the BoF approach was that it is not possible to extract\nkeypoints from the same location in differents samples, the choice of PLS, how-\never, was due to good results in several recognition tasks such as human detection\n[Schwartz et al., 2009] and face recognition [Schwartz et al., 2010].\nLike other recognition systems based on the BoF approach, our system is com-\nposed of four main steps:\n1.Feature extraction : In this step, we split RGB-D images into a grid with 1000\ncells and for each cell we compute a descriptor vector;\n1http://www.cs.washington.edu/rgbd-dataset\n5.1. S EMANTIC MAPPING AND OBJECT RECOGNITION 57\nFigure 5.1. Some samples of objects used for recognition experiments. From top\nleft to bottom right: two kinds of apples, a ball, a bowl, a calculator, a coffe mug,\na keyboard, a lemon, onion, a flashlight, two kinds of cereal box, a glue stick and\na Marker.\n2.Codebook creation : After running a k-means algorithm [Duda et al., 2001] for\nall descriptors in the dataset, we built a set of Kclusters, each represented by\na descriptor vector. Finally, we stack all of the Kclusters creating a matrix of\nKrows. The number of columns is defined by the size of the descriptors, e.g.\n32columns of bytes using BRAND and BASE or 512columns of bytes using\nSIFT. The number of clusters used in our experiments to build the codebook\nwasK= 512 for all descriptors;\n3.Bag of Feature vectors extraction : For every image in the dataset, we com-\nputed a histogram of the number of descriptors assigned to each cluster. These\nhistograms are called bag of features vectors and are used to represent each\nimage in the codebook domain;\n4.Learning : In this last step, we run the PLS algorithm with a set of bag of fea-\ntures vectors to build the classification model. Since our recognition system\nuses the one-against-all scheme, we build a model for each class using the re-\nmaining samples of other classes as negative samples.\nIn our experiments we used 20classes with 30samples from each class. For\nevery class we randomly selected 5objects. Our recognition system was trained\nusing four of the five objects and then tested with the fifth, never-before-seen, object.\nThis procedure is repeated three times to obtain the confusion matrices shown in\n58 C HAPTER 5. A PPLICATIONS\nFigure 5.2. Recognition using spin-image descriptors were not tested since the k-\nmeans algorithm was unabled to find the clusters due to the lack of discrimance of\ndescriptors.\nObject Recognition Results Figure 5.2 shows the confusion matrices for our\nthree descriptors, SIFT, SURF and CSHOT. As can be seen from the results, our\ndescriptors presented an accuracy similar to others, even though less memory was\nused and had a faster processing time.\n5.1.2 Object Recognition Using the BASE descriptor\nIn this section, we used the BASE descriptor as presented in Chapter 3 in a simple\nadaptive boost classification framework to provide semantic information in a map-\nping task. This framework was used to detect and recognize objects under different\nillumination conditions. 
5.1.2 Object Recognition Using the BASE Descriptor
In this section, we used the BASE descriptor as presented in Chapter 3 in a simple adaptive boost classification framework to provide semantic information in a mapping task. This framework was used to detect and recognize objects under different illumination conditions. We chose to use the BASE descriptor because it presented the highest accuracy in the experiments of the previous section.
Although the recognition system using the BoF approach and the PLS algorithm worked well, a semantic mapping framework needs to be fast and have low memory consumption, since the classification algorithm is run on almost every frame grabbed during robot navigation. In this section we present a simpler and faster classification strategy, which we then use to test our descriptor against the first strategy.
Experimental results presented later show that, in spite of the simplicity of the recognition approach, high classification accuracy was obtained with processing times on the order of a few milliseconds on current generation processors.
Learning Algorithm. Objects are modeled as weighted sets O of descriptors f_o computed at keypoints. A careful choice of these keypoints allows not only for good object detection from multiple views, but also decreases the search space, making the approach adequate for online applications.
The weight of each set is computed by a learning process using the Adaboost algorithm [Freund and Schapire, 1995]. In order to classify a new RGB-D image, we find the nearest neighbor matching for all sets of object models, and a voting mechanism is used to extract a model from the weighted sets.
One of the simplest methods to classify a test set of descriptors T as belonging to an object O is to find, for each descriptor f_t in T, the nearest neighbor that minimizes the distance function D as given below:

D(f_t, O) = \min_{f_o \in O} D(f_t, f_o).    (5.1)

Figure 5.2. Confusion matrices (rows-normalized) among 20 classes from the RGB-D Object Dataset [Lai et al., 2011a]. The classes are apple, ball, bell pepper, bowl, calculator, cereal box, coffee mug, dry battery, flashlight, food bag, food box, food cup, food jar, garlic, glue stick, instant noodles, keyboard, kleenex, lemon and marker.
Since BASE descriptors are strings of bits, the Hamming distance is used as the distance metric D. One of the greatest advantages of this approach, besides its simplicity, is its low computational cost. The downside of this naïve approach, however, is that it tends to produce several false positives in the final classification. Therefore, to improve classification, we use a multiclass discriminative algorithm which returns the probability of a datum belonging to a given class.
Our classifier is composed of binary weak classifiers h_i, i \in \{1, \dots, n\}, integrated by the Adaboost algorithm. Each weak classifier contains a set of descriptors O which represents an object. The probability that a test set T corresponds to the object is given by:

h(T) = \frac{1}{|T|} \sum_{f_t \in T} \chi\big(D(f_t, O) \leq \tau\big),    (5.2)

where \chi is an indicator function that returns 1 if the condition in the argument is true and 0 otherwise. The term f_t is a descriptor vector of the test object T, and the threshold \tau restricts the maximum distance for a valid match.
The multi-class classifier then selects the classifier H with maximum membership probability. Hence, the class of a test RGB-D image with a set of descriptors T is given by:

c^* = \arg\max_{H \in \mathcal{H}} \sum_{i=1}^{|H|} w_i h_i(T),    (5.3)

where \mathcal{H} is the set of trained classifiers, w_i is the weight of the weak classifier h_i, and c^* is the class represented by classifier H. A small sketch of this matching and voting scheme is given below.
5.1.3 Mapping System
A particle filter was used for robot localization by selecting its most probable position. The classifier returns a class label c^* for each frame acquired from the RGB-D sensor during robot navigation. If the label is different from “none”, then it is indexed to the current location. Algorithm 1 describes the mapping process.
Algorithm 1 Semantic Map(H)
1: while true do
2:   p ← ParticleFilterPosition()
3:   f ← getRGBDImage()
4:   K ← FAST(f)
5:   T ← {b(p) | p ∈ K}
6:   Find label class c^* solving:
7:     c^* = \arg\max_{H \in \mathcal{H}} \sum_{i=1}^{|H|} w_i h_i(T)
8:   if c^* ≠ “none” then
9:     map[p] ← c^*
10:  end if
11: end while
5.1.4 Recognition Results
To evaluate the performance of the proposed classification approach, we initially collected several images with a Kinect sensor mounted on a Pioneer P3-AT mobile ground robot, as shown in Figure 5.4. The final dataset was composed of 17 samples of 9 objects with different shapes and textures, plus 30 samples of random images from our lab to represent the negative dataset (Figure 5.3). Two images of the objects in distinct views were used to extract the sets of keypoints used to build the weak classifiers in the training step.
Figure 5.3. Objects used for classification and detection experiments. From top left to bottom right: toolbox, cone, Nomad robot, Pioneer Robot Model 2, iCreate robot, PC, Pioneer Robot Model 1, keyboard box and cabinet. The last image is an example of a negative sample used in the training and test steps.
Figure 5.4. Experimental setup. Left: the Kinect RGB-D camera mounted on a Pioneer P3-AT. Right: the RGB camera and depth camera views.
We then performed two tests: i) first, we trained the classifier and verified the quality of the classification using all of the images in the dataset that were not used in the learning stage; ii) second, the objects in the dataset were randomly positioned in the hallways of the laboratory building and a map with the location of each detected object was created.
Finally, we compared the performance of our descriptor to that of SURF, a standard 2D descriptor in the literature, and BRIEF, a binary and fast descriptor.
Receiver Operating Characteristic. To evaluate the correct matching rate using our descriptor, we selected one image from the dataset in which the object was directly facing the sensor and computed its set of descriptors. These descriptors were matched against the descriptors of all 16 other images of the objects as well as against the 30 negative images.
Figure 5.5 summarizes the results of true positive and false positive rates as Receiver Operating Characteristic (ROC) curves for all objects in the dataset. Better matchings are closer to the upper-left corner. Six out of nine objects had their curves very close to the upper-left corner. Even though the curves of three objects (PC, Cone and Toolbox) were not as close as those of the other objects, their true positive rates were larger than 80% with a false positive rate lower than 20%.
Figure 5.5. ROC curves of matching using the BASE descriptor. The best matchings are closer to the upper-left corner. We note a high true positive rate with a low false positive rate for all objects in the dataset.
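For reference, the true and false positive rates behind ROC curves such as those of Figure 5.5 can be obtained by sweeping the matching threshold over the nearest-neighbor Hamming distances of genuine and spurious matches. The snippet below is a generic sketch of that computation, not the exact evaluation script used in the thesis.

```python
import numpy as np

def roc_points(positive_distances, negative_distances):
    """Sweep the match threshold tau and return (false positive rate, true positive rate) pairs.

    positive_distances: nearest-neighbor Hamming distances for descriptors that should
                        match the object (here, the 16 other views of it).
    negative_distances: the same quantity measured against the negative images.
    """
    pos = np.asarray(positive_distances, dtype=float)
    neg = np.asarray(negative_distances, dtype=float)
    points = []
    for tau in np.unique(np.concatenate([pos, neg])):
        tpr = float(np.mean(pos <= tau))   # accepted genuine matches
        fpr = float(np.mean(neg <= tau))   # accepted spurious matches
        points.append((fpr, tpr))
    return sorted(points)
```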
Learning and Classification Time. Keypoint descriptors are at the heart of a large number of vision-based machine learning algorithms. In spite of the ever growing performance of computer systems, the enormous volume of visual data now available tends to be processed and used on mobile devices with limited resources. Therefore, faster and more efficient keypoint descriptors need to be developed.
In order to estimate the performance of our descriptor, we ran the learning and classification algorithms on our dataset five times and measured CPU time. We compared the performance with two intensity-only descriptors, BRIEF and SURF, on the same dataset. Figure 5.6 shows that in both steps, learning and classification, our descriptor was faster than the others. The learning time of our descriptor was 60% faster than SURF and 15% faster than BRIEF. For the classification step, our descriptor ran 2 times faster than BRIEF and almost 4 times faster than SURF.
Figure 5.6. CPU time for the learning and classification steps. In both experiments BASE was faster than BRIEF and SURF. While in the learning step BASE is approximately 2 times faster than SURF and close to BRIEF, in the classification step our descriptor was almost 2 times faster than BRIEF and 4 times faster than SURF.
One reason why our descriptor runs faster than BRIEF is that it includes more meaningful information with which to build the classifier. In our experiments we observed that BRIEF uses more than one weak classifier for its binary classifier, which leads to matching against more than one set of descriptors. This also demonstrates the superior discriminative power of our descriptor over BRIEF.
As far as memory usage is concerned, our descriptor and BRIEF have similar performance, since both use binary strings, which result in low memory utilization.
Classification Rate. As described in Section 5.1.2, we adopted an ensemble approach using the Adaboost algorithm. The descriptors used in the experiments were BRIEF, SURF and BASE. We trained the classifiers with 9 positive samples of objects in different views and 9 negative samples. By comparing the confusion matrices among the nine classes in Figure 5.7, we observe that classification using our descriptor obtains significantly better results than the others. Although the values in the confusion matrix of our descriptor, shown in Figure 5.7 (a), are slightly more spread than for BRIEF or SURF, this matrix clearly shows better accuracy through a diagonal that is not found in the confusion matrices of the other descriptors. Also, an analysis of the BRIEF and SURF confusion matrices shows that the classifiers built with those descriptors present a strong bias toward a given class (e.g. Toolbox).
Figure 5.7. Confusion matrices (rows-normalized) among nine classes for (a) the BASE descriptor, (b) BRIEF and (c) SURF. We observe a much better classification for the BASE descriptor, justified by the clear diagonal in its confusion matrix, even though it uses less CPU processing time. We also note that the classifiers built with the BRIEF and SURF descriptors present a strong bias toward the Toolbox class.
5.1.5 Mapping Results
We tested semantic mapping by spreading objects through the laboratory building (ICEx/UFMG). Navigation was based on the Vector Field Histogram [Borenstein et al., 1991] and a particle filter [Thrun, 2002] was used for localization. These algorithms were implemented on Player 3.0.2 and tested on a computer running Linux on an Intel Core i5 with 6 GB of RAM.
Figure 5.8 (a) shows the results obtained with our approach. The green circles indicate a correct detection and classification. Red stars represent false positive detections and cyan squares indicate a correct detection but a wrong classification.
We note a superior number of true positive recognitions and a higher rate of classified objects, with a small number of false positive detections. By comparing these results with BRIEF and SURF, respectively, Figure 5.8 makes clear the superiority of our descriptor both in recognition rate and in robustness to false positives.
In spite of a large variation in illumination between the time when the training data was collected and the testing itself, our method proved to be the least affected by lighting conditions as a result of taking advantage of geometrical information.
Figure 5.8. Semantic map results using the (a) BASE, (b) BRIEF and (c) SURF descriptors. The use of BRIEF or SURF results in a large number of false positive detections (red stars). While SURF detected only one object (cyan square) and BRIEF two, our descriptor found five without generating too many false positive detections. Green circles indicate correct detection and classification.
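To make the mapping loop of Algorithm 1 (Section 5.1.3) concrete, the following is a minimal sketch of how the pieces fit together. The helpers particle_filter_position, grab_rgbd_frame, fast_keypoints and base_descriptor are hypothetical stand-ins for the particle-filter localization, sensor acquisition, FAST detection and BASE extraction used in the thesis; classify is the weighted-voting classifier of Equations (5.2)–(5.3), and a "none" class is assumed to be among the trained classifiers, as with the negative samples above.

```python
def semantic_mapping_loop(classifiers, semantic_map, tau=20):
    """Schematic version of Algorithm 1: label the robot's current location whenever
    the boosted classifier recognizes an object in the current RGB-D frame."""
    while True:
        p = particle_filter_position()        # hypothetical: pose from the particle filter
        frame = grab_rgbd_frame()             # hypothetical: RGB-D image from the sensor
        keypoints = fast_keypoints(frame)     # hypothetical: FAST corner detection
        T = [base_descriptor(frame, k) for k in keypoints]  # hypothetical BASE extraction
        label = classify(T, classifiers, tau)  # Equations (5.2)-(5.3)
        if label != "none":
            semantic_map[p] = label           # index the detection to the current location
```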
5.2 Three-dimensional Alignment
A great challenge in registering multiple depth maps is the process of recovering the rigid transformation T that describes two depth maps in a single coordinate system. To address this issue, descriptors have been applied to find corresponding points between two depth maps in order to constrain the search space for the transformation T. The work proposed by Vieira et al. [2007] uses a descriptor in an iterative framework to address pair-wise alignment of a sequence of depth maps while ensuring global coherence of the registration for implicit reconstruction purposes. A global alignment algorithm that does not use local feature descriptors was presented by Makadia et al. [2006] using Extended Gaussian Images (EGI).
Independently of the strategy used to pre-align depth maps, a common requirement is that the data have sufficient overlap in order to establish correspondences, together with a graph defining which pairs, among all depth maps, have such an overlap. Most commercial packages, such as DAVID Laserscanner [Winkelbach et al., 2006], require that users manually select the pairs to be aligned. Furthermore, this pre-alignment is generally refined by local minimization algorithms, such as the classical Iterative Closest Point (ICP) algorithm [Besl and McKay, 1992], in order to achieve the best alignment given an initial guess of the pre-alignment.
Non-rigid and scale invariant registration methods, such as those proposed in [Cheng et al., 2010] and [Sehgal et al., 2010], are more often used for matching purposes than for reconstruction. A survey on range image registration was presented in [Salvi et al., 2007], where different methods for pre-alignment and fine registration are compared in terms of robustness and efficiency.
In this section we apply our feature descriptors and describe the method employed to perform the registration of multiple indoor textured depth maps.
5.2.1 RGB-D Point Cloud Alignment Approach
The main goal of the registration process is to find a rigid transformation T between two point clouds taken from different points of view.
The approach used to register point clouds in this work is divided into two steps: coarse and fine alignment. In the coarse alignment, we compute an initial estimate T of the rigid motion between two clouds of 3D points using correspondences provided by a feature descriptor. Then, in the fine alignment, we employ the ICP algorithm to find a locally optimal solution based on the previous coarse alignment. The ICP algorithm takes an initial estimate of the alignment and then refines the transformation matrix T^* by minimizing the distances between closest points. The use of ICP was chosen due to its simplicity and low computational cost.
The registration process is summarized in Algorithm 2.
It has four main steps:
1. Keypoint Descriptors: The function ExtractDescriptor receives the source and target point clouds, denoted by P_s and P_t respectively, and returns the corresponding sets of keypoints with their descriptors, denoted by K_s and K_t. The first step in computing the set of descriptors for an image or, in our case, an RGB-D point cloud, is to select the subset of keypoints. This selection of keypoints, with properties such as repeatability, provides good detection from multiple views and allows a constrained search space of features, making the registration suitable for online applications.
2. Matching Features: The function matchDescriptor matches two sets of descriptors, K_s and K_t, using a brute force algorithm and returns a set M of correspondence pairs between the source and target point clouds. The distance metric used varies with the type of feature descriptor: the BASE and BRAND descriptors use the Hamming distance, EDVD and Spin-Images use a correlation function, SIFT and SURF use the Euclidean distance, and CSHOT uses the cosine distance.
3. Coarse Alignment with SAC: The function coarseAlignmentSAC provides an initial transformation T using the matching set M. We used a Sampled Consensus-Initial Alignment (SAC) approach [Fischler and Bolles, 1981] to reduce the outliers (false correspondences). Our SAC approach works as follows: there is a transformation T from point cloud K_s to point cloud K_t, and this transformation is our model. The algorithm's goal is to estimate the model, i.e. to find the matrix T. To achieve this, the algorithm selects random pairs of matchings in M and uses these pairs to estimate all the free parameters of the model. All other data are tested against the fitted model and classified as inliers (if they fit the estimated model well) or as outliers otherwise. If the estimated model has sufficiently many inliers, it is classified as a consensus set and its parameters are re-estimated using only the inlier pairs. Finally, the algorithm computes the error of the inliers relative to the model. These steps are performed a fixed number of times and the model with the smallest error is selected. The initial transformation T is usually not accurate, but it constrains the search for the optimal transformation to a local region that the fine alignment algorithm can then explore. We noted, as expected, that less descriptive features provide smaller sets of inliers than more descriptive features.
4. Fine Alignment: Finally, the function closestPoints receives the pre-aligned sets P_s and P_t and constructs the set of pairs A. The set of pre-aligned pairs A is then used to find a refined transformation in an iterative process. We use a kd-tree to find the closest points and, in contrast to the work by Henry et al. [2010], which minimizes a non-linear error, we choose an ICP variant that minimizes the point-to-point error function \sum |p_s - T(p_t)|^2. This error function can be solved using the closed-form solution of Horn [1987].
Algorithm 2 Point Cloud Alignment(P_s, P_t)
1: (K_s, K_t) ← ExtractDescriptor(P_s, P_t)
2: M ← matchDescriptor(K_s, K_t)
3: R ← coarseAlignmentSAC(M)
4: repeat
5:   A ← closestPoints(P_s, R(P_t))
6:   Find T solving:
7:     T ← \arg\min_{T^*} \frac{1}{|A|} \sum_{(p_s, p_t) \in A} |p_s - T^*(p_t)|^2
8:   R ← T × R
9: until MaxIter reached or ErrorChange(T) ≤ θ
A compact sketch of this pipeline is given below.
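The sketch below is a compact, self-contained rendition of Algorithm 2 under simplifying assumptions: keypoints are given as 3D coordinates with binary descriptors, the closed-form rigid-motion solve uses the SVD-based (Kabsch) formulation, which plays the same role here as Horn's quaternion solution cited above, the SAC and ICP parameters are illustrative values rather than those used in the experiments, and for readability the source cloud is aligned onto the target, whereas Algorithm 2 is written with the roles reversed.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_from_pairs(src, dst):
    """Closed-form least-squares rigid motion (R, t) with dst ≈ R @ src + t (SVD/Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def match_descriptors(desc_s, desc_t):
    """Step 2: brute-force nearest neighbor on Hamming distance (binary descriptors)."""
    return [(i, int(np.argmin(np.count_nonzero(desc_t != d, axis=1))))
            for i, d in enumerate(desc_s)]

def coarse_alignment_sac(kp_s, kp_t, pairs, iters=500, inlier_thresh=0.05, rng=None):
    """Step 3: sampled-consensus initial alignment over the candidate correspondences."""
    if rng is None:
        rng = np.random.default_rng(0)
    src = np.array([kp_s[i] for i, _ in pairs])
    dst = np.array([kp_t[j] for _, j in pairs])
    best = (np.eye(3), np.zeros(3), -1)
    for _ in range(iters):
        idx = rng.choice(len(pairs), size=3, replace=False)   # minimal sample
        R, t = rigid_from_pairs(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < inlier_thresh
        if inliers.sum() > best[2]:
            if inliers.sum() >= 3:                            # re-estimate on consensus set
                R, t = rigid_from_pairs(src[inliers], dst[inliers])
            best = (R, t, int(inliers.sum()))
    return best[0], best[1]

def icp_refine(P_s, P_t, R, t, max_iter=100, tol=1e-3):
    """Step 4: point-to-point ICP refinement of the coarse estimate."""
    tree = cKDTree(P_t)
    prev_err = np.inf
    for _ in range(max_iter):
        moved = P_s @ R.T + t
        dists, nn = tree.query(moved)          # closest points in the target cloud
        R, t = rigid_from_pairs(P_s, P_t[nn])  # re-solve the full transform
        err = float(np.mean(dists ** 2))
        if abs(prev_err - err) <= tol:         # ErrorChange(T) <= theta
            break
        prev_err = err
    return R, t, err
```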
5.2.2 Alignment Results
We examined the performance of our descriptors in the registration task using several images of a research laboratory collected with a Kinect sensor (see Figures 5.9 and 5.11). From this data we created five challenging sets with different views:
1. Lab180: point clouds with holes (regions not seen by the sensor);
2. Boxes: a scene with three objects (boxes) with similar geometry;
3. Robots: a scene with three robots with the same geometry and texture;
4. Wall: a scene rich in textureless regions;
5. DarkLab: a set of point clouds acquired from a partially illuminated scene.
The experiments were performed on a computer running Linux on an Intel Core i5 with 4 GB of RAM. For each final alignment we evaluated the alignment error returned by ICP, the number of inliers retained in the coarse alignment, and the time spent on fine and coarse alignment. In all experiments, the convergence criterion was a maximum of 100 iterations of ICP or an error smaller than 0.001. We also used the same SAC parameters for all descriptors. Tables 5.1, 5.2 and 5.3 show the registration results. We note that the alignments with the EDVD, BASE and BRAND descriptors provide the smallest errors despite their low computational cost. Figure 5.9 shows visual results of the alignment achieved using our descriptors for the five sequences used in the experiments. The screenshots shown in Figure 5.10 present several three-dimensional alignments of our laboratory. These results were obtained by running our alignment algorithm with BASE on a set of RGB-D images acquired from our laboratory while moving a Kinect sensor 360 degrees around its base.
Since our descriptors consider shape information and the RGB-D camera has its own illumination, we were able to register point clouds even in sparsely illuminated environments. To test the proposed approach, an experiment was performed in a poorly illuminated room. We collected 229 frames of the scene, with images ranging from well illuminated to complete lack of light. The final alignment, shown in Figure 5.11, makes clear that even with some regions without illumination it was possible to align the clouds.
We also examined the performance of our descriptors with the proposed registration approach on the Freiburg dataset. We selected three sequences: freiburg2_xyz, freiburg2_desk and freiburg2_pioneer_slam2. To evaluate a set of estimated poses we measure the Relative Pose Error (RPE), which is well-suited for measuring the drift of a visual odometry system.
Table 5.1. Mean values of the ICP error.
ICP Score
Descriptor   Robots    Boxes     Lab180    Wall      DarkLab
EDVD         0.0025    0.0002    0.0047    0.0001    0.0038
BRAND        0.0025    0.0002    0.0041    0.0001    0.0059
BASE         0.0025    0.0002    0.0041    0.0001    0.0043
SURF         0.0035    0.0002    0.0070    0.0004    -
SIFT         0.0058    0.0042    0.0281    0.0021    -
SPIN         0.0046    0.0017    0.0356    0.0205    -
CSHOT        0.0043    0.0002    0.0095    0.0013    0.0033
Table 5.2. Mean values of the time spent to register two clouds.
Alignment Time (seconds)
Descriptor   Robots    Boxes     Lab180    Wall      DarkLab
EDVD         0.83      0.57      1.01      1.07      2.45
BRAND        0.34      0.27      0.59      0.72      1.09
BASE         0.30      0.27      0.68      0.71      0.81
SURF         0.69      0.31      2.40      0.97      -
SIFT         1.28      1.24      6.29      2.09      -
SPIN         2.56      1.70      8.13      9.18      -
CSHOT        2.29      1.30      2.60      2.40      2.20
Table 5.3. Average number of inliers retained by SAC in the coarse alignment.
Number of Inliers
Descriptor   Robots    Boxes     Lab180    Wall      DarkLab
EDVD         111.85     95.46     52.22     64.87    117.89
BRAND        131.95    105.18     51.75     70.64     64.87
BASE         116.95    108.96     53.00     70.96     63.99
SURF          96.59     58.39     82.09     46.47     -
SIFT         152.10     99.52    129.23     69.66     -
SPIN         155.05     71.30    176.82    181.60     -
CSHOT        143.49     53.54    113.52     66.29     50.28
The results are shown in Figures 5.12 and 5.13. One may readily see that our descriptors show less error for all sequences, both in translation and in rotation. In the most challenging sequence, our descriptors provide an alignment with a translation error of less than 1 meter, while CSHOT presented an error of about 6 meters.
Additionally, Figures 5.14 and 5.15 show that the most stable descriptor algorithms for different distances between two frames were BRAND and BASE.
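For completeness, the Relative Pose Error reported above can be computed as in the sketch below. This is a minimal formulation in the spirit of the TUM RGB-D benchmark of Sturm et al. [2011], not the exact evaluation code used here; poses are assumed to be 4x4 homogeneous matrices, and delta is the frame interval shown on the horizontal axis of Figures 5.14 and 5.15.

```python
import numpy as np

def relative_pose_error(gt_poses, est_poses, delta=1):
    """Mean translational (meters) and rotational (degrees) RPE over a fixed frame interval."""
    trans_err, rot_err = [], []
    for i in range(len(gt_poses) - delta):
        gt_rel = np.linalg.inv(gt_poses[i]) @ gt_poses[i + delta]
        est_rel = np.linalg.inv(est_poses[i]) @ est_poses[i + delta]
        err = np.linalg.inv(gt_rel) @ est_rel                   # residual relative motion
        trans_err.append(np.linalg.norm(err[:3, 3]))
        cos_angle = np.clip((np.trace(err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        rot_err.append(np.degrees(np.arccos(cos_angle)))
    return float(np.mean(trans_err)), float(np.mean(rot_err))
```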
Figure 5.9. Dataset used in the alignment tests (Robots, Boxes, Lab180 and Wall). The images show clouds aligned using the BASE descriptor.
Figure 5.10. Three-dimensional point cloud alignment of the VeRLab laboratory.
Figure 5.11. Registration of a partially illuminated lab. The frames used images from a scene ranging from well illuminated to complete darkness. As BRAND contains geometric information, it is possible to match the keypoints even if the scene is under inadequate illumination.
Figure 5.12. Relative Pose Error (RPE) for rotational error in degrees, on the XYZ, DESK and SLAM2 sequences.
Figure 5.13. Relative Pose Error (RPE) for translational error in meters, on the XYZ, DESK and SLAM2 sequences.
Figure 5.14. Translational error (RPE) for several different distances between two frames. The error is in meters.
Figure 5.15. Rotational error (RPE) for several different distances between two frames. The error is in degrees.

Chapter 6
Conclusions and Future Work
IN THIS CHAPTER WE PRESENT A SUMMARY of the accomplished work, emphasizing the main contributions of this thesis. Afterwards, we conclude the chapter by presenting future directions for this work.
6.1 Summary
In this thesis, the problem of how to design an approximation of an ideal descriptor has been addressed. We have proposed a general methodology for constructing robust, scale and rotation invariant descriptors. We designed three novel descriptors using the methodology presented in Chapter 3, and we believe that this methodology is adequate to be used as a design guide in the creation of new robust and invariant descriptors. Thereby, this work offers three main contributions to the state-of-the-art:
• The robust descriptor EDVD, which, besides providing invariance to orientation, scale and different illumination conditions, has a low dependence on the normal estimation: even using a coarse normal estimation approach, EDVD achieves the same accuracy as when using a precise one;
• The fast, robust and lightweight descriptor BRAND, which, like EDVD, is invariant to orientation, scale and illumination conditions, is fast to compute and compare, and has low memory usage;
• The BASE descriptor, a fast and lightweight descriptor which, in spite of its lack of robustness to scale and orientation transforms, presents high accuracy in matching tasks. As with EDVD and BRAND, the BASE descriptor efficiently combines intensity and shape information to improve the discriminative power, enhancing the matching process.
From a theoretical standpoint, our work exploits these techniques to build robust, fast and low-memory descriptors suitable for online applications such as 3D mapping and object recognition. These techniques are able to work on modest hardware configurations with limited memory and processing power. For instance, even being invariant to scale and rotation transforms, the BRAND descriptor can be stored using just 256 bits of memory.
A comparative analysis was conducted against three standard descriptors from the literature and the state-of-the-art, and we showed that our three descriptors outperform these other approaches in terms of robustness in affine transformation estimation, processing time, memory consumption and matching accuracy. Moreover, our descriptors showed a smaller dependence on the keypoint detector.
Additionally, we applied our descriptors in two challenging applications: semantic mapping and the registration of multiple indoor textured depth maps. Experiments on registration tasks demonstrated that our technique provides small alignment errors, similar to other less efficient descriptors and in some cases even better. The experiments also show that our descriptors are robust under poor lighting and in sparsely textured scenes, as expected.
To evaluate the use of our descriptors for object detection and recognition, we proposed an efficient and simple framework based on Adaboost, and a more accurate but more complex framework using the combination of bag of features and the partial least squares algorithm. We tested these approaches using two different datasets, and our descriptors demonstrated high accuracy in the confusion matrices and faster execution times for both the learning and classification steps.
We also demonstrated the application of the proposed methodology in a classification task for semantic mapping. We compared the performance of our descriptors with the results obtained with BRIEF and SURF for the same task, and showed that our approach outperforms the other two descriptors both in detection and in recognition rates.
The results presented here extend the conclusions of Lai et al. [2011b], Tombari et al. [2011] and Henry et al. [2010] that the combined use of intensity and shape information is advantageous not only in perception tasks, but also in improving the quality of other tasks such as the correspondence and registration process. Combined shape and intensity information indeed renders performance figures that are higher than those attained using either information set alone.
The main constraint of our methodology is bumpy surfaces.
Since the geometrical features are extracted using a threshold on the displacement between normals, the small irregularities of these surfaces can be confused with noise. Another important drawback of our methodology is due to RGB-D camera limitations. While laser scanners have a field of view (FOV) of about 180 degrees, RGB-D sensors have a FOV of 60 degrees, and their maximum range is typically less than 5 m. Moreover, current RGB-D sensors are confined to indoor scenes.
6.2 Future Work
There are several possible research directions to continue the work developed in this thesis. First of all, the strong results shown in the experiments and applications chapters have demonstrated the importance of using an appropriate strategy to combine texture and geometrical information. We believe that it is important and necessary to proceed with a theoretical investigation of the limits and best ways to perform such combinations.
Another important direction in the near future is to apply the information fusion approach to keypoint detection algorithms. We would like to try a similar strategy to fuse intensity and geometrical features for keypoint detectors. With this, it would be possible to extract keypoints from textureless data, as well as from images acquired in scenes with a lack of illumination and homogeneous surfaces.
Although our descriptor EDVD presents higher matching accuracy than SIFT, SURF and CSHOT and is invariant to rotation transforms, it suffers from differences in cell size. We can see in Figure 3.3 of Chapter 3 that the cells closest to the equator have the largest surface areas, while the bins closest to the north and south poles are the smallest. As future work, we intend to investigate how to overcome this issue, for example by using spherical harmonics directly on the EGI histogram instead of computing the Fourier transform of the 2D histogram.
In this thesis, we have fused information at the low-level layer, i.e., creating signatures to identify keypoints. A very interesting possibility for continued research involves working with texture and geometrical features at a higher level. For example, we could use the geometrical and image features of our descriptors separately as input data for a learning algorithm, as in Lai et al. [2011b].
Finally, as far as mapping is concerned, we will use our methodology to enhance loop closure and the modelling of three-dimensional environments. We aspire to develop a dense, real-time SLAM algorithm with our descriptors, where this low-consumption technique could be proposed for embedded systems. This work is related to a method of loop closure from RGB-D images using online learning, which aims to amend our registration, realigning point clouds based on the estimation error when a loop occurs.
In summary, as future work, we intend to continue the investigation of the benefits of working with appearance and geometrical information to improve the detection and description of keypoints, as well as their use in object recognition and tridimensional alignment.
Bibliography
Agrawal, M., Konolige, K., and Blas, M. R. (2008). CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching. In Proc. of the Europ. Conf. on Comp. Vision (ECCV), pages 102--115.
Ambai, M. and Yoshida, Y. (2011). CARD: Compact And Real-time Descriptors. In IEEE Int. Conf. on Comp. Vision (ICCV).
Andrade, M. and Lewiner, T.
(2011). Cálculo e Estimação de Invariantes Geométricos: Uma Introdução às Geometrias Euclidiana e Afim. IMPA.
Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. (2008). Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 110:346--359.
Berkmann, J. and Caelli, T. (1994). Computation of surface geometry and segmentation using covariance techniques. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), 16(11):1114--1116.
Besl, P. J. and McKay, N. D. (1992). A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), 14:239--256.
Borenstein, J., Koren, Y., and Member, S. (1991). The vector field histogram - fast obstacle avoidance for mobile robots. IEEE Journal of Robotics and Automation, 7:278--288.
Calonder, M., Lepetit, V., Strecha, C., and Fua, P. (2010). BRIEF: Binary Robust Independent Elementary Features. In Proc. of the Europ. Conf. on Comp. Vision (ECCV).
Chatila, R. and Laumond, J. (1985). Position referencing and consistent world modeling for mobile robots. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 138--145.
Cheng, Z.-Q., Jiang, W., Dang, G., Martin, R. R., Li, J., Li, H., Chen, Y., Wang, Y., Li, B., Xu, K., and Jin, S. (2010). Non-rigid registration in 3D implicit vector space. In Proceedings of the 2010 Shape Modeling International Conference, SMI '10, pages 37--46.
Choi, J., Schwartz, W. R., Guo, H., and Davis, L. S. (2012). A Complementary Local Feature Descriptor for Face Identification. In IEEE Workshop on Applications of Computer Vision.
Csurka, G., Dance, C. R., Fan, L., Willamowski, J., and Bray, C. (2004). Visual categorization with bags of keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV, pages 1--22.
Duda, R., Hart, P., and Stork, D. (2001). Pattern Classification. Pattern Classification and Scene Analysis: Pattern Classification.
Fischler, M. A. and Bolles, R. C. (1981). Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 24(6):381--395.
Freund, Y. and Schapire, R. E. (1995). A decision-theoretic generalization of on-line learning and an application to boosting. In Proc. of the 2nd European Conf. on Computational Learning Theory, pages 23--37.
Harris, C. and Stephens, M. (1988). A Combined Corner and Edge Detection. In Proceedings of The Fourth Alvey Vision Conference, pages 147--151.
Henry, P., Krainin, M., Herbst, E., Ren, X., and Fox, D. (2010). RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments. In International Symposium on Experimental Robotics (ISER).
Hetzel, G., Leibe, B., Levi, P., and Schiele, B. (2001). 3D Object Recognition from Range Images using Local Feature Histograms. In IEEE Conf. on Comp. Vision and Pattern Recog. (CVPR), pages 394--399.
Holz, D., Holzer, S., Rusu, R. B., and Behnke, S. (2011). Real-Time Plane Segmentation using RGB-D Cameras. In RoboCup Symposium.
Horn, B. K. P. (1984). Extended Gaussian images. Proceedings of the IEEE, 72(2):1671--1686.
Horn, B. K. P. (1987). Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4(4):629--642.
Hua, G., Brown, M., and Winder, S. (2007). Discriminant Embedding for Local Image Descriptors. In IEEE Int. Conf. on Comp. Vision (ICCV), pages 1--8.
Intel (2007). SSE4 Programming Reference. http://software.intel.com/file/18187.
Johnson, A. E. and Hebert, M. (1999). Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), 21(5):433--449.
Kanezaki, A., Marton, Z.-C., Pangercic, D., Harada, T., Kuniyoshi, Y., and Beetz, M. (2011). Voxelized Shape and Color Histograms for RGB-D. In IROS Workshop on Active Semantic Perception.
Ke, Y. and Sukthankar, R. (2004). PCA-SIFT: A More Distinctive Representation for Local Image Descriptors. In IEEE Conf. on Comp. Vision and Pattern Recog. (CVPR).
Kembhavi, A., Harwood, D., and Davis, L. (2011). Vehicle Detection Using Partial Least Squares. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), (6):1250--1265.
Klasing, K., Althoff, D., Wollherr, D., and Buss, M. (2009). Comparison of surface normal estimation methods for range sensing applications. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 3206--3211.
Kuipers, B. and Byun, Y. (1991). A robot exploration and mapping strategy based on semantic hierarchy of spatial representation. Journal of Robotics and Autonomous Systems, 1(8):47--63.
Lai, K., Bo, L., Ren, X., and Fox, D. (2011a). A large-scale hierarchical multi-view RGB-D object dataset. In IEEE Intl. Conf. on Robotics and Automation (ICRA).
Lai, K., Bo, L., Ren, X., and Fox, D. (2011b). Sparse distance learning for object recognition combining RGB and depth information. In IEEE Intl. Conf. on Robotics and Automation (ICRA).
Leutenegger, S., Chli, M., and Siegwart, R. (2011). BRISK: Binary Robust Invariant Scalable Keypoints. In IEEE Int. Conf. on Comp. Vision (ICCV).
Lindeberg, T. (1994). Scale-Space Theory in Computer Vision. Kluwer Academic Publishers.
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, pages 91--110.
Makadia, A., Iv, E. P., and Daniilidis, K. (2006). Fully automatic registration of 3D point clouds. In IEEE Conf. on Comp. Vision and Pattern Recog. (CVPR), pages 1297--1304.
Microsoft (2011). Microsoft Kinect.
Mikolajczyk, K. and Schmid, C. (2005). A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), 27(10):1615--1630.
Ojala, T., Pietikäinen, M., and Harwood, D. (1996). A comparative study of texture measures with classification based on featured distributions. Pattern Recognition, 29(1):51--59.
Pang, Y., Li, X., and Yuan, Y. (2010). Robust Tensor Analysis With L1-Norm. Circuits and Systems for Video Technology, IEEE Transactions on, 20(2):172--178.
Pang, Y. and Yuan, Y. (2010). Outlier-resisting graph embedding. Neurocomputing, 73(4--6):968--974.
Rosin, P. (1999). Measuring Corner Properties. Computer Vision and Image Understanding, 73(2):291--307.
Rosipal, R. and Krämer, N. (2006). Overview and recent advances in partial least squares. In Subspace, Latent Structure and Feature Selection Techniques, Lecture Notes in Computer Science, pages 34--51. Springer.
Rosten, E., Porter, R., and Drummond, T. (2010). FASTER and better: A machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), 32:105--119.
Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011). ORB: an efficient alternative to SIFT or SURF. In IEEE Int. Conf. on Comp. Vision (ICCV).
Rusu, R., Blodow, N., and Beetz, M. (2009). Fast Point Feature Histograms (FPFH) for 3D registration. In IEEE Intl. Conf. on Robotics and Automation (ICRA).
Rusu, R., Marton, Z., Blodow, N., and Beetz, M. (2008a). Aligning Point Cloud Views using Persistent Feature Histograms. In IEEE Intl. Proc. on Intelligent Robots and Systems (IROS), pages 22--26.
Rusu, R., Marton, Z., Blodow, N., and Beetz, M. (2008b). Learning Informative Point Classes for the Acquisition of Object Model Maps. In Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision (ICARCV), pages 17--20.
Rusu, R., Marton, Z., Blodow, N., Dolha, M., and Beetz, M. (2008c). Towards 3D Point Cloud Based Object Maps for Household Environments. Robotics and Autonomous Systems Journal (Special Issue on Semantic Knowledge).
Rusu, R. B., Marton, Z. C., Blodow, N., and Beetz, M. (2008d). Persistent Point Feature Histograms for 3D Point Clouds. In Proceedings of the 10th International Conference on Intelligent Autonomous Systems (IAS-10).
Salvi, J., Matabosch, C., Fofi, D., and Forest, J. (2007). A review of recent range image registration methods with accuracy evaluation. Image and Vision Computing, 25(5):578--596.
Schmid, C. and Mohr, R. (1997). Local grayvalue invariants for image retrieval. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), 19:530--535.
Schwartz, W. R., Guo, H., and Davis, L. S. (2010). A Robust and Scalable Approach to Face Identification. In Proc. of the Europ. Conf. on Comp. Vision (ECCV), volume 6316 of Lecture Notes in Computer Science, pages 476--489.
Schwartz, W. R., Kembhavi, A., Harwood, D., and Davis, L. S. (2009). Human Detection Using Partial Least Squares Analysis. In IEEE Int. Conf. on Comp. Vision (ICCV), pages 24--31.
Sehgal, A., Cernea, D., and Makaveeva, M. (2010). Real-time scale invariant 3D range point cloud registration. In International Conference on Image Analysis and Recognition (ICIAR), pages I: 220--229.
Steder, B., Grisetti, G., and Burgard, W. (2010). Robust Place Recognition for 3D Range Data based on Point Features. In IEEE Intl. Conf. on Robotics and Automation (ICRA).
Steder, B., Rusu, R. B., Konolige, K., and Burgard, W. (2011). Point feature extraction on 3D range scans taking into account object boundaries. In IEEE Intl. Conf. on Robotics and Automation (ICRA).
Sturm, J., Magnenat, S., Engelhard, N., Pomerleau, F., Colas, F., Burgard, W., Cremers, D., and Siegwart, R. (2011). Towards a benchmark for RGB-D SLAM evaluation. In Proc. of the RGB-D Workshop on Advanced Reasoning with Depth Cameras at Robotics: Science and Systems Conf. (RSS).
Thrun, S. (2002). Particle filters in robotics. In Proceedings of the 17th Annual Conference on Uncertainty in AI (UAI).
Tombari, F., Salti, S., and Di Stefano, L. (2010). Unique signatures of histograms for local surface description. In Proc. of the Europ. Conf. on Comp. Vision (ECCV), ECCV'10, pages 356--369.
Tombari, F., Salti, S., and Di Stefano, L. (2011). A combined texture-shape descriptor for enhanced 3D feature matching. In IEEE Intl. Conf. on Image Processing (ICIP).
Tuytelaars, T. and Mikolajczyk, K. (2008). Local invariant feature detectors: a survey. Found. Trends. Comput. Graph. Vis., 3(3):177--280.
Vieira, T., Peixoto, A., Velho, L., and Lewiner, T. (2007). An iterative framework for registration with reconstruction. In Vision, Modeling, and Visualization 2007, pages 101--108.
Winkelbach, S., Molkenstruck, S., and Wahl, F. M. (2006). Low-Cost Laser Range Scanner and Fast Surface Registration Approach. In DAGM Symposium for Pattern Recognition, pages 718--728.
Zaharescu, A., Boyer, E., Varanasi, K., and Horaud, R. P. (2009). Surface Feature Detection and Description with Applications to Mesh Matching. In IEEE Conf. on Comp. Vision and Pattern Recog. (CVPR).
Zhang, Z., Deriche, R., Faugeras, O. D., and Luong, Q. T. (1995). A Robust Technique for Matching two Uncalibrated Images Through the Recovery of the Unknown Epipolar Geometry. Artificial Intelligence, 78(1-2):87--119.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "45kU2QUahWO",
"year": null,
"venue": "SCN 2012",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=45kU2QUahWO",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the Centrality of Off-Line E-Cash to Concrete Partial Information Games",
"authors": [
"Seung Geol Choi",
"Dana Dachman-Soled",
"Moti Yung"
],
"abstract": "Cryptography has developed numerous protocols for solving “partial information games” that are seemingly paradoxical. Some protocols are generic (e.g., secure multi-party computation) and others, due to the importance of the scenario they represent, are designed to solve a concrete problem directly. Designing efficient and secure protocols for (off-line) e-cash, e-voting, and e-auction are some of the most heavily researched concrete problems, representing various settings where privacy and correctness of the procedure is highly important. In this work, we initiate the exploration of the relationships among e-cash, e-voting and e-auction in the universal composability (UC) framework, by considering general variants of the three problems. In particular, we first define ideal functionalities for e-cash, e-voting, and e-auction, and then give a construction of a protocol that UC-realizes the e-voting (resp., e-auction) functionality in the e-cash hybrid model. This (black-box) reducibility demonstrates the centrality of off-line e-cash and implies that designing a solution to e-cash may bear fruits in other areas. Constructing a solution to one protocol problem based on a second protocol problem has been traditional in cryptography, but typically has concentrated on building complex protocols on simple primitives (e.g., secure multi-party computation from Oblivious Transfer, signature from one-way functions, etc.). The novelty here is reducibility among mature protocols and using the ideal functionality as a design tool in realizing other ideal functionalities. We suggest this new approach, and we only consider the very basic general properties from the various primitives to demonstrate its viability. Namely, we only consider the basic coin e-cash model, the e-voting that is correct and private and relies on trusted registration, and e-auction relying on a trusted auctioneer. Naturally, relationships among protocols with further properties (i.e., extended functionalities), using the approach advocated herein, are left as open questions.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "UnyXP5oxyB",
"year": null,
"venue": "ICALT 2004",
"pdf_link": "https://ieeexplore.ieee.org/iel5/9382/29792/01357650.pdf",
"forum_link": "https://openreview.net/forum?id=UnyXP5oxyB",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Open Hypermedia Management for E-learning in the Humanities",
"authors": [
"Ralf Klamma",
"Marc Spaniol"
],
"abstract": "Knowledge-based hypermedia systems are used in scholarly communication since more than 2000 years, e.g. the Babylonian Talmud. Many different computer-based hypermedia systems have been successfully created for the humanities, too. But, cross-university or even university-wide use of these systems for e-learning purposes in the humanities is seldom if ever. In this paper, we argue, that only a deep analysis of hypermedia usage in scholarly communication and teaching can disclose non-trivial technical and community-aware requirements for open e-learning systems in the humanities. With these requirements we analyze some of the existing solutions and provide a case study for a cross-university hypermedia e-learning system called MECCA.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rJapli7sx",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=rJapli7sx",
"arxiv_id": null,
"doi": null
}
|
{
"title": "nice method and analysis",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "di__5nsIBjw",
"year": null,
"venue": "EACL 2021",
"pdf_link": "https://aclanthology.org/2021.eacl-main.228.pdf",
"forum_link": "https://openreview.net/forum?id=di__5nsIBjw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Do Syntax Trees Help Pre-trained Transformers Extract Information?",
"authors": [
"Devendra Singh Sachan",
"Yuhao Zhang",
"Peng Qi",
"William L. Hamilton"
],
"abstract": "Much recent work suggests that incorporating syntax information from dependency trees can improve task-specific transformer models. However, the effect of incorporating dependency tree information into pre-trained transformer models (e.g., BERT) remains unclear, especially given recent studies highlighting how these models implicitly encode syntax. In this work, we systematically study the utility of incorporating dependency trees into pre-trained transformers on three representative information extraction tasks: semantic role labeling (SRL), named entity recognition, and relation extraction. We propose and investigate two distinct strategies for incorporating dependency structure: a late fusion approach, which applies a graph neural network on the output of a transformer, and a joint fusion approach, which infuses syntax structure into the transformer attention layers. These strategies are representative of prior work, but we introduce additional model design elements that are necessary for obtaining improved performance. Our empirical analysis demonstrates that these syntax-infused transformers obtain state-of-the-art results on SRL and relation extraction tasks. However, our analysis also reveals a critical shortcoming of these models: we find that their performance gains are highly contingent on the availability of human-annotated dependency parses, which raises important questions regarding the viability of syntax-augmented transformers in real-world applications.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics , pages 2647–2661\nApril 19 - 23, 2021. ©2021 Association for Computational Linguistics2647Do Syntax Trees Help Pre-trained Transformers Extract Information?\nDevendra Singh Sachan1;2, Yuhao Zhang3, Peng Qi3, William Hamilton1;2\n1Mila - Quebec AI Institute\n2School of Computer Science, McGill University\n3Stanford University\[email protected], [email protected]\n{yuhaozhang, pengqi}@stanford.edu\nAbstract\nMuch recent work suggests that incorporat-\ning syntax information from dependency trees\ncan improve task-specific transformer models.\nHowever, the effect of incorporating depen-\ndency tree information into pre-trained trans-\nformer models (e.g., BERT) remains unclear,\nespecially given recent studies highlighting\nhow these models implicitly encode syntax.\nIn this work, we systematically study the util-\nity of incorporating dependency trees into pre-\ntrained transformers on three representative in-\nformation extraction tasks: semantic role label-\ning (SRL), named entity recognition, and rela-\ntion extraction.\nWe propose and investigate two distinct strate-\ngies for incorporating dependency structure: a\nlate fusion approach, which applies a graph\nneural network on the output of a transformer,\nand a joint fusion approach, which infuses syn-\ntax structure into the transformer attention lay-\ners. These strategies are representative of prior\nwork, but we introduce additional model de-\nsign elements that are necessary for obtaining\nimproved performance. Our empirical anal-\nysis demonstrates that these syntax-infused\ntransformers obtain state-of-the-art results on\nSRL and relation extraction tasks. However,\nour analysis also reveals a critical shortcom-\ning of these models: we find that their perfor-\nmance gains are highly contingent on the avail-\nability of human-annotated dependency parses,\nwhich raises important questions regarding the\nviability of syntax-augmented transformers in\nreal-world applications.1\n1 Introduction\nDependency trees—a form of syntactic represen-\ntation that encodes an asymmetric syntactic rela-\ntion between words in a sentence, such as sub-\n1Our code is available at: https://github.com/\nDevSinghSachan/syntax-augmented-bertjectoradverbial modifier —have proven very use-\nful in various NLP tasks. For instance, features\ndefined in terms of the shortest path between\nentities in a dependency tree were used in rela-\ntion extraction (RE) (Fundel et al., 2006; Björne\net al., 2009), parse structure has improved named\nentity recognition (NER) (Jie et al., 2017), and\njoint parsing was shown to benefit semantic role\nlabeling (SRL) (Pradhan et al., 2005) systems.\nMore recently, dependency trees have also led to\nmeaningful performance improvements when in-\ncorporated into neural network models for these\ntasks. Popular encoders to include dependency tree\ninto neural models include graph neural networks\n(GNNs) for SRL (Marcheggiani and Titov, 2017)\nand RE (Zhang et al., 2018), and biaffine attention\nin transformers for SRL (Strubell et al., 2018).\nIn parallel, there has been a renewed interest in\ninvestigating self-supervised learning approaches\nto pre-training neural models for NLP, with recent\nsuccesses including ELMo (Peters et al., 2018),\nGPT (Radford et al., 2018), and BERT (Devlin\net al., 2019). 
Of late, the BERT model based on pre-training of a large transformer model (Vaswani et al., 2017) to encode bidirectional context has emerged as a dominant paradigm, thanks to its improved modeling capacity which has led to state-of-the-art results in many NLP tasks.
BERT's success has also attracted attention to what linguistic information its internal representations capture. For example, Tenney et al. (2019) attribute different linguistic information to different BERT layers; Clark et al. (2019) analyze BERT's attention heads to find syntactic dependencies; Hewitt and Manning (2019) show evidence that BERT's hidden representation embeds syntactic trees. However, it remains unclear if this linguistic information helps BERT in downstream tasks during finetuning or not. Further, it is not evident if external syntactic information can further improve BERT's performance on downstream tasks.
In this paper, we investigate the extent to which pre-trained transformers can benefit from integrating external dependency tree information. We perform the first systematic investigation of how dependency trees can be incorporated into pre-trained transformer models, focusing on three representative information extraction tasks where dependency trees have been shown to be particularly useful for neural models: semantic role labeling (SRL), named entity recognition (NER), and relation extraction (RE).
We propose two representative approaches to integrate dependency trees into pre-trained transformers (i.e., BERT) using syntax-based graph neural networks (syntax-GNNs). The first approach involves a sequential assembly of a transformer and a syntax-GNN, which we call Late Fusion, while the second approach interleaves syntax-GNN embeddings within transformer layers, termed Joint Fusion. These approaches are inspired by recent work that combines transformers with external input, but we introduce design elements such as the alignment between dependency tree and BERT wordpieces that lead to obtaining strong performance. Comprehensive experiments using these approaches reveal several important insights:
• Both our syntax-augmented BERT models achieve a new state-of-the-art on the CoNLL-2005 and CoNLL-2012 SRL benchmarks when the gold trees are used, with the best variant outperforming a fine-tuned BERT model by over 3 F1 points on both datasets. The Late Fusion approach also provides performance improvements on the TACRED relation extraction dataset.
• These performance gains are consistent across different pre-trained transformer approaches of different sizes (i.e. BERT BASE/LARGE and RoBERTa BASE/LARGE).
• The Joint Fusion approach that interleaves GNNs with BERT achieves higher performance improvements on SRL, but it is also less stable and more prone to errors when using noisy dependency tree inputs such as for the RE task, where Late Fusion performs much better, suggesting complementary strengths from both approaches.
• In the SRL task, the performance gains of both approaches are highly contingent on the availability of human-annotated parses for both training and inference, without which the performance gains are either marginal or non-existent.
In the NER task, even the gold trees don't show performance improvements.
Although our work does obtain new state-of-the-art results on SRL tasks by introducing dependency tree information from syntax-GNNs into BERT, our most important result is somewhat negative and cautionary: the performance gains are only substantial when human-annotated parses are available. Indeed, we find that even high-quality automated parses generated by domain-specific parsers do not suffice, and we are only able to achieve meaningful gains with human-annotated parses. This is a critical finding for future work, especially for SRL, as researchers routinely develop models with human-annotated parses, with the implicit expectation that models will generalize to high-quality automated parses.
Finally, our analysis provides indirect evidence that pre-trained transformers do incorporate sufficient syntactic information to achieve strong performance on downstream tasks. While human-annotated parses can still help greatly, with our proposed models it appears that the knowledge in automatically extracted syntax trees is largely redundant with the implicit syntactic knowledge learned by pre-trained models such as BERT.

2 Models

In this section, we will first briefly review the transformer encoder, then describe the graph neural network (GNN) that learns syntax representations using dependency tree input, which we term the syntax-GNN. Next, we will describe our syntax-augmented BERT models that incorporate such representations learned from the GNN.

2.1 Transformer Encoder

The transformer encoder (Vaswani et al., 2017) consists of three core modules in sequence: an embedding layer, multiple encoder layers, and a task-specific output layer. The core elements in these modules are different sets of learnable weight matrices that perform linear transformations. The embedding layer consists of wordpiece embeddings, positional embeddings, and segment embeddings (Devlin et al., 2019). After embedding lookup, these three embeddings are added to obtain token embeddings for an input sentence. The encoder layers then transform the input token embeddings to hidden state representations. Each encoder layer consists of two sublayers, multi-head dot-product self-attention and a feed-forward network, which will be covered in the following section. Finally, the output layer is task-specific and consists of a one-layer feed-forward network.

[Figure 1: Block diagram illustrating the syntax-GNN applied over a sentence's dependency tree. In the example shown, for the word "have", the graph-attention sublayer aggregates representations from its three adjacent nodes in the dependency graph.]

2.2 Syntax-GNN: Graph Neural Network over a Dependency Tree

A dependency tree can be considered as a multi-attribute directed graph where the nodes represent words and the edges represent the dependency relation between the head and dependent words. To learn useful syntax representations from the dependency tree structure, we apply graph neural networks (GNNs) (Hamilton et al., 2017; Battaglia et al., 2018) and henceforth call our model syntax-GNN.
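For concreteness, this graph view of a parse is easy to materialize: given per-token head indices and relation labels, the directed edge list consumed by a graph-attention layer can be built in a few lines. The sketch below is illustrative only; the helper name and data layout are assumptions, not the released implementation.

# Illustrative helper: converts a dependency parse, given as 1-indexed head
# indices (0 = root) and relation labels, into a directed edge list
# (head -> dependent) for a graph-attention layer.
def parse_to_edges(heads, deprels):
    """heads[i] is the 1-indexed head of token i (0 for the root)."""
    edges = []  # (head_index, dependent_index, relation) triples, 0-indexed
    for dep, (head, rel) in enumerate(zip(heads, deprels)):
        if head == 0:  # skip the artificial root node
            continue
        edges.append((head - 1, dep, rel))
    return edges

# Example: "We saw her" with heads [2, 0, 2] and relations
# ["nsubj", "root", "obj"] yields [(1, 0, "nsubj"), (1, 2, "obj")].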
Our syntax-GNN encoder, as shown in Figure 1, is a variation of the transformer encoder where the self-attention sublayer is replaced by graph attention (Veličković et al., 2018). Self-attention can also be considered as a special case of graph attention where each word is connected to all the other words in the sentence.

Let $V = \{v_i \in \mathbb{R}^d\}_{i=1:N_v}$ denote the input node embeddings and $E = \{(e_k, i, j)\}_{k=1:N_e}$ denote the edges in the dependency tree, where the edge $e_k$ is incident on nodes $i$ and $j$. Each layer in our syntax-GNN encoder consists of two sublayers: graph attention and a feed-forward network.

First, interaction scores $s_{ij}$ are computed for all the edges by performing a dot product on the adjacent linearly transformed node embeddings:

$s_{ij} = (v_i W_Q)(v_j W_K)^\top$.   (1)

The terms $v_i W_Q$ and $v_i W_K$ are also known as the query and key vectors respectively. Next, an attention score $\alpha_{ij}$ is computed for each node by applying softmax over the interaction scores from all its connecting edges:

$\alpha_{ij} = \frac{\exp(s_{ij})}{\sum_{k \in N_i} \exp(s_{ik})}$,   (2)

where $N_i$ refers to the set of nodes connected to the $i$-th node. The graph attention output $z_i$ is computed by the aggregation of attention scores followed by a linear transformation:

$z_i = \big(\sum_{j \in N_i} \alpha_{ij} (v_j W_V)\big) W_F$.   (3)

The term $v_j W_V$ is also referred to as the value vector. Subsequently, the message $z_i$ is passed to the second sublayer, which consists of a two-layer fully connected feed-forward network with GELU activation (Hendrycks and Gimpel, 2016):

$\mathrm{FFN}(z_i) = \mathrm{GELU}(z_i W_1 + b_1) W_2 + b_2$.   (4)

The FFN sublayer outputs are given as input to the next layer. In the above equations, $W_K$, $W_V$, $W_Q$, $W_F$, $W_1$, $W_2$ are trainable weight matrices and $b_1$, $b_2$ are bias parameters. Additionally, layer normalization (Ba et al., 2016) is applied to the input, and residual connections (He et al., 2016) are added to the output of each sublayer.

2.2.1 Dependency Tree over Wordpieces

As BERT models take as input subword units (also known as wordpieces) instead of linguistic tokens, this also necessitates extending the definition of a dependency tree to include wordpieces. For this, we introduce additional edges in the original dependency tree by defining new edges from the first subword (head word) of a token to the remaining subwords (tail words) of the same token.

[Figure 2: Block diagrams illustrating our proposed syntax-augmented BERT models: (a) Late Fusion and (b) Joint Fusion. Weights shown in color are pre-trained, while those not colored are either non-parameterized operations or have randomly initialized weights. The inputs to each of these models are wordpiece embeddings, while their output goes to task-specific output layers. In subfigure 2b, N× indicates that there are N layers, each of which is passed the same set of syntax-GNN hidden states.]

2.3 Syntax-Augmented BERT

In this section, we propose parameter augmentations over the BERT model to best incorporate syntax information from a syntax-GNN. To this end, we introduce two models: Late Fusion and Joint Fusion. These models represent novel mechanisms, inspired by previous work, through which syntax-GNN features are incorporated at different sublayers of BERT (Figure 2). We refer to these models as Syntax-Augmented BERT (SA-BERT) models.
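Before turning to the two fusion variants, the graph-attention sublayer of Eqs. (1)-(3) can be summarized in a short PyTorch-style sketch. This is illustrative only: the class name, tensor layout, single-head formulation, and masking scheme are assumptions, and dropout, layer normalization, residual connections, and the FFN sublayer of Eq. (4) are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionSublayer(nn.Module):
    # Sketch of Eqs. (1)-(3): attention is restricted to dependency-graph
    # neighbours via an adjacency mask instead of full self-attention.
    def __init__(self, d_model):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)
        self.w_f = nn.Linear(d_model, d_model, bias=False)

    def forward(self, v, adj):
        # v: (num_nodes, d_model) node embeddings
        # adj: (num_nodes, num_nodes) boolean matrix; adj[i, j] is True if
        #      node j is connected to node i (every row needs >= 1 True entry)
        s = self.w_q(v) @ self.w_k(v).transpose(0, 1)   # Eq. (1)
        s = s.masked_fill(~adj, float("-inf"))          # keep graph edges only
        alpha = F.softmax(s, dim=-1)                    # Eq. (2)
        return self.w_f(alpha @ self.w_v(v))            # Eq. (3)

In the full model this block would be wrapped with the residual connections, layer normalization, and FFN components described above.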
During the finetuning step, the new parameters in each model are randomly initialized while the existing parameters are initialized from pre-trained BERT.

Late Fusion: In this model, we feed the BERT contextual representations to the syntax-GNN encoder, i.e., the syntax-GNN is stacked over BERT (Figure 2a). We also use a Highway Gate (Srivastava et al., 2015) at the output of the syntax-GNN encoder to adaptively select useful representations for the training task. Concretely, if $v_i$ and $z_i$ are the representations from BERT and the syntax-GNN respectively, then the output $h_i$ after the gating layer is computed as

$g_i = \sigma(W_g v_i + b_g)$,   (5)
$h_i = g_i \odot v_i + (1 - g_i) \odot z_i$,   (6)

where $\sigma$ is the sigmoid function $1/(1 + e^{-x})$ and $W_g$ is a learnable parameter. Finally, we map the output representations to linguistic space by adding the hidden states of all the wordpieces that map to the same linguistic token.

Joint Fusion: In this model, syntax representations are incorporated within the self-attention sublayer of BERT. The motivation is to jointly attend over both syntax and BERT representations. First, the syntax-GNN representations are computed from the input token embeddings and its final-layer hidden states are passed to BERT. Second, as shown in Figure 2b, the syntax-GNN hidden states are linearly transformed using weights $P_K$, $P_V$ to obtain additional syntax-based key and value vectors. Third, the syntax-based key and value vectors are added to the key and value vectors of BERT's self-attention sublayer, respectively. Fourth, the query vector in the self-attention layer now attends over this set of keys and values, thereby augmenting the model's ability to fuse syntax information. Overall, in this model, we introduce two new sets of weights per layer, {$P_K$, $P_V$}, which are randomly initialized.

3 Experimental Setup

3.1 Tasks and Datasets

For our experiments, we consider information extraction tasks for which dependency trees have been extensively used in the past to improve model performance. Below, we provide a brief description of these tasks and the datasets used and refer the reader to Appendix A.1 for full details.

Semantic Role Labeling (SRL) In this task, the objective is to assign semantic role labels to text spans in a sentence such that they answer the query: Who did what to whom and when? Specifically, for every target predicate (verb) of a sentence, we detect syntactic constituents (arguments) and classify them into predefined semantic roles. In our experiments, we study the setting where the predicates are given and the task is to predict the arguments. We use the CoNLL-2005 SRL corpus (Carreras and Màrquez, 2005) and the CoNLL-2012 OntoNotes dataset, which contains PropBank-style annotations for predicates and their arguments, and also includes POS tags and constituency parses.

Named Entity Recognition (NER) NER is the task of recognizing entity mentions in text and tagging them with entity categories. We use the OntoNotes 5.0 dataset (Pradhan et al., 2012), which contains 18 named entity types.

Relation Extraction (RE) RE is the task of predicting the relation between the two entity mentions in a sentence.
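For RE, the task-specific output layer described in Appendix A.1 max-pools the encoder hidden states into a sentence representation and concatenates it with the entity representations before a final classification layer. A minimal sketch of such a head is shown below; the span-pooling choice and the class name are assumptions rather than the exact released implementation.

import torch
import torch.nn as nn

class RelationClassifierHead(nn.Module):
    # Illustrative RE head: sentence vector from max-pooling over hidden
    # states, concatenated with pooled subject/object span representations.
    def __init__(self, d_model, num_relations):
        super().__init__()
        self.classifier = nn.Linear(3 * d_model, num_relations)

    def forward(self, hidden, subj_mask, obj_mask):
        # hidden: (seq_len, d_model); *_mask: boolean (seq_len,) span masks
        # (both masks are assumed to be non-empty)
        sent = hidden.max(dim=0).values
        subj = hidden[subj_mask].max(dim=0).values
        obj = hidden[obj_mask].max(dim=0).values
        return self.classifier(torch.cat([sent, subj, obj], dim=-1))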
We use the label corrected version of\nthe TACRED dataset (Zhang et al., 2017; Alt et al.,\n2020), which contains 41 relation types as well as a\nspecial no_relation class indicating that no relation\nexists between the two entities.\n3.2 Training Details\nWe select bert-base-cased to be our reference pre-\ntrained baseline model.3It consists of 12 layers,\n12 attention heads, and 768 model dimensions. In\nboth the variants, the syntax-GNN component con-\nsists of 4 layers, while other configurations are kept\nthe same as bert-base . Also, for the Joint Fusion\nmethod, syntax-GNN hidden states were shared\nacross different layers. It is worth noting that as our\nobjective is to assess if the use of dependency trees\ncan provide performance gains over pre-trained\ntransformer models, it is important to tune the hy-\nperparameters of these baseline models to obtain\nstrong reference scores. Therefore, for each task,\nduring the finetuning step, we tune the hyperpa-\nrameters of the default bert-base model and use\nthe same hyperparameters to train the SA-BERT\nmodels. We refer the reader to Appendix A.2 for\nadditional training details.\n4 Results and Analysis\nIn this section, we present our main empirical anal-\nyses and key findings.\n2conll.cemantix.org/2012/data.html\n3bert-base configuration was preferred due to computa-\ntional reasons and we found that bert-cased provided substan-\ntial gains over bert-uncased in the NER task.Test Set P R F 1\nBaseline Models (without dependency parses)\nSA+GloVey84.17 83.28 83.72\nSA+ELMoy86.21 85.98 86.09\nBERT BASE 86.97 88.01 87.48\nGold Dependency Parses\nLate Fusion 89.17 91.09 90.12\nJoint Fusion 90.59 91.35 90.97\nTable 1: SRL results on the CoNLL-2005 WSJ test\nset averaged over 5independent runs. ymarks results\nfrom Strubell et al. (2018).\nTest Set P R F 1\nBaseline Models (without dependency parses)\nSA+GloVey82.55 80.02 81.26\nSA+ELMoy84.39 82.21 83.28\nDeep-LSTM+ELMoz- - 84.60\nStructure-distilled BERT\u0003- - 86.39\nBERT BASE 85.91 87.07 86.49\nGold Dependency Parses\nLate Fusion 88.06 90.32 89.18\nJoint Fusion 89.34 90.44 89.89\nTable 2: SRL results on the CoNLL-2012 test set\naveraged over 5independent runs. ymarks results\nfrom Strubell et al. (2018); zmark result from Peters\net al. (2018);\u0003mark result from Kuncoro et al. (2020).\n4.1 Benchmark Performance\nTo recap, our two proposed variants of the Syntax-\nAugmented BERT models in Section 2.3 mainly\ndiffer at the position where syntax-GNN outputs\nare fused with the BERT hidden states. Following\nthis, we first compare the effectiveness of these\nvariants on all the three tasks, comparing against\nprevious state-of-the-art systems such as (Strubell\net al., 2018; Jie and Lu, 2019; Zhang et al., 2018),\nwhich are outlined in Appendix B due to space\nlimitations. For this part we use gold dependency\nparses to train the models for SRL and NER, and\npredicted parses for RE, since gold dependency\nparses are not available for TACRED.\nWe present our main results for SRL in Table 1\nand Table 2, NER in Table 3, and RE in Table 4.\nAll these results report average performance over\nfive runs with different random seeds. First, we\nnote that for all the tasks, our bert-base baseline is\nquite strong and is directly competitive with other\nstate-of-the-art models.\nWe observe that both the Late Fusion and Joint\nFusion variants of our approach yielded the best\nresults in the SRL tasks. 
Specifically, on CoNLL-\n2652Test Set P R F 1\nBaseline Models (without dependency parses)\nBiLSTM-CRF+ELMoy88.25 89.71 88.98\nBERT BASE 88.75 89.61 89.18\nGold Dependency Parses\nDGLSTM-CRF+ELMoy89.59 90.17 89.88\nLate Fusion 88.75 89.19 88.97\nJoint Fusion 88.58 89.31 88.94\nTable 3: NER results on the OntoNotes-5.0 test set\naveraged over 5independent runs. ymarks results\nfrom Jie and Lu (2019).\nTest Set P R F 1\nBaseline Models (without dependency parses)\nBERT BASE 78.04 76.36 77.09\nStanford CoreNLP Dependency Parses\nGCNy74.2 69.3 71.7\nGCN+BERT BASEy74.8 74.1 74.5\nLate Fusion 78.55 76.29 77.38\nJoint Fusion 70.22 75.12 72.52\nTable 4: Relation extraction results on the revised TA-\nCRED test set (Alt et al., 2020), averaged over 5 in-\ndependent runs.ymarks results reported by Alt et al.\n(2020).\n2005 and CoNLL-2012 SRL, Joint Fusion im-\nproves over bert-base by an absolute 3:5F1points,\nwhile Late Fusion improves over bert-base by2:65\nF1points. On the RE task, the Late Fusion model\nimproves over bert-base by approximately 0:3F1\npoints while the Joint Fusion model leads to a drop\nof4:5F1points in performance (which we suspect\nis driven by the longer sentence lengths observed in\nTACRED). On NER, the SA-BERT models lead to\nno performance improvements as their scores lies\nwithin one standard deviation to that of bert-base .\nOverall, we find that syntax information is most\nuseful to the pre-trained transformer models in the\nSRL task , especially when intermixing the interme-\ndiate representations of BERT with representations\nfrom the syntax-GNN. Moreover, when the fusion\nis done after the final hidden layer of the pre-trained\nmodels, apart from providing good gains on SRL,\nit also provides small gains on RE task. We further\nnote that, as we trained all our syntax-augmented\nBERT models using the same hyperparameters as\nthat of bert-base , it is possible that separate hy-\nperparameter tuning would further improve their\nperformance.4.2 Impact of Parsing Quality\nIn this part, we study to what extent parsing quality\ncan affect the performance results of the syntax-\naugmented BERT models. Specifically, following\nexisting work, we compare the effect of using parse\ntrees from three different sources: (a) gold syntac-\ntic annotations4; (b) a dependency parser trained\nusing gold, in-domain parses5; and (c) available\noff-the-shelf NLP toolkits.6In previous work, it\nwas shown that using in-domain parsers can pro-\nvide good improvements on SRL (Strubell et al.,\n2018) and NER tasks (Jie and Lu, 2019), and the\nperformance can be further improved when gold\nparses were used at test time. Meanwhile, in many\npractical settings where gold parses are not read-\nily available, the only option is to use parse trees\nproduced by existing NLP toolkits, as was done\nby Zhang et al. (2018) for RE. In these cases, since\nthe parsers are trained on a different domain of\ntext, it is unclear if the produced trees, when used\nwith the SA-BERT models, can still lead to per-\nformance gains. Motivated by these observations,\nwe investigate to what extent gold,in-domain , and\noff-the-shelf parses can improve performance over\nstrong BERT baselines.\nComparing off-the-shelf and gold parses. We\nreport our findings on the CoNLL-2005 SRL\n(Table 5), CoNLL-2012 SRL (Table 6), and\nOntoNotes-5.0 NER (Table 7) tasks. 
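The off-the-shelf parses in these comparisons come from the Stanza toolkit (Qi et al., 2020), as discussed below. For reference, a minimal sketch of obtaining UD dependency heads for pre-tokenized input with Stanza follows; the pipeline configuration shown is illustrative and may differ from the exact experimental setup.

import stanza

# stanza.download("en") is required once before first use.
# The processor list is an illustrative configuration, not necessarily the
# exact one used for the reported numbers.
nlp = stanza.Pipeline(lang="en",
                      processors="tokenize,pos,lemma,depparse",
                      tokenize_pretokenized=True)

tokens = ["Olivetti", "reportedly", "began", "shipping", "these", "tools"]
doc = nlp([tokens])
for word in doc.sentences[0].words:
    # word.head is the 1-indexed head position (0 = root)
    print(word.id, word.text, word.head, word.deprel)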
Using gold\nparses, both the Late Fusion and Joint Fusion mod-\nels obtain greater than 2.5 F 1improvement on SRL\ntasks compared with bert-base while we don’t ob-\nserve significant improvements on NER. We further\nnote that as the gold parses are produced by expert\nhuman annotators, these results can be considered\nas the attainable performance ceiling from using\nparse trees in these models.\nWe also observe that using off-the-shelf parses\nfrom the Stanza toolkit (Qi et al., 2020) provides\nlittle to no gains in F 1scores (see Tables 5 and 7).\nThis is mainly due to the low in-domain accuracy of\nthe predicted parses. For example, on the CoNLL-\n4We use Stanford head rules (de Marneffe and Manning,\n2008) implemented in Stanford CoreNLP v4.0.0 (Manning\net al., 2014) to convert constituency trees to dependency trees\nin UDv2 format (Nivre et al., 2020).\n5The difference between settings (a) and (b) is during test\ntime. In (a) gold parses are used for both training and test\ninstances while in (b) gold parses are used for training, while\nduring test time, parses are extracted from a dependency parser\nwhich was trained using gold parses.\n6In this setting, the parsers are trained on general datasets\nsuch as the Penn Treebank or the English Web Treebank.\n2653Test Set P R F 1\nStanza Dependency Parses (UAS: 84.20)\nLate Fusion 86.85 88.06 87.45\nJoint Fusion 86.87 87.85 87.36\nIn-domain Dependency Parses (UAS: 92.66)\nLISA+GloVey85.53 84.45 84.99\nLISA+ELMoy87.13 86.67 86.90\nLate Fusion 86.80 87.98 87.39\nJoint Fusion 87.09 87.95 87.52\nGold Dependency Parses\nLate Fusion 89.17 91.09 90.12\nJoint Fusion 90.59 91.35 90.97\nTable 5: SRL results with different parses on the\nCoNLL-2005 WSJ test set averaged over 5independent\nruns.ymarks results from Strubell et al. (2018).\n2005 SRL test set the UAS is 84.2% for the Stanza\nparser, which is understandable as the parser was\ntrained on the EWT corpus which covers a different\ndomain.\nIn a more fine-grained error analysis, we also ex-\namined the correlation between parse quality and\nperformance on individual examples on CoNLL-\n2005 (Figures 3a and 3b), finding a mild but signifi-\ncant positive correlation between parse quality and\nrelative model performance when training and test-\ning with Stanza parses (Figure 3a). Interestingly,\nwe found that this correlation between parse quality\nand validation performance is much stronger when\nwe train a model on gold parses but then evaluate\nwith noisy Stanza parses (Figure 3b). This suggests\nthat the model trained on noisy parses tends to rely\nless on the noisy dependency tree inputs, while\nthe model trained on gold parses is more sensitive\nto the external syntactic input. This correlation is\nfurther reinforced by our manual error analysis pre-\nsented in Appendix C (Figures 4 and 5), where we\nshow how the erroneous edges in the Stanza parses\ncan lead to incorrect predictions of the SRL tags.\nDo in-domain parses help? Lastly, for the setting\nof using in-domain parses, we only evaluate on\nthe SRL task, since on the NER task even using\ngold parses does not yield substantial gain. We\ntrain a biaffine parser (Dozat and Manning, 2017)\non the gold parses from the CoNLL-2005 train-\ning set and obtain parse trees from it at test time.\nWe observe that while the obtained parse trees are\nfairly accurate (with a UAS of 92.6% on the test\nset), it leads to marginal or no improvements over\nbert-base . This finding is also similar to the re-\nsults obtained by Strubell et al. 
(2018), where theirTest Set P R F 1\nStanza Dependency Parses (UAS: 82.73)\nLate Fusion 85.74 87.18 86.45\nJoint Fusion 85.94 87.05 86.49\nIn-domain Dependency Parses (UAS: 93.60)\nLate Fusion 86.06 86.90 86.48\nJoint Fusion 85.75 86.92 86.33\nGold Dependency Parses\nLate Fusion 88.06 90.32 89.18\nJoint Fusion 89.34 90.44 89.89\nTable 6: SRL results with different parses on the\nCoNLL-2012 test set.\nTest Set P R F 1\nStanza Dependency Parses (UAS: 83.91)\nLate Fusion 88.83 89.42 89.12\nJoint Fusion 88.56 89.38 88.97\nIn-domain Dependency Parses (UAS: 96.10)\nDGLSTM-CRF+ELMoy– – 89.64\nGold Dependency Parses\nDGLSTM-CRF+ELMoy89.59 90.17 89.88\nLate Fusion 88.75 89.19 88.97\nJoint Fusion 88.58 89.31 88.94\nTable 7: NER results with different parses on the\nOntoNotes-5.0 test set averaged over 5independent\nruns.ymarks results from Jie and Lu (2019).\nLISA+ELMo model only obtains a relatively small\nimprovement over SA+ELMo. We hypothesize\nthat as the accuracy of the predicted parses further\nincreases, the F 1scores would be closer to that\nfrom using the gold parses. One possible reason\nfor these marginal gains from using the in-domain\nparses is that as they are still imperfect, the errors\nin the parse edges is forcing the model to ignore\nthe syntax information.\nOverall, we conclude that parsing quality has a\ndrastic impact on the performance of the Syntax-\nAugmented BERT models, with substantial gains\nonly observed when gold parses are used .\n5 Generalizing to BERT Variants\nOur previous results used the bert-base setting,\nwhich is a relatively small configuration among pre-\ntrained models. Devlin et al. (2019) also proposed\nlarger model settings ( bert-large7,whole-word-\nmasking8) that outperformed bert-base in all the\nbenchmark tasks. More recently, Liu et al. (2019)\n724 layers, 16 attention heads, 1024 model dimensions\n8https://bit.ly/3l7rbXx\n2654\n20 40 60 80 100\nUAS−100−50050Stanza Parse F1 - Gold Parse F1\n(a)When models are trained using Stanza and gold parses,\nwe observe a small positive correlation between F 1difference\nand UAS, suggesting that as UAS of Stanza parse increases,\nthe model makes less errors. The slope of the fitted linear\nregression model is 0.075 and the intercept is -9.27.\n20 40 60 80 100\nUAS−100−50050Stanza Parse F1 - Gold Parse F1\n(b)Inference is done using Stanza parses on a model trained\nwith gold parses. The slope of the fitted linear regression\nmodel is 0.345 and the intercept is -38.9.\nFigure 3: Correlation between parse quality and differ-\nence in F 1scores on CoNLL-2005 SRL WSJ dataset.\nproposed RoBERTa, a better-optimized variant of\nBERT that demonstrated improved results. A re-\nsearch question that naturally arises is: Is syntactic\ninformation equally useful for these more powerful\npre-trained transformers, which were pre-trained\nin a different way than bert-base? To answer this,\nwe finetune these models—with and without Late\nFusion—on the CoNLL-2005 SRL task using gold\nparses and report their performance in Table 8.9\nAs expected, we observe that both bert-large\nandbert-wwm models outperform bert-base , likely\ndue to the larger model capacity from increased\nwidth and more layers. Our Late Fusion model\nconsistently improves the results over the underly-\ning BERT models by about 2.2 F 1. The RoBERTa\nmodels achieve improved results compared with\nthe BERT models. And again, our Late Fusion\nmodel further improves the RoBERTa results by\nabout 2 F 1. 
Thus, it is evident that the gains from\nthe Late Fusion model generalize to other widely\nused pre-trained transformer models .\n9We use the Late Fusion model with gold parses in this\nsection, as it is computationally more efficient to train than\nJoint Fusion model.Gold Parses P R F 1\nBERT\nBERT LARGE 88.14 88.84 88.49\nLate Fusion 89.86 91.57 90.70\nBERT WWM 88.04 88.87 88.45\nLate Fusion 89.88 91.63 90.75\nRoBERTa\nRoBERTa LARGE 89.14 89.90 89.47\nLate Fusion 90.89 92.08 91.48\nTable 8: SRL results from using different pre-trained\nmodels on the CoNLL-2005 WSJ test set averaged\nover 5independent runs. WWM indicates the whole-\nwordpiece-masking.\n6 Generalizing to Out-of-Domain Data\nIn real-world applications, NLP systems are of-\nten used in a domain different from training. And\nit was previously shown that many NLP systems,\nsuch as information extraction systems, suffer from\nsubstantial performance degradation when applied\nto out-of-domain data (Huang and Yates, 2010).\nWhile it is evident that syntax trees may help mod-\nels generalize to out-of-domain data (Wang et al.,\n2017), since the inductive biases introduced by\nthese trees are invariant across domains, it is un-\nclear if this hypothesis holds for more recent pre-\ntrained models. To study this, we run experiments\non SRL with the CoNLL-2005 SRL corpus because\nthis is where we have access to both in-domain and\nout-of-domain test data using the same annotation\nschema. The training set of this corpus contains\nWSJ articles from the newswire domain and the test\nset consists of two splits: WSJ articles (in-domain)\nand Brown corpus10(out-of-domain). For train-\ning, we use both BERT and RoBERTa pre-trained\nmodels and leverage gold parses in syntax-GNN\nmodels.\nFrom the results in Table 9, the utility of syntax-\nGNN is evident, as we find that the Late Fusion\nmodel always improves over its corresponding\nBERT and RoBERTa baselines by 2-3% relative F 1,\nwith RoBERTa-large based Late Fusion achieving\nthe best F 1on both WSJ and Brown datasets. We\nalso compare the performance across both domains,\nwith the last column showing the relative drop in\nthe F 1score between WSJ and Brown datasets.\nWe observe that the performance of all models\ndrops substantially on the Brown set. However,\ncompared with randomly initialized transformer\n10contains text from 15 genres (Francis and Kucera, 1979)\n2655WSJ Test Brown Test\nGold Parses F 1 %\u0001 F1 %\u0001 %r\nBaseline Models\nSA+GloVey84.5 73.1 13.5\nLISA+GloVey86.0 1.8 76.5 4.7 11.0\nBERT\nBERT BASE 87.5 81.5 6.9\nLate Fusion 90.1 3.0 83.9 2.9 6.9\nBERT LARGE 88.5 82.5 6.8\nLate Fusion 90.8 2.6 84.6 2.5 6.8\nRoBERTa\nRoBERTa LARGE 89.5 84.0 6.1\nLate Fusion 91.5 2.2 85.5 1.8 6.6\nTable 9: Out-of-domain SRL results on the CoNLL-\n2005 WSJ and Brown test sets. ymarks results re-\nported in Strubell et al. (2018). % \u0001denotes the relative\ngain in F 1over pre-trained models when using Late Fu-\nsion model, %rdenotes the relative drop in F 1when\na model trained on WSJ dataset is tested on the Brown\ndataset.\nmodels, where the results can drop by 13%, both\nsyntax-fused and pre-trained models lead to better\ngeneralization as the relative error drop reduces\nto 6–7%. We see that using Late Fusion does not\nlead to a better out-of-domain generalization, when\ncompared to strong pre-trained transformers with-\nout using parse trees. 
Lastly, we find that among all\npre-trained models, RoBERTa-large and its syntax-\nfused variant Late Fusion achieves the lowest out-\nof-domain generalization error.\n7 Related Work\nOur work is based on finetuning large pre-trained\ntransformer models for NLP tasks, and is closely\nrelated to existing work on understanding the syn-\ntactic information encoded in them, which we have\nearlier covered in Section 1. Here we instead focus\non discussing related work that studies incorporat-\ning syntax into neural NLP models.\nRelation Extraction Neural network models\nhave shown performance improvements when\nshortest dependency path between entities was in-\ncorporated in sentence encoders: Liu et al. (2015)\napply a combination of recursive neural networks\nand CNNs; Miwa and Bansal (2016) apply tree-\nLSTMs for joint entity and relation extraction;\nand Zhang et al. (2018) apply graph convolutional\nnetworks (GCN) over LSTM features.Semantic Role Labeling Recently, several ap-\nproaches have been proposed to incorporate de-\npendency trees within neural SRL models such as\nlearning the embeddings of dependency path be-\ntween predicate and argument words (Roth and\nLapata, 2016); combining GCN-based dependency\ntree representations with LSTM-based word rep-\nresentations (Marcheggiani and Titov, 2017); and\nlinguistically-informed self-attention in one trans-\nformer attention head (Strubell et al., 2018). Kun-\ncoro et al. (2020) directly inject syntax information\ninto BERT pre-training through knowledge distilla-\ntion, an approach which improves the performance\non several NLP tasks including SRL.\nNamed Entity Recognition Moreover, syntax\nhas also been found to be useful for NER as it\nsimplifies modeling interactions between multiple\nentity mentions in a sentence (Finkel and Man-\nning, 2009). To model syntax on OntoNotes-5.0\nNER task, Jie and Lu (2019) feed the concatenated\nchild token, head token, and relation embeddings to\nLSTM and then fuse child and head hidden states.\n8 Conclusion\nIn this work, we explore the utility of incorporating\nsyntax information from dependency trees into pre-\ntrained transformers when applied to information\nextraction tasks of SRL, NER, and RE. To do so,\nwe compute dependency tree embeddings using a\nsyntax-GNN and propose two models to fuse these\nembeddings into transformers. Our experiments\nreveal several important findings: syntax represen-\ntations are most helpful for SRL task when fused\nwithin the pre-trained representations, these per-\nformance gains on SRL task are contingent on the\nquality of the dependency parses. We also notice\nthat these models don’t provide any performance\nimprovements on NER. Lastly, for the RE task,\nsyntax representations are most helpful when incor-\nporated on top of pre-trained representations.\nAcknowledgements\nThe authors would like to thank Mrinmaya Sachan,\nXuezhe Ma, Siva Reddy, and Xavier Carreras for\nproviding us valuable feedback that helped to im-\nprove the paper. We would also like to thank the\nanonymous reviewers for giving us their useful\nsuggestions about this work. This project was sup-\nported by academic gift grants from IBM and Mi-\ncrosoft Research, as well as a Canada CIFAR AI\nChair held by Prof. Hamilton.\n2656References\nChristoph Alt, Aleksandra Gabryszak, and Leonhard\nHennig. 2020. TACRED revisited: A thorough eval-\nuation of the TACRED relation extraction task. 
In\nProceedings of the 58th Annual Meeting of the Asso-\nciation for Computational Linguistics .\nJimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hin-\nton. 2016. Layer normalization. arXiv preprint\narXiv:1607.06450 .\nPeter Battaglia, Jessica Blake Chandler Hamrick, Vic-\ntor Bapst, Alvaro Sanchez, Vinicius Zambaldi, Ma-\nteusz Malinowski, Andrea Tacchetti, David Ra-\nposo, Adam Santoro, Ryan Faulkner, Caglar Gul-\ncehre, Francis Song, Andy Ballard, Justin Gilmer,\nGeorge E. Dahl, Ashish Vaswani, Kelsey Allen,\nCharles Nash, Victoria Jayne Langston, Chris Dyer,\nNicolas Heess, Daan Wierstra, Pushmeet Kohli,\nMatt Botvinick, Oriol Vinyals, Yujia Li, and Raz-\nvan Pascanu. 2018. Relational inductive biases,\ndeep learning, and graph networks. arXiv preprint\narXiv:1806.01261 .\nJari Björne, Juho Heimonen, Filip Ginter, Antti Airola,\nTapio Pahikkala, and Tapio Salakoski. 2009. Ex-\ntracting complex biological events with rich graph-\nbased feature sets. In Proceedings of the BioNLP\n2009 Workshop Companion Volume for Shared Task .\nXavier Carreras and Lluís Màrquez. 2005. Introduc-\ntion to the CoNLL-2005 shared task: Semantic\nrole labeling. In Proceedings of the Ninth Confer-\nence on Computational Natural Language Learning\n(CoNLL-2005) .\nKevin Clark, Urvashi Khandelwal, Omer Levy, and\nChristopher D. Manning. 2019. What does BERT\nlook at? An analysis of BERT’s attention. In Pro-\nceedings of the 2019 ACL Workshop BlackboxNLP:\nAnalyzing and Interpreting Neural Networks for\nNLP.\nJacob Devlin, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019. BERT: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. In Proceedings of the 2019 Conference of\nthe North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers .\nTimothy Dozat and Christopher D Manning. 2017.\nDeep biaffine attention for neural dependency pars-\ning. In International Conference on Learning Rep-\nresentations (ICLR) .\nJenny Rose Finkel and Christopher D. Manning. 2009.\nJoint parsing and named entity recognition. In Pro-\nceedings of Human Language Technologies: The\n2009 Annual Conference of the North American\nChapter of the Association for Computational Lin-\nguistics .\nG. D. Forney. 1973. The viterbi algorithm. Proceed-\nings of the IEEE , 61(3):268–278.W. N. Francis and H. Kucera. 1979. Brown corpus\nmanual. Technical report, Department of Linguis-\ntics, Brown University, Providence, Rhode Island,\nUS.\nKatrin Fundel, Robert Küffner, and Ralf Zimmer. 2006.\nRelEx – Relation extraction using dependency parse\ntrees. Bioinformatics .\nWilliam L. Hamilton, Rex Ying, and Jure Leskovec.\n2017. Representation learning on graphs: Methods\nand applications. IEEE Data Engineering Bulletin ,\n40:52–74.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian\nSun. 2016. Deep residual learning for image recog-\nnition. In Proceedings of the IEEE conference on\ncomputer vision and pattern recognition .\nDan Hendrycks and Kevin Gimpel. 2016. Bridg-\ning nonlinearities and stochastic regularizers with\ngaussian error linear units. arXiv preprint\narXiv:1606.08415 .\nJohn Hewitt and Christopher D. Manning. 2019. A\nstructural probe for finding syntax in word represen-\ntations. In North American Chapter of the Associ-\nation for Computational Linguistics: Human Lan-\nguage Technologies (NAACL) .\nFei Huang and Alexander Yates. 2010. Open-domain\nsemantic role labeling by modeling word spans. 
In\nProceedings of the 48th Annual Meeting of the Asso-\nciation for Computational Linguistics .\nZhanming Jie and Wei Lu. 2019. Dependency-guided\nLSTM-CRF for named entity recognition. In Pro-\nceedings of the 2019 Conference on Empirical Meth-\nods in Natural Language Processing and the 9th In-\nternational Joint Conference on Natural Language\nProcessing (EMNLP-IJCNLP) .\nZhanming Jie, Aldrian Obaja Muis, and Wei Lu. 2017.\nEfficient dependency-guided named entity recogni-\ntion. In Thirty-First AAAI Conference on Artificial\nIntelligence .\nDiederik P Kingma and Jimmy Ba. 2015. Adam: A\nmethod for stochastic optimization. In The 2015\nInternational Conference for Learning Representa-\ntions .\nAdhiguna Kuncoro, Lingpeng Kong, Daniel Fried,\nDani Yogatama, Laura Rimell, Chris Dyer, and Phil\nBlunsom. 2020. Syntactic structure distillation pre-\ntraining for bidirectional encoders. Transactions\nof the Association for Computational Linguistics ,\n8:776–794.\nJohn D. Lafferty, Andrew McCallum, and Fernando\nC. N. Pereira. 2001. Conditional random fields:\nProbabilistic models for segmenting and labeling se-\nquence data. In Proceedings of the Eighteenth Inter-\nnational Conference on Machine Learning .\n2657Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou,\nand Houfeng Wang. 2015. A dependency-based neu-\nral network for relation classification. In Proceed-\nings of the 53rd Annual Meeting of the Association\nfor Computational Linguistics and the 7th Interna-\ntional Joint Conference on Natural Language Pro-\ncessing (Volume 2: Short Papers) .\nYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-\ndar Joshi, Danqi Chen, Omer Levy, Mike Lewis,\nLuke Zettlemoyer, and Veselin Stoyanov. 2019.\nRoBERTa: A robustly optimized BERT pretraining\napproach. arXiv preprint arXiv:1907.11692 .\nChristopher Manning, Mihai Surdeanu, John Bauer,\nJenny Finkel, Steven Bethard, and David McClosky.\n2014. The Stanford CoreNLP natural language pro-\ncessing toolkit. In Proceedings of 52nd Annual\nMeeting of the Association for Computational Lin-\nguistics: System Demonstrations .\nDiego Marcheggiani and Ivan Titov. 2017. Encoding\nsentences with graph convolutional networks for se-\nmantic role labeling. In Proceedings of the 2017\nConference on Empirical Methods in Natural Lan-\nguage Processing .\nMarie-Catherine de Marneffe and Christopher D. Man-\nning. 2008. The Stanford typed dependencies rep-\nresentation. In COLING 2008: Proceedings of the\nworkshop on Cross-Framework and Cross-Domain\nParser Evaluation .\nMakoto Miwa and Mohit Bansal. 2016. End-to-end\nrelation extraction using LSTMs on sequences and\ntree structures. In Proceedings of the 54th Annual\nMeeting of the Association for Computational Lin-\nguistics .\nJoakim Nivre, Marie-Catherine de Marneffe, Filip Gin-\nter, Jan Haji ˇc, Christopher D. Manning, Sampo\nPyysalo, Sebastian Schuster, Francis Tyers, and\nDaniel Zeman. 2020. Universal Dependencies v2:\nAn evergrowing multilingual treebank collection. In\nProceedings of The 12th Language Resources and\nEvaluation Conference .\nJeffrey Pennington, Richard Socher, and Christopher\nManning. 2014. GloVe: Global vectors for word\nrepresentation. In Proceedings of the 2014 Confer-\nence on Empirical Methods in Natural Language\nProcessing (EMNLP) .\nMatthew Peters, Mark Neumann, Mohit Iyyer, Matt\nGardner, Christopher Clark, Kenton Lee, and Luke\nZettlemoyer. 2018. Deep contextualized word repre-\nsentations. 
In Proceedings of the 2018 Conference\nof the North American Chapter of the Association\nfor Computational Linguistics: Human Language\nTechnologies, Volume 1 (Long Papers) .\nSameer Pradhan, Alessandro Moschitti, Nianwen Xue,\nOlga Uryupina, and Yuchen Zhang. 2012. CoNLL-\n2012 shared task: Modeling multilingual unre-\nstricted coreference in OntoNotes. In Joint Confer-\nence on EMNLP and CoNLL - Shared Task .Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James\nMartin, and Daniel Jurafsky. 2005. Semantic role\nlabeling using different syntactic views. In Proceed-\nings of the 43rd Annual Meeting of the Association\nfor Computational Linguistics (ACL’05) .\nPeng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton,\nand Christopher D. Manning. 2020. Stanza: A\nPython natural language processing toolkit for many\nhuman languages. In Proceedings of the 58th An-\nnual Meeting of the Association for Computational\nLinguistics: System Demonstrations .\nAlec Radford, Karthik Narasimhan, Tim Salimans, and\nIlya Sutskever. 2018. Improving language under-\nstanding by generative pre-training. Technical Re-\nport.\nMichael Roth and Mirella Lapata. 2016. Neural seman-\ntic role labeling with dependency path embeddings.\nInProceedings of the 54th Annual Meeting of the\nAssociation for Computational Linguistics (Volume\n1: Long Papers) .\nNitish Srivastava, Geoffrey Hinton, Alex Krizhevsky,\nIlya Sutskever, and Ruslan Salakhutdinov. 2014.\nDropout: A simple way to prevent neural networks\nfrom overfitting. Journal of Machine Learning Re-\nsearch , 15(1):1929–1958.\nRupesh K Srivastava, Klaus Greff, and Jürgen Schmid-\nhuber. 2015. Training very deep networks. In Ad-\nvances in neural information processing systems .\nEmma Strubell, Patrick Verga, Daniel Andor,\nDavid Weiss, and Andrew McCallum. 2018.\nLinguistically-informed self-attention for semantic\nrole labeling. In Proceedings of the 2018 Confer-\nence on Empirical Methods in Natural Language\nProcessing .\nIan Tenney, Dipanjan Das, and Ellie Pavlick. 2019.\nBERT rediscovers the classical NLP pipeline. In\nProceedings of the 57th Annual Meeting of the As-\nsociation for Computational Linguistics .\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Advances in Neural Information Pro-\ncessing Systems .\nPetar Veli ˇckovi ´c, Guillem Cucurull, Arantxa Casanova,\nAdriana Romero, Pietro Lio, and Yoshua Bengio.\n2018. Graph attention networks. In International\nConference on Learning Representations (ICLR) .\nLiangguo Wang, Jing Jiang, Hai Leong Chieu,\nChen Hui Ong, Dandan Song, and Lejian Liao. 2017.\nCan syntax help? improving an LSTM-based sen-\ntence compression model for new domains. In Pro-\nceedings of the 55th Annual Meeting of the Associa-\ntion for Computational Linguistics (Volume 1: Long\nPapers) , pages 1385–1393.\n2658Yuhao Zhang, Peng Qi, and Christopher D. Manning.\n2018. Graph convolution over pruned dependency\ntrees improves relation extraction. In Proceedings of\nthe 2018 Conference on Empirical Methods in Natu-\nral Language Processing .\nYuhao Zhang, Victor Zhong, Danqi Chen, Gabor An-\ngeli, and Christopher D. Manning. 2017. Position-\naware attention and supervised data improve slot fill-\ning. 
In Proceedings of the 2017 Conference on Em-\npirical Methods in Natural Language Processing .\n2659A Experimental Setup\nA.1 Task-Specific Modeling Details\nSemantic Role Labeling (SRL): We model\nSRL as a sequence tagging task using a linear-chain\nCRF (Lafferty et al., 2001) as the last layer. During\ninference, we perform decoding using the Viterbi\nalgorithm (Forney, 1973). To highlight predicate\nposition in the sentence, we use indicator embed-\ndings as input to the model.\nNamed Entity Recognition (NER): Similar to\nSRL, we model NER as a sequence tagging task,\nand use a linear-chain CRF layer over the model’s\nhidden states. Sequence decoding is performed\nusing the Viterbi algorithm.\nRelation Extraction (RE): As is common in\nprior work (Zhang et al., 2018; Miwa and Bansal,\n2016), the dependency tree is pruned such that the\nsubtree rooted at the lowest common ancestor of en-\ntity mentions is given as input to the syntax-GNN.\nFollowing Zhang et al. (2018), we extract sentence\nrepresentations by applying a max-pooling opera-\ntion over the hidden states. We also concatenate the\nentity representations with sentence representation\nbefore the final classification layer.\nA.2 Additional Training Details\nDuring the finetuning step, the new parameters in\neach model are randomly initialized while the ex-\nisting parameters are initialized from pre-trained\nBERT. For regularisation, we apply dropout (Sri-\nvastava et al., 2014) with p= 0:1to attention co-\nefficients and hidden states. For all datasets, we\nuse the canonical training, development, and test\nsplits. We use the Adam optimizer (Kingma and\nBa, 2015) for finetuning.\nWe observed that the initial learning rate of 2e-5\nwith a linear decay worked well for all the tasks.\nFor the model training to converge, we found that\n10epochs were sufficient for CoNLL-2012 SRL\nand RE and 20epochs were sufficient for CoNLL-\n2005 SRL and NER. We evaluate the test set per-\nformance using the best-performing checkpoint on\nthe development set.\nFor evaluation, following convention we report\nthe micro-averaged precision, recall, and F 1scores\nin every task. For variance control in all the experi-\nments, we report the mean of the results obtained\nfrom five independent runs with different seeds.B Additional Baselines\nBesides BERT models, we also compare our re-\nsults to the following previous work, which had\nobtained good performance gains on incorporating\ndependency trees with neural models:\n•For SRL, we include results from the SA (self-\nattention) and LISA (linguistically-informed self-\nattention) model by Strubell et al. (2018). In\nLISA, the attention computation in one attention-\nhead of the transformer is biased to enforce de-\npendent words only attend to their head words.\nThe models were trained using both GloVe (Pen-\nnington et al., 2014) and ELMo embeddings.\n•For NER, we report the results from Jie and Lu\n(2019), where they concatenate the child token,\nhead token, and relation embeddings as input to\nan LSTM and then fuse child and head hidden\nstates.\n• For RE, we report the results of the GCN model\nfrom Zhang et al. (2018) where they apply graph\nconvolutional networks on pruned dependency\ntrees over LSTM states.\nC Manual Error Analysis\nIn this section, we present several examples from\nour manual error analysis of the predictions from\nthe Late Fusion model when it is trained on CoNLL-\n2005 SRL WSJ dataset using gold and Stanza\nparses. 
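The analysis hinges on locating parse edges where the Stanza head differs from the gold head; aggregated per sentence, the same comparison gives the UAS values used to quantify parse quality in Figure 3. A small sketch of this comparison is given below; the function names are illustrative, not taken from the released code.

def sentence_uas(pred_heads, gold_heads):
    """Unlabeled attachment score for one sentence: the fraction of tokens
    whose predicted head index matches the gold head index."""
    assert len(pred_heads) == len(gold_heads)
    correct = sum(p == g for p, g in zip(pred_heads, gold_heads))
    return correct / len(gold_heads)

def wrong_edges(pred_heads, gold_heads):
    # 0-indexed token positions whose predicted head disagrees with the
    # gold head; these are the candidate erroneous edges inspected here.
    return [i for i, (p, g) in enumerate(zip(pred_heads, gold_heads)) if p != g]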
Specifically, we show how the incorrect\nedges present in the parse tree can induce wrong\nSRL tag predictions. In Figure 4, we observe two\nexamples where the model when trained with gold\nparses outputs perfect predictions but the when\ntrained with Stanza parses outputs two incorrect\nSRL tags due to one erroneous edge present in the\ndependency parse. In Figure 5, we show an exam-\nple of a longer sentence where due to the presence\nof four erroneous edges in the Stanza parse, the\nmodel makes a series of incorrect predictions of\nthe SRL tags.\n2660Olivetti reportedly began shipping these tools in 1984 .\nB-A0 B-AM-ADV B-V B-A1 I-A1 I-A1 B-AM-TMP I-AM-TMP OThe Janus Group had a similar recording for investors .\nB-A0 I-A0 I-A0 B-V B-A1 I-A1 I-A1 B-AM-PNC I-AM-PNC O\n(a)Predicted SRL tags using Gold parses\nOlivetti reportedly began shipping these tools in 1984 .\nB-A0 B-AM-ADV B-V B-A1 I-A1 I-A1 I-A1 I-A1 OThe Janus Group had a similar recording for investors .\nB-A0 I-A0 I-A0 B-V B-A1 I-A1 I-A1 I-A1 I-A1 O\n(b)Predicted SRL tags using Stanza parses\nFigure 4: Examples of sentences with their predicted SRL tags when the Late Fusion model is trained using gold\nparses (4a) and Stanza parses (4b). While the predicted SRL tags using the gold parses are accurate, the erroneous\nedges in the Stanza parses (highlighted in bold) leads to incorrect SRL tags predictions (highlighted in orange\ncolor).\n2661\nNasdaq volume Friday totaled 167:7million shares , which was only the fifth busiest day so far this year .\nB-A1 I-A1 B-AM-TMP B-V B-A2 I-A2 I-A2 OB-AM-ADV I-AM-ADV I-AM-ADV I-AM-ADV I-AM-ADV I-AM-ADV I-AM-ADV I-AM-ADV I-AM-ADV I-AM-ADV I-AM-ADV O\nNasdaq volume Friday totaled 167:7 million shares , which was only the fifth busiest day so far this year .\nB-A1 I-A1 B-AM-TMP B-V B-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 I-A2 O\nFigure 5: Example of a longer sentence with its predicted SRL tags when the Late Fusion model is trained using gold parses (above figure) and Stanza parses (lower figure).\nThe erroneous edges in the Stanza parses are highlighted in bold. While the predicted SRL tags using the gold parses are accurate, the erroneous edges in the Stanza parses leads\nto a series of incorrect SRL tag predictions (highlighted in orange color).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Hybv2CxOWS",
"year": null,
"venue": "AAAI Workshops 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Hybv2CxOWS",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Goal Recognition with Noisy Observations",
"authors": [
"Yolanda E-Martín",
"David E. Smith"
],
"abstract": "Goal recognition is an important technological capability for applications that involve cooperation between agents. Many goal recognition techniques allow the sequence of observations to be incomplete, but few consider the possibility of noisy observations. In this paper, we describe a planning-based goal recognition approach that deals with both missing observations and probabilistic noise in the observations. To do this, we first use a Bayesian network to infer action probabilities based on the observation model and the current observation sequence. We then use this information to estimate the expected cost of reaching the different possible goals. Comparing these costs to the a priori costs for the goals allows us to infer a probability distribution over the possible goals.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HJZiiBfO-H",
"year": null,
"venue": "IJCAI 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=HJZiiBfO-H",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Fast Goal Recognition Technique Based on Interaction Estimates",
"authors": [
"Yolanda E-Martín",
"María D. R.-Moreno",
"David E. Smith"
],
"abstract": "Goal Recognition is the task of inferring an actor's goals given some or all of the actor's observed actions. There is considerable interest in Goal Recognition for use in intelligent personal assistants, smart environments, intelligent tutoring systems, and monitoring user's needs. In much of this work, the actor's observed actions are compared against a generated library of plans. Recent work by Ramirez and Geffner makes use of AI planning to determine how closely a sequence of observed actions matches plans for each possible goal. For each goal, this is done by comparing the cost of a plan for that goal with the cost of a plan for that goal that includes the observed actions. This approach yields useful rankings, but is impractical for real-time goal recognition in large domains because of the computational expense of constructing plans for each possible goal. In this paper, we introduce an approach that propagates cost and interaction information in a plan graph, and uses this information to estimate goal probabilities. We show that this approach is much faster, but still yields high quality results.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "BJ-6Q0xO-H",
"year": null,
"venue": "AAAI 2011",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=BJ-6Q0xO-H",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Probabilistic Plan Graph Heuristic for Probabilistic Planning",
"authors": [
"Yolanda E-Martín",
"María Dolores Rodríguez-Moreno",
"David E. Smith"
],
"abstract": "This work focuses on developing domain-independent heuristics for probabilistic planning problems characterized by full observability and non-deterministic effects of actions that are expressed by probability distributions. The approach is to first search for a high probability deterministic plan using a classical planner. A novel probabilistic plan graph heuristic is used to guide the search towards high probability plans. The resulting plans can be used in a system that handles unexpected outcomes by runtime replanning. The plans can also be incrementally augmented with contingency branches for the most critical action outcomes.\r\n\r\nThis abstract will describe the steps that we have taken in completing the above work and the obtained results.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "yJeZ-3uTuJZ",
"year": null,
"venue": "IEEE Access2020",
"pdf_link": "https://ieeexplore.ieee.org/iel7/6287639/8948470/08959228.pdf",
"forum_link": "https://openreview.net/forum?id=yJeZ-3uTuJZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Escherichia Coli DNA N-4-Methycytosine Site Prediction Accuracy Improved by Light Gradient Boosting Machine Feature Selection Technology.",
"authors": [
"Zhibin Lv",
"Donghua Wang",
"Hui Ding",
"Bineng Zhong",
"Lei Xu"
],
"abstract": "Recently, several machine-learning-based DNA N-4-methycytosine (4mC) predictors have been developed to provide deeper insight into the biological functions and mechanisms of 4mC. However, the performance of the existing classifiers for identification of Escherichia coli DNA 4mC sites is inadequate. Here, we present a new support vector machine 4mC predictor, named iEC4mC-SVM, for Escherichia coli (E.coli) DNA 4mC site identification, optimized using light gradient boosting machine feature selection technology. The iEC4mC-SVM predictor had a 10-fold cross-validation accuracy of 85.4% and Jackknife cross-validation accuracy of 84.9%. The 83.2% independent testing accuracy of iEC4mC-SVM was 1.0-6.5% higher than those of state-of-the-art E. coli DNA 4mC site predictors. A t-distributed stochastic neighbor embedding analysis confirmed that the prediction performance enhancement of iEC4mC-SVM was due to the light gradient boosting machine feature selection.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HFLlsRmpySv",
"year": null,
"venue": "SIGIR 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=HFLlsRmpySv",
"arxiv_id": null,
"doi": null
}
|
{
"title": "CROSS: Cross-platform Recommendation for Social E-Commerce",
"authors": [
"Tzu-Heng Lin",
"Chen Gao",
"Yong Li"
],
"abstract": "Social e-commerce, as a new concept of e-commerce, uses social media as a new prevalent platform for online shopping. Users are now able to view, add to cart, and buy products within a single social media app. In this paper, we address the problem of cross-platform recommendation for social e-commerce, i.e., recommending products to users when they are shopping through social media. To the best of our knowledge, this is a new and important problem for all e-commerce companies (e.g. Amazon, Alibaba), but has never been studied before. Existing cross-platform and social related recommendation methods cannot be applied directly for this problem since they do not co-consider the social information and the cross-platform characteristics together. To study this problem, we first investigate the heterogeneous shopping behaviors between traditional e-commerce app and social media. Based on these observations from data, we propose CROSS (Cross-platform Recommendation for Online Shopping in Social Media), a recommendation model utilizing not only user-item interaction data on both platforms, but also social relation data on social media. Extensive experiments on real-world online shopping dataset demonstrate that our proposed CROSS significantly outperforms existing state-of-the-art methods.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "RsleFjkTPLs",
"year": null,
"venue": "Data Min. Knowl. Discov. 2016",
"pdf_link": "https://link.springer.com/content/pdf/10.1007/s10618-015-0411-4.pdf",
"forum_link": "https://openreview.net/forum?id=RsleFjkTPLs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Accelerating the discovery of unsupervised-shapelets",
"authors": [
"Jesin Zakaria",
"Abdullah Mueen",
"Eamonn J. Keogh",
"Neal E. Young"
],
"abstract": "Over the past decade, time series clustering has become an increasingly important research topic in data mining community. Most existing methods for time series clustering rely on distances calculated from the entire raw data using the Euclidean distance or Dynamic Time Warping distance as the distance measure. However, the presence of significant noise, dropouts, or extraneous data can greatly limit the accuracy of clustering in this domain. Moreover, for most real world problems, we cannot expect objects from the same class to be equal in length. As a consequence, most work on time series clustering only considers the clustering of individual time series “behaviors,” e.g., individual heart beats or individual gait cycles, and contrives the time series in some way to make them all equal in length. However, automatically formatting the data in such a way is often a harder problem than the clustering itself. In this work, we show that by using only some local patterns and deliberately ignoring the rest of the data, we can mitigate the above problems and cluster time series of different lengths, e.g., cluster one heartbeat with multiple heartbeats. To achieve this, we exploit and extend a recently introduced concept in time series data mining called shapelets. Unlike existing work, our work demonstrates the unintuitive fact that shapelets can be learned from unlabeled time series. We show, with extensive empirical evaluation in diverse domains, that our method is more accurate than existing methods. Moreover, in addition to accurate clustering results, we show that our work also has the potential to give insight into the domains to which it is applied. While a brute-force algorithm to discover shapelets in an unsupervised way could be untenably slow, we introduce two novel optimization procedures to significantly speed up the unsupervised-shapelet discovery process and allow it to be cast as an anytime algorithm.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "lumwZHXxOS",
"year": null,
"venue": "EA@ICSE 2009",
"pdf_link": "https://ieeexplore.ieee.org/iel5/5062318/5071562/05071580.pdf",
"forum_link": "https://openreview.net/forum?id=lumwZHXxOS",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using tagging to identify and organize concerns during pre-requirements analysis",
"authors": [
"Harold Ossher",
"David Amid",
"Ateret Anaby-Tavor",
"Rachel K. E. Bellamy",
"Matthew Callery",
"Michael Desmond",
"Jackie De Vries",
"Amit Fisher",
"Sophia Krasikov",
"Ian Simmonds",
"Calvin Swart"
],
"abstract": "Before requirements analysis takes place in a business context, business analysis is usually performed. Important concerns emerge during this analysis that need to be captured and communicated to requirements engineers. In this paper, we take the position that tagging is a promising approach for identifying and organizing these concerns. The fact that tags can be attached freely to entities, often with multiple tags attached to the same entity and the same tag attached to multiple entities, leads to multi-dimensional structures that are suitable for representing crosscutting concerns and exploring their relationships. The resulting tag structures can be hardened into classifications that capture and communicate important concerns.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "K9vhQTzvm4R",
"year": null,
"venue": "KDD 2020",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=K9vhQTzvm4R",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Debiasing Grid-based Product Search in E-commerce",
"authors": [
"Ruocheng Guo",
"Xiaoting Zhao",
"Adam Henderson",
"Liangjie Hong",
"Huan Liu"
],
"abstract": "The widespread usage of e-commerce websites in daily life and the resulting wealth of implicit feedback data form the foundation for systems that train and test e-commerce search ranking algorithms. While convenient to collect, implicit feedback data inherently suffers from various types of bias since user feedback is limited to products they are exposed to by existing search ranking algorithms and impacted by how the products are displayed. In the literature, a vast majority of existing methods have been proposed towards unbiased learning to rank for list-based web search scenarios. However, such methods cannot be directly adopted by e-commerce websites mainly for two reasons. First, in e-commerce websites, search engine results pages (SERPs) are displayed in 2-dimensional grids. The existing methods have not considered the difference in user behavior between list-based web search and grid-based product search. Second, there can be multiple types of implicit feedback (e.g., clicks and purchases) on e-commerce websites. We aim to utilize all types of implicit feedback as the supervision signals. In this work, we extend unbiased learning to rank to the world of e-commerce search via considering a grid-based product search scenario. We propose a novel framework which (1) forms the theoretical foundations to allow multiple types of implicit feedback in unbiased learning to rank and (2) incorporates the row skipping and slower decay click models to capture unique user behavior patterns in grid-based product search for inverse propensity scoring. Through extensive experiments on real-world e-commerce search log datasets across browsing devices and product taxonomies, we show that the proposed framework outperforms the state of the art unbiased learning to rank algorithms. These results also reveal important insights on how user behavior patterns vary in e-commerce SERPs across browsing devices and product taxonomies.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
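Editor's note: the abstract above describes weighting multiple types of implicit feedback by inverse propensity scores estimated from grid-aware click models. The short sketch below is only an illustration of the generic IPS idea; the propensity function here is a placeholder position-based decay, not the paper's row-skipping or slower-decay click models, and every identifier (Impression, examination_propensity, ips_weighted_loss) is invented for this example.

```python
# Minimal sketch of inverse propensity scoring (IPS) for implicit feedback on a
# 2-D product grid. The propensity model is a generic placeholder, NOT the
# paper's click models; all names are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Impression:
    row: int                 # grid row of the product on the SERP (0-indexed)
    col: int                 # grid column
    clicked: bool
    purchased: bool
    relevance_score: float   # current ranking model's score for this item

def examination_propensity(row: int, col: int,
                           row_decay: float = 0.85, col_decay: float = 0.95) -> float:
    """Placeholder examination probability that decays over rows and columns."""
    return (row_decay ** row) * (col_decay ** col)

def ips_weighted_loss(impressions, click_weight: float = 1.0,
                      purchase_weight: float = 2.0) -> float:
    """Each observed feedback signal is up-weighted by 1 / propensity so that,
    in expectation, the objective matches one computed on unbiased exposure."""
    loss = 0.0
    for imp in impressions:
        p = examination_propensity(imp.row, imp.col)
        signal = click_weight * imp.clicked + purchase_weight * imp.purchased
        if signal > 0:
            # penalize low scores on observed positives, weighted by 1 / propensity
            loss += (signal / p) * max(0.0, 1.0 - imp.relevance_score)
    return loss
```

The key design point carried over from the abstract is that clicks and purchases enter the same objective with separate weights, while the bias correction comes entirely from the propensity term.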
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "7Pvwurfqy4z",
"year": null,
"venue": "KDD 2020",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=7Pvwurfqy4z",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Debiasing Grid-based Product Search in E-commerce",
"authors": [
"Ruocheng Guo",
"Xiaoting Zhao",
"Adam Henderson",
"Liangjie Hong",
"Huan Liu"
],
"abstract": "The widespread usage of e-commerce websites in daily life and the resulting wealth of implicit feedback data form the foundation for systems that train and test e-commerce search ranking algorithms. While convenient to collect, implicit feedback data inherently suffers from various types of bias since user feedback is limited to products they are exposed to by existing search ranking algorithms and impacted by how the products are displayed. In the literature, a vast majority of existing methods have been proposed towards unbiased learning to rank for list-based web search scenarios. However, such methods cannot be directly adopted by e-commerce websites mainly for two reasons. First, in e-commerce websites, search engine results pages (SERPs) are displayed in 2-dimensional grids. The existing methods have not considered the difference in user behavior between list-based web search and grid-based product search. Second, there can be multiple types of implicit feedback (e.g., clicks and purchases) on e-commerce websites. We aim to utilize all types of implicit feedback as the supervision signals. In this work, we extend unbiased learning to rank to the world of e-commerce search via considering a grid-based product search scenario. We propose a novel framework which (1) forms the theoretical foundations to allow multiple types of implicit feedback in unbiased learning to rank and (2) incorporates the row skipping and slower decay click models to capture unique user behavior patterns in grid-based product search for inverse propensity scoring. Through extensive experiments on real-world e-commerce search log datasets across browsing devices and product taxonomies, we show that the proposed framework outperforms the state of the art unbiased learning to rank algorithms. These results also reveal important insights on how user behavior patterns vary in e-commerce SERPs across browsing devices and product taxonomies.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "eKs-MyE54sE",
"year": null,
"venue": "ICEC 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=eKs-MyE54sE",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Technical construction methods for e-marketplace",
"authors": [
"Jingzhi Guo",
"Zhuo Hu",
"Zhiguo Gong"
],
"abstract": "An analysis on the existing e-marketplaces shows there are seven types of technical construction methods for e-marketplaces. They are e-catalogue, e-shop, e-portal, e-hub, e-switch, e-integrator and e-merger. The quality of these e-marketplaces can be measured based on a quality matrix of accuracy, reach and richness.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "tJf4N2nKL5-",
"year": null,
"venue": "CoRR 2023",
"pdf_link": "http://arxiv.org/pdf/2301.09174v1",
"forum_link": "https://openreview.net/forum?id=tJf4N2nKL5-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "MATT: Multimodal Attention Level Estimation for e-learning Platforms",
"authors": [
"Roberto Daza",
"Luis F. Gomez",
"Aythami Morales",
"Julian Fiérrez",
"Ruben Tolosana",
"Ruth Cobos",
"Javier Ortega-Garcia"
],
"abstract": "This work presents a new multimodal system for remote attention level estimation based on multimodal face analysis. Our multimodal approach uses different parameters and signals obtained from the behavior and physiological processes that have been related to modeling cognitive load such as faces gestures (e.g., blink rate, facial actions units) and user actions (e.g., head pose, distance to the camera). The multimodal system uses the following modules based on Convolutional Neural Networks (CNNs): Eye blink detection, head pose estimation, facial landmark detection, and facial expression features. First, we individually evaluate the proposed modules in the task of estimating the student's attention level captured during online e-learning sessions. For that we trained binary classifiers (high or low attention) based on Support Vector Machines (SVM) for each module. Secondly, we find out to what extent multimodal score level fusion improves the attention level estimation. The mEBAL database is used in the experimental framework, a public multi-modal database for attention level estimation obtained in an e-learning environment that contains data from 38 users while conducting several e-learning tasks of variable difficulty (creating changes in student cognitive loads).",
"keywords": [],
"raw_extracted_content": "MATT: Multimodal Attention Level Estimation for e-learning Platforms\nRoberto Daza, Luis F. Gomez, Aythami Morales, Julian Fierrez, Ruben Tolosana, Ruth Cobos,\nJavier Ortega-Garcia\nSchool of Engineering, Autonomous University of Madrid\n{roberto.daza, luisf.gomez, aythami.morales, julian.fierrez, ruben.tolosana, ruth.cobos, javier.ortega}@uam.es\nAbstract\nThis work presents a new multimodal system for remote at-\ntention level estimation based on multimodal face analysis.\nOur multimodal approach uses different parameters and sig-\nnals obtained from the behavior and physiological processes\nthat have been related to modeling cognitive load such as\nfaces gestures (e.g., blink rate, facial actions units) and user\nactions (e.g., head pose, distance to the camera). The mul-\ntimodal system uses the following modules based on Con-\nvolutional Neural Networks (CNNs): Eye blink detection,\nhead pose estimation, facial landmark detection, and facial\nexpression features. First, we individually evaluate the pro-\nposed modules in the task of estimating the student’s atten-\ntion level captured during online e-learning sessions. For that\nwe trained binary classifiers (high or low attention) based\non Support Vector Machines (SVM) for each module. Sec-\nondly, we find out to what extent multimodal score level fu-\nsion improves the attention level estimation. The mEBAL\ndatabase is used in the experimental framework, a public\nmulti-modal database for attention level estimation obtained\nin an e-learning environment that contains data from 38 users\nwhile conducting several e-learning tasks of variable diffi-\nculty (creating changes in student cognitive loads).\n1 INTRODUCTION\nOver the last years, new technologies have brought a sub-\nstantial change enabling many processes and applications to\nmove from the physical to the digital world. Education has\nbeen one of the most affected areas, as remote e-learning of-\nfers many advantages over traditional learning (e.g., during\nthe COVID-19 outbreak). Currently, e-learning and virtual\neducation platforms are a fundamental pillar in the improve-\nment strategy of the most important academic institutions\nsuch as Stanford, Oxford, Harvard, etc. Future looks bright\nfor the e-learning industry, with remarkable growth numbers\nin the recent years and estimating even better results for the\nnext 10 years (Chen 2018).\nE-learning presents many advantages (Bowers and Kumar\n2015): Flexible schedules, allows a higher number of stu-\ndents, etc; however, it also presents challenges in compari-\nson with the traditional face-to-face education system. With-\nout a doubt, the recent e-learning platforms (Hernandez-\nCopyright © 2023, Association for the Advancement of Artificial\nIntelligence (www.aaai.org). All rights reserved.Ortega et al. 2020a) are a key tool to overcome these dif-\nficulties. These platforms allow the monitoring of e-learning\nsessions, capturing student’s information for a better under-\nstanding of the student’s behaviors and conditions. These\nplatforms may incorporate new technologies to analyze stu-\ndents’ information such as the attention level (Daza et al.\n2022), the heart rate (Hernandez-Ortega et al. 2020b), the\nemotional state (Shen, Wang, and Shen 2009), and the gaze\nand head pose (Asteriadis et al. 2009).\nE-learning platforms represent indeed an opportunity to\nimprove education. 
In the traditional face-to-face educa-\ntion system, how can a teacher know when students present\nhigher or lower attention levels? In remote education this\nis even more difficult to infer without direct contact between\nthe teacher and the student. On the other hand, the automatic\nestimation of the student’s attention level on e-learning plat-\nforms is feasible (Peng et al. 2020a), and represents a high\nvalue tool to improve face-to-face and online education.\nThis information obtained from e-learning platforms can\nbe used to create personalized environments and a more se-\ncure evaluation, for example, to: i) adapt dynamically the\nenvironment and content (Nappi, Ricciardi, and Tistarelli\n2018; Fierrez et al. 2018) based on the attention level of\nthe students, and ii) improve the educational materials and\nresources with a further analysis of the e-learning sessions,\ne.g. detecting the most appropriate types of contents for a\nspecific student and adapting the general information to her\n(Fierrez-Aguilar et al. 2005a,b).\nAttention is defined as a conscious cognitive effort on a\ntask (Tang et al. 2022) and it’s essential in learning tasks for\na correct comprehension, since a sustained attention gener-\nally leads to better learning results. Nevertheless, the atten-\ntion level measurement is not an easy task to perform, and it\nhas been studied in depth in the state of art.\nOn the one hand, brain waves from an electroencephalo-\ngram (EEG) have demonstrated to be one of the most effec-\ntive signals for attention level estimation (Chen and Wang\n2018; Li et al. 2011). On the other hand, attention estimation\nthrough face videos obtained from a simple webcam (much\nless intrusive than measuring EEG signals) is also feasible\nbut less effective in general.\nNow focusing in human behaviors related to the attention\nlevel that can be measured from face videos:\n• The eye blink rate has been demonstrated to be relatedarXiv:2301.09174v1 [cs.CV] 22 Jan 2023\nwith the cognitive activity, and therefore the attention\n(Bagley and Manelis 1979; K. Holland and Tarlow 1972).\nLower eye blink rates can be associated to high attention\nlevels, while higher eye blink rates are related to low at-\ntention levels. ALEBk (Daza et al. 2022) presented a fea-\nsibility study of Attention Level Estimation via Blink de-\ntection with results on the mEBAL database demonstrat-\ning that there is certain correlation between the attention\nlevels and the eye blink rate.\n• The student’s head pose can be also used to detect the\nvisual attention which is a key learning factor (Luo et al.\n2022; Zaletelj and Košir 2017; Raca, Kidzinski, and Dil-\nlenbourg 2015). Also, the facial expression recognition\nhas demonstrated to be a pointer in the state of human\nemotions and it’s highly related with the attention esti-\nmation in a learning environment (Monkaresi et al. 2016;\nMcDaniel et al. 2007; Grafsgaard et al. 2013).\n• The student interest is also highly related with the atten-\ntion levels, and some physical actions happen to be re-\nlated with the interest. E.g., leaning closer to the screen\nis frequent when the displayed information is attractive\nor complex (Fujisawa and Aihara 2009).\nMultimodal systems have the advantage of offering a\nglobal vision, taking into consideration several variables at\ndifferent levels that affect the process of interest (Fierrez\net al. 2018). 
As representative examples of multimodal attention level estimation we can mention: 1) (Zaletelj and Košir 2017) estimated attention levels using facial and body information obtained from a Kinect camera, and 2) (Peng et al. 2020b) used different features obtained from the face and also head movements (e.g., leaning closer to the screen).
In this context, the contributions of the present paper are:
• We perform an attention estimation study in a realistic e-learning environment including different facial features like pose, facial expressions, facial landmarks, eye blink information, etc.
• The study comprises different modules based on Convolutional Neural Networks trained to obtain facial features with potential correlation with attention. We evaluate each of these modules separately in the attention estimation task.
• We propose a multimodal approach to improve the estimation results in comparison with a monomodal system. The approach is based on a score-level fusion of the different face analysis modules. The results of our multimodal system are compared with another existing attention estimation system called ALEBk; both evaluated on the same mEBAL database. The results show that the proposed multimodal approach is able to reduce the error rates relatively by ca. 40% in comparison with existing methods.
The rest of the paper is organized as follows: Section 2 presents the materials and methods, including the databases and the proposed technologies to estimate attention levels. Section 3 shows the experiments and results. Finally, remarks and future work are drawn in Section 4.
2 MATERIALS AND METHODS
2.1 Database: mEBAL
The mEBAL database (Daza et al. 2020) is selected for our study for several reasons. First, mEBAL is a public database which was captured using an e-learning platform called edBB in a realistic e-learning environment (Hernandez-Ortega et al. 2020a). Second, mEBAL consists of 38 sessions of e-learning (one per student) where the users perform different tasks of variable difficulty to create changes in the student's cognitive load. These activities include finding the differences, crosswords, logical problems, etc. The sessions have a duration of 15 to 30 minutes. Third, mEBAL is a multimodal database with signals from multiple sensors including face video and electroencephalogram (EEG) data. mEBAL includes the recordings of every session captured from 3 cameras (1 RGB camera and 2 NIR cameras). Besides, mEBAL used an EEG headset by NeuroSky to obtain the cognitive signals. Previous studies have also used this headset to capture EEG and attention signals (Rebolledo-Mendez et al. 2009; Li et al. 2009; Lin and Kao 2018). The EEG information provides effective attention level estimation (Chen and Wang 2018; Li et al. 2011) because this information is sensitive to the mental effort and cognitive work, which varies significantly in different activities like learning, lying, perception, stress, etc. (Hall and Hall 2020; Lin and Kao 2018; Chen, Wang, and Yu 2017; Li et al. 2009). The resulting EEG information in mEBAL consists of 5 signals in different frequency ranges. More precisely, the power spectrum density of 5 electroencephalographic channels: δ (<4 Hz), θ (4-8 Hz), α (8-13 Hz), β (13-30 Hz), and γ (>30 Hz) signals. 
From these channels, through the official SDK of NeuroSky, mEBAL includes information of the attention level, meditation level, and a temporal sequence with the eye blink strength. The attention and meditation levels have a value from 0 to 100. The headset has a sampling rate of 1 Hz. This work uses the attention level obtained from the EEG headset as ground truth to train and evaluate our image-based attention level estimation system.
2.2 Face Analysis Modules
Fig. 1 shows the proposed multimodal approach for the estimation of the attention level.
Figure 1: Block diagram of the multimodal attention level estimation approach. The dashed line represents ground truth used for training the SVMs.
The framework includes the following modules:
Face Detection Module: First, we use a face detector to obtain 2D face images. These images are used as input by the other technologies. The face position is estimated using the RetinaFace Detector (Deng et al. 2020), a robust single-stage face detector trained using the Wider Face dataset (Yang et al. 2016).
Landmark Detection Module: We use a landmark detector for two tasks. In the first one, we use landmarks for the localization of a region of interest for subsequent Eye Blink Detection. In the second one, we obtain the Eye Aspect Ratio (EAR) (Soukupovà and Cech 2016) for each eye (which is related to the opening of the eye), and the width and length of the nose and the head, as features to detect the head distance from the screen. We used the SAN landmark detector (Dong et al. 2018), which is a 68-landmark detector based on VGG-16 plus 2 convolution layers trained on the 300-W dataset (Sagonas et al. 2016). Therefore, for each frame, 6 features fL are used in our attention level estimation.
Head Pose Estimation Module: The head pose is estimated from 2D face images. We obtain the vertical (pitch) and horizontal (yaw) angles to define a 3D head pose from a 2D image. The head pose is estimated using a Convolutional Neural Network (ConvNet) based on (Berral-Soler et al. 2021), trying to balance the speed and precision that maximizes the utility. This head pose estimator is trained with the Pointing 04 (Gourier, Hall, and Crowley 2004) and Annotated Facial Landmarks in the Wild (Koestinger et al. 2011) databases. So, for each frame we have 2 angles as features fP.
Eye Blink Detection Module: We used the architecture presented in ALEBk (Daza et al. 2022) but trained on mEBAL from scratch, with only the eye blinks of the RGB camera. This detector has been inspired by the popular VGG16 Neural Network model and it is a binary classifier (open or closed eyes) using two input images (cropped left eye and cropped right eye). For the region of interest localization, we also followed ALEBk: i) face detection, ii) landmark detection, iii) face alignment, and iv) eye cropping. For each frame, the module classifies into open or closed eyes, therefore for each frame we have a scalar value between 0 and 1 as a feature fB.
Facial Expression Detection Module: Our model was inspired by the work of (Zhang et al. 2021). The model was trained using a FaceNet-Inception architecture pretrained with VGGFace2 and retrained with the Google Facial Expression Comparison (FEC) dataset. The model follows the same experimental protocol proposed in (Zhang et al. 2021) to create a disentangled Facial Expression Embedding. 
The resulting Facial Expression Embedding fE consists of 16 features that are used in our attention level estimation.
2.3 Attention Level Estimation based on Facial Features
The features {fB, fL, fP, fE} obtained with the facial analysis models are used to estimate high and low attention periods. These features are obtained from very different behavioral or physiological processes, which is why we first analyze them separately, to understand the relation of each feature with the cognitive load estimation. To perform this analysis, we use information from mEBAL. On the one hand, we use the attention levels from the EEG band as ground truth for the experiments. On the other hand, we used RGB video recordings from the e-learning sessions in order to estimate the attention levels through image processing.
The attention levels given by the band are captured every second (1 Hz). In order to capture enough behavioral features, the attention level estimation proposed in this work is computed over one-minute time frames displaced every second (i.e., the attention is estimated every second based on the features captured during the last minute). We calculate the average band attention level per minute and concatenate the feature vectors obtained when processing the images {fB, fL, fP, fE}, resulting in, for each attention level estimation (i.e., every second): 30 frames per second × 60 seconds × feature dimension.
The main goal of this work is to be able to detect one-minute time periods of high and low attention levels. Attention levels will vary for each student, which is why we follow the same approach as ALEBk for the same mEBAL database, where two symmetric thresholds are proposed for the segmentation of high and low attention periods: high attention (attention higher than a threshold τH) and low attention (attention lower than a threshold τL), which are defined in relation to the probability density function (PDF) of the attention levels of all users.
We individually evaluate eye blink (fB), head pose (fP), landmarks (fL), and facial expressions (fE) as binary monomodal classifiers (high or low attention). For all modules (see Fig. 1) we used the same classifying algorithm based on a Support Vector Machine (SVM) with a linear kernel using a squared l2 penalty with a regularization parameter between 1e-4 and 1e2, and a value of 1e-3 for the tolerance on the stopping criterion.
Figure 2: Probability density functions from scores obtained by our attention estimation systems during high/low attention (one-minute periods). Best systems/combinations for: monomodal systems in Top row and multimodal systems in Bottom row.
Table 1: Attention estimation accuracy results using the mEBAL database for the proposed monomodal approaches. We set the value of τL at 10% and τH at 90%. Also, two accuracy measurements were used: maximum accuracy and equal error rate (EER) accuracy.
Module max acc acc=1-EER
Eye Blink (EB) 0.7639 0.7596
Expressions (Expr.) 0.6991 0.6705
Landmark (EAR) 0.6624 0.6047
Head Pose (HP) 0.5977 0.5793
Landmark (Head Distance) 0.5872 0.5652
Unimodal attention level estimation: The training process is divided in the following steps: 1) Each frame was processed separately to obtain the four feature vectors (one from each module); 2) The feature vectors of all frames available in a one-minute video were concatenated (30 frames/second × 60 seconds × number of features of the module). 
3) We trained one SVM for each module as binary classifiers of attention estimation per minute (high or low attention level).
Multimodal attention level estimation: We combined the scores obtained for each of the facial analysis modules {sB, sL, sP, sE} according to a weighted sum sF = wB·sB + wL·sL + wP·sP + wE·sE, with equal weights in this initial study (subject to optimization in future work). Finally, the fused score sF was compared with the threshold τ to infer the attention level (high or low).
3 EXPERIMENTS AND RESULTS
3.1 Experimental Protocol
We used the protocol proposed in (Daza et al. 2022). The videos from the mEBAL database were processed to obtain the one-minute periods of high and low attention levels. We considered the lowest 10% percentile for low attention, while high attention corresponds to the highest percentile (τL = 10% and τH = 90% values). In total, we obtained 3,706 one-minute samples from the students in the database. From these samples, 1,852 correspond to high attention periods and 1,854 to low attention periods.
We used the "leave-one-out" cross-validation protocol, leaving one user out for testing and training using the remaining users; the process is repeated with all users in the database. The decision threshold is chosen at the point where False Positive and False Negative rates were equal and at the point where classification accuracy is maximized.
3.2 Unimodal Experiments
Table 1 shows the accuracy results for each monomodal approach and the top row in Fig. 2 shows the probability density distributions of the obtained scores for each monomodal classifier. The results show that there is a higher separability between distributions for modules based on eye blink (EB) and facial expressions (Expr.), obtaining maximum accuracies of 76.39% and 69.91%, respectively. As we can see, high attention levels (in most of the cases) are easier to recognize than low attention levels, which have a more spread density distribution. These results make sense in an evaluation environment like mEBAL, due to the fact that students would normally be focused, with high attention moments, during short-time tasks. Note that sessions in the mEBAL dataset have a duration between 15 and 30 minutes where the students solve different types of tasks.
As expected, the system based on head pose (HP) shows the second worst results. By itself this is not a clear attention estimation indicator; however, its information can be useful for multimodal approaches as we will see later. Landmark features related to the proximity of students towards the camera (Head Distance) show two distributions almost entirely overlapped, indicating a lack of utility for attention estimation.
Table 2: Attention estimation accuracy results using the mEBAL database for the best combinations of the proposed multimodal approaches. We set the value of τL at 10% and τH at 90%. Also, two accuracy measurements were used: maximum accuracy and equal error rate (EER) accuracy. Systems: EB = Eye Blink, Expr. = Expressions, EAR = EAR feature from Landmarks, HP = Head Pose.
Modules max acc acc=1-EER
EB & HP 0.7969 0.7853
EB & Expr. 0.7769 0.7734
HP & Expr. 0.7590 0.7334
EB & EAR 0.7501 0.7377
HP & EAR 0.6764 0.6683
Expr. & EAR 0.6759 0.6486
EB & HP & Expr. 0.8215 0.8198
EB & HP & EAR 0.7947 0.7885
EB & EAR & Expr. 0.7618 0.7407
HP & EAR & Expr. 0.7250 0.7172
EB & HP & EAR & Expr. 0.8066 0.7966
The features from this module (Head Distance) were therefore removed from the multimodal approach. Regarding the Eye Aspect Ratio (EAR) feature estimated through landmarks, it provides worse results in comparison to the ones with eye blinks and facial expressions.
3.3 Multimodal Experiments
We now perform an analysis to detect which combination of the previously mentioned monomodal modules obtains the best results in our multimodal approach. The bottom of Fig. 2 shows the probability density distributions of the best combination from the scores, and Table 2 shows the accuracy for each multimodal approach.
The best scores are obtained from the combination of 3 modules (eye blink detection, head pose, and facial expressions), see Fig. 3, leading to an accuracy (1-EER) of 81.98%, which shows a significant improvement (ca. 25% relative reduction in error rates) in relation to the best obtained result with our monomodal approach (75.96% 1-EER accuracy). The second system with the best results is the combination of all 4 monomodal modules, which obtains a 79.66% accuracy. This shows that including the system based on EAR worsens the results.
The best combination using only two modules is the eye blink and head pose with 78.53% accuracy, showing that the detection of the head pose can be of great help when it is combined with another module to obtain more precise information about the context in the attention estimation; in fact, the head pose improves all the combinations where it was included (2, 3, or 4 combinations).
Figure 3: Comparison of the Receiver Operating Characteristic curve (ROC) obtained for the multimodal approach with the highest accuracy (blue line) and for each of the monomodal systems that belong to this combination.
We finally compare with an existing approach: ALEBk (Daza et al. 2022), which recently proposed an attention classification method (low or high attention) based on the eye blink frequency per minute. The method was evaluated over the mEBAL dataset with a resulting best accuracy (1-EER) of 70%. The results shown in Tables 1 and 2 (obtained by our monomodal and multimodal approaches, respectively) significantly outperform the ALEBk results over the same mEBAL database, same protocol, and same percentile of 10% attention periods: from the ALEBk best accuracy of 70% to our best of ca. 82%, which means ca. 40% relative reduction in error rates.
4 CONCLUSION
We performed an analysis of high and low attention estimation based on face analysis, using monomodal and multimodal approaches. We used different features that have proven to be effective for attention estimation, and for that, we have used recent technologies for Eye Blink Detection, Facial Expression Analysis, Head Pose, and Landmark Detection.
The results have shown the capacity of multimodal approaches to improve current methods for attention estimation. We have obtained ca. 82% accuracy (as 1-EER) with a multimodal system that combines eye blink, facial expressions, and head pose features. In relation to the best obtained result with a monomodal system, we got ca. 76% classification accuracy for the eye blink feature. Also, these results have corroborated a clear correlation between eye blink and attention.
Our results have outperformed the ones obtained by ALEBk (Daza et al. 2022). 
The best obtained result by\nALEBk was around 70% classification accuracy, in compar-\nison with our proposed multimodal system that obtained ca.\n82% accuracy. This means a relate improvement in error re-\nduction (EER) of ca. 40%.\nIn future studies, we will explore other features that have\nshown a direct relation with attention levels like heart rate\n(Hernandez-Ortega et al. 2020b), eye pupil size (Rafiqi et al.\n2015; Krejtz et al. 2018), gaze tracking (Wang et al. 2014),\nkeystroking (Morales et al. 2016b,a), etc. Also, we will ex-\nplore new attention classifier architectures like Long and\nShort-Term Memory (LSTM Neural Networks), or other ar-\nchitectures, combining both short and long-term informa-\ntion. More advanced adaptive and user-dependent fusion\nschemes will be also studied (Fierrez et al. 2018).\n5 ACKNOWLEDGMENTS\nSupport by projects: BIBECA (RTI2018-101248-B-I00\nMINECO/FEDER), HumanCAIC (TED2021-131787B-I00\nMICINN), TRESPASS-ETN (H2020-MSCA-ITN-2019-\n860813), and BIO-PROCTORING (GNOSS Program,\nAgreement Ministerio de Defensa-UAM-FUAM dated 29-\n03-2022). Roberto Daza is supported by a FPI fellowship\nfrom MINECO/FEDER.\nReferences\nAsteriadis, S.; Tzouveli, P.; Karpouzis, K.; and Kollias, S.\n2009. Estimation of Behavioral User State Based On Eye\nGaze and Head Pose—Application in an E-learning Envi-\nronment. Multimedia Tools and Applications , 41(3): 469–\n493.\nBagley, J.; and Manelis, L. 1979. Effect of Awareness on an\nIndicator of Cognitive Load. Perceptual and Motor Skills ,\n49(2): 591–594.\nBerral-Soler, R.; Madrid-Cuevas, F. J.; Muñoz-Salinas, R.;\nand Marín-Jiménez, M. J. 2021. RealHePoNet: A robust\nsingle-stage ConvNet for head pose estimation in the wild.\nNeural Computing and Applications , 33(13): 7673–7689.\nBowers, J.; and Kumar, P. 2015. Students’ Perceptions of\nTeaching and Social Presence: A Comparative Analysis of\nFace-To-Face and Online Learning Environments. Web-\nBased Learning and Teaching Technologies , 10(1): 27–44.\nChen, C.-M.; and Wang, J.-Y . 2018. Effects of Online\nSynchronous Instruction with an Attention Monitoring and\nAlarm Mechanism on Sustained Attention and Learning Per-\nformance. Interactive Learning Environ , 26(4): 427–443.\nChen, C.-M.; Wang, J.-Y .; and Yu, C.-M. 2017. Assessing\nthe Attention Levels of Students by using a Novel Attention\nAware System Based On Brainwave Signals. British Journal\nof Educational Technology , 48(2): 348–369.\nChen, P. 2018. Research on Sharing Economy and E-\nLearning in the Era of’Internet Plus. In Proc. Intl. Conf. on\nEducation Science and Economic Management , 751–754.\nDaza, R.; DeAlcala, D.; Morales, A.; Tolosana, R.; Cobos,\nR.; and Fierrez, J. 2022. ALEBk: Feasibility study of at-\ntention level estimation via blink detection applied to e-\nlearning. In Proc. AAAI Workshop on Artificial Intelligence\nfor Education .\nDaza, R.; Morales, A.; Fierrez, J.; and Tolosana, R. 2020.\nmEBAL: A Multimodal Database for Eye Blink Detection\nand Attention Level Estimation. In Proc. Intl. Conf. on Mul-\ntimodal Interaction , 32–36.\nDeng, J.; Guo, J.; Ververas, E.; Kotsia, I.; and Zafeiriou, S.\n2020. Retinaface: Single-Shot Multi-Level Face Localisa-\ntion in the Wild. In Proc. IEEE/CVF Conference on Com-\nputer Vision and Pattern Recognition , 5203–5212.Dong, X.; Yan, Y .; Ouyang, W.; and Yang, Y . 2018. Style\nAggregated Network for Facial Landmark Detection. In\nProc. 
IEEE Conference on Computer Vision and Pattern\nRecognition , 379–388.\nFierrez, J.; Morales, A.; Vera-Rodriguez, R.; and Camacho,\nD. 2018. Multiple Classifiers in Biometrics. Part 1: Funda-\nmentals and Review. Information Fusion , 44: 57–64.\nFierrez-Aguilar, J.; Garcia-Romero, D.; Ortega-Garcia, J.;\nand Gonzalez-Rodriguez, J. 2005a. Adapted user-dependent\nmultimodal biometric authentication exploiting general in-\nformation. Pattern Recognition Letters , 26(16): 2628–2639.\nFierrez-Aguilar, J.; Garcia-Romero, D.; Ortega-Garcia, J.;\nand Gonzalez-Rodriguez, J. 2005b. Bayesian adaptation for\nuser-dependent multimodal biometric authentication. Pat-\ntern Recognition , 38(8): 1317–1319.\nFujisawa, K.; and Aihara, K. 2009. Estimation of User In-\nterest from Face Approaches Captured by Webcam. In Proc.\nIntl. Conf. on Virtual and Mixed Reality , 51–59.\nGourier, N.; Hall, D.; and Crowley, J. L. 2004. Estimating\nFace Orientation from Robust Detection of Salient Facial\nStructures. In FG Net Workshop on Visual Observation of\nDeictic Gestures , 7.\nGrafsgaard, J.; Wiggins, J. B.; Boyer, K. E.; Wiebe, E. N.;\nand Lester, J. 2013. Automatically Recognizing Facial Ex-\npression: Predicting Engagement and Frustration. In Proc.\nof Educational Data Mining .\nHall, J. E.; and Hall, M. E., eds. 2020. Guyton and Hall\nTextbook of Medical Physiology e-Book . Elsevier.\nHernandez-Ortega, J.; Daza, R.; Morales, A.; Fierrez, J.; and\nOrtega-Garcia, J. 2020a. edBB: Biometrics and Behavior for\nAssessing Remote Education. In Proc. AAAI Workshop on\nArtificial Intelligence for Education .\nHernandez-Ortega, J.; Daza, R.; Morales, A.; Fierrez, J.; and\nTolosana, R. 2020b. Heart rate estimation from face videos\nfor student assessment: experiments on edBB. In Proc. of\nAnnual Computers, Software, and Applications Conference ,\n172–177.\nK. Holland, M.; and Tarlow, G. 1972. Blinking and Mental\nLoad. Psychological Reports , 31(1): 119–127.\nKoestinger, M.; Wohlhart, P.; Roth, P. M.; and Bischof, H.\n2011. Annotated Facial Landmarks in the Wild: A Large-\nScale, Real-World Database for Facial Landmark Localiza-\ntion. In Proc. Intl. Conf. on computer vision workshops ,\n2144–2151.\nKrejtz, K.; Duchowski, A. T.; Niedzielska, A.; Biele, C.; and\nKrejtz, I. 2018. Eye Tracking Cognitive Load using Pupil\nDiameter and Microsaccades with Fixed Gaze. PloS One ,\n13: 1–23.\nLi, X.; Hu, B.; Zhu, T.; Yan, J.; and Zheng, F. 2009. To-\nwards Affective Learning with an EEG Feedback Approach.\nInProc. of the First ACM International Workshop on Multi-\nmedia Technologies for Distance Learning , 33–38.\nLi, Y .; Li, X.; Ratcliffe, M.; Liu, L.; Qi, Y .; and Liu, Q. 2011.\nA Real-Time EEG-based BCI System for Attention Recog-\nnition in Ubiquitous Environment. In Proc. Intl. Workshop\non Ubiquitous Affective Awareness and Intelligent Interac-\ntion, 33–40.\nLin, F.-R.; and Kao, C.-M. 2018. Mental Effort Detection\nusing EEG Data in E-learning Contexts. Computers & Edu-\ncation , 112: 63–79.\nLuo, Z.; Jingying, C.; Guangshuai, W.; and Mengyi, L.\n2022. A Three-Dimensional Model of Student Interest dur-\ning Learning using Multimodal Fusion with Natural Sens-\ning Technology. Interactive Learning Environments , 30(6):\n1117–1130.\nMcDaniel, B.; D’Mello, S.; King, B.; Chipman, P.; Tapp, K.;\nand Graesser, A. 2007. Facial Features for Affective State\nDetection in Learning Environments. In Proc. of the Annual\nMeeting of the Cognitive Science Society .\nMonkaresi, H.; Bosch, N.; Calvo, R. A.; and D’Mello, S. K.\n2016. 
Automated Detection of Engagement using Video-\nBased Estimation of Facial Expressions and Heart Rate.\nIEEE Transactions on Affective Computing , 8(1): 15–28.\nMorales, A.; Fierrez, J.; Gomez-Barrero, M.; Ortega-Garcia,\nJ.; Daza, R.; Monaco, J. V .; Montalvão, J.; Canuto, J.; and\nGeorge, A. 2016a. KBOC: Keystroke Biometrics Ongoing\nCompetition. In Proc. Intl. Conf. on Biometrics Theory, Ap-\nplications and Systems , 1–6.\nMorales, A.; Fierrez, J.; Tolosana, R.; Ortega-Garcia, J.;\nGalbally, J.; Gomez-Barrero, M.; Anjos, A.; and Marcel, S.\n2016b. Keystroke Biometrics Ongoing Competition. IEEE\nAccess , 4: 7736–7746.\nNappi, M.; Ricciardi, S.; and Tistarelli, M. 2018. Context\nAwareness in Biometric Systems and Methods: State of the\nArt and Future Scenarios. Image and Vision Comp. , 76: 27–\n37.\nPeng, S.; Chen, L.; Gao, C.; and Tong, R. J. 2020a. Predict-\ning Students’ Attention Level with Interpretable Facial and\nHead Dynamic Features in an Online Tutoring System. In\nProc. AAAI Conf. on Artificial Intelligence .\nPeng, S.; Chen, L.; Gao, C.; and Tong, R. J. 2020b. Predict-\ning Students’ Attention Level with Interpretable Facial and\nHead Dynamic Features in an Online Tutoring System (Stu-\ndent Abstract). In Proc. of the AAAI Conference on Artificial\nIntelligence , 13895–13896.\nRaca, M.; Kidzinski, L.; and Dillenbourg, P. 2015. Trans-\nlating Head Motion into Attention-Towards Processing of\nStudent’s Body-Language. In Proc. of the 8th International\nConference on Educational Data Mining .\nRafiqi, S.; Wangwiwattana, C.; Kim, J.; Fernandez, E.; Nair,\nS.; and Larson, E. C. 2015. PupilWare: Towards Pervasive\nCognitive Load Measurement using Commodity Devices. In\nProc. of the 8th ACM International Conference on Pervasive\nTechnologies Related to Assistive Environments , 1–8.\nRebolledo-Mendez, G.; Dunwell, I.; Martínez-Mirón, E. A.;\nVargas-Cerdán, M. D.; Freitas, S. d.; Liarokapis, F.; and\nGarcía-Gaona, A. R. 2009. Assessing Neurosky’s Usabil-\nity to Detect Attention Levels in an Assessment Exercise. In\nProc. Intl. Conf. on Human-Computer Interaction , 149–158.\nSagonas, C.; Antonakos, E.; Tzimiropoulos, G.; Zafeiriou,\nS.; and Pantic, M. 2016. 300 Faces in-the-Wild Challenge:\nDatabase and Results. Image and Vision Comp. , 47: 3–18.Shen, L.; Wang, M.; and Shen, R. 2009. Affective E-\nlearning: Using “Emotional” Data to Improve Learning in\nPervasive Learning Environment. Educational Technology\n& Society , 12(2): 176–189.\nSoukupovà, T.; and Cech, J. 2016. Eye Blink Detection us-\ning Facial Landmarks. In Proc. Computer Vision Winter\nWorkshop .\nTang, H.; Dai, M.; Yang, S.; Du, X.; Hung, J.-L.; and Li,\nH. 2022. Using Multimodal Analytics to Systemically In-\nvestigate Online Collaborative Problem-Solving. Distance\nEducation , 1–28.\nWang, Q.; Yang, S.; Liu, M.; Cao, Z.; and Ma, Q. 2014. An\nEye-Tracking Study of Website Complexity from Cognitive\nLoad Perspective. Decision Support Systems , 62: 1–10.\nYang, S.; Luo, P.; Loy, C.-C.; and Tang, X. 2016. Wider\nFace: A Face Detection Benchmark. In Proc. IEEE confer-\nence on Computer Vision and Pattern Recognition , 5525–\n5533.\nZaletelj, J.; and Košir, A. 2017. Predicting Students’ Atten-\ntion in the Classroom from Kinect Facial and Body Features.\nEURASIP Journal on Image and Video Processing , 2017(1):\n1–12.\nZhang, W.; Ji, X.; Chen, K.; Ding, Y .; and Fan, C. 2021.\nLearning a facial expression embedding disentangled from\nidentity. 
In Proceedings of the IEEE/CVF conference on\ncomputer vision and pattern recognition , 6759–6768.",
"main_paper_content": null
}
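Editor's note: the extracted text above describes per-module SVM classifiers whose scores are fused with equal weights and compared against a threshold. The sketch below only illustrates that score-level fusion; it is not the authors' code, and the function names, the scikit-learn LinearSVC choice, and the default threshold of 0.0 are assumptions introduced here for readability.

```python
# Illustrative sketch (not the authors' implementation): equal-weight,
# score-level fusion of per-module binary classifiers, as described in the
# extracted MATT text above. Module names and API choices are assumptions.
import numpy as np
from sklearn.svm import LinearSVC

def train_module_classifiers(features_by_module, labels):
    """Train one linear SVM per face-analysis module (eye blink, landmarks,
    head pose, facial expressions) on its concatenated per-minute features."""
    classifiers = {}
    for name, X in features_by_module.items():
        clf = LinearSVC(C=1.0)   # the paper sweeps the regularizer between 1e-4 and 1e2
        clf.fit(X, labels)       # labels: 1 = high attention, 0 = low attention
        classifiers[name] = clf
    return classifiers

def fused_attention_score(classifiers, features_by_module, weights=None):
    """Weighted sum of per-module decision scores (equal weights by default,
    matching the paper's initial study)."""
    names = list(classifiers)
    if weights is None:
        weights = {name: 1.0 / len(names) for name in names}
    scores = [weights[n] * classifiers[n].decision_function(features_by_module[n])
              for n in names]
    return np.sum(scores, axis=0)

def predict_high_attention(classifiers, features_by_module, threshold=0.0):
    """Compare the fused score with a threshold; 0.0 is a placeholder, whereas
    the paper picks the EER / maximum-accuracy operating point on held-out users."""
    return fused_attention_score(classifiers, features_by_module) > threshold
```

In the record above the threshold is chosen with leave-one-out validation across users, so the fixed default here is purely illustrative.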
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "v0He-u07oeu",
"year": null,
"venue": null,
"pdf_link": "/pdf/d30cabe263f120122bb590f8ffad3a9b6a4f21f6.pdf",
"forum_link": "https://openreview.net/forum?id=v0He-u07oeu",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Cross-Attribute Consistency Detection in E-commerce Catalog with Large Language Models",
"authors": [
"Anonymous"
],
"abstract": "Comprehending the quality of data represented on an E-commerce product page is a challenge and is currently achieved with varied approaches that are dependent on large task-specific datasets curated with human efforts. This slows down the process of scaling to a large catalog scope. The recent advancements in Large Language Models (LLM) have revolutionized their ability to significantly enhance various downstream applications using small and carefully curated datasets. In this paper, our focus is to explore LLM capability in addressing a challenge related to the catalog quality assessment. To be specific, we aim to detect the consistency of information presented between Unstructured Attributes (UA) (incl. Title, Bullet Points (BP), Product Description (PD)), and Structured Attributes (SA) within a product page through pairwise evaluations using predefined class labels. To achieve it, we propose a novel approach, $\\texttt{CENSOR}$, that utilizes LLM in two phases. In the first phase, off-the-shelf LLM is leveraged in a zero-shot manner using prompt engineering techniques. While in the second phase, open-source LLM is fine-tuned with a small human curated dataset along with the weak labeled data generated in first phase as a data augmentation technique to incorporate domain-specific knowledge. The fine-tuned LLM overcomes the deficiencies observed in the first phase and entails the model to address the consistency detection task. Evaluation conducted using the E-commerce dataset which include a comprehensive set of 186 distinct combinations of <Product Type, SA>, $\\texttt{CENSOR}$ fine-tuned model outperforms the baseline method and $\\texttt{CENSOR}$ zero-shot model with +34.4 and +19.4 points on F1-score respectively.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "4g1RPSNPAlc",
"year": null,
"venue": "WES 2002",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=4g1RPSNPAlc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Service Infrastructure for e-Science: The Case of the ARION System",
"authors": [
"Catherine E. Houstis",
"Spyros Lalis",
"Vassilis Christophides",
"Dimitris Plexousakis",
"Manolis Vavalis",
"Marios Pitikakis",
"Kyriakos Kritikos",
"Antonis Smardas",
"Charalampos Gikas"
],
"abstract": "The ARION system provides basic e-services of search and retrieval of objects in scientific collections, such as, data sets, simulation models and tools necessary for statistical and/or visualization processing. These collections may represent application software of scientific areas, they reside in geographically disperse organizations and constitute the system content. The user, as part of the retrieval mechanism, may dynamically invoke on-line computations of scientific data sets when the latter are not found into the system. Thus, ARION provides the basic infrastructure for accessing and producing scientific information in an open, distributed and federated system. More advanced e-services, which depend on the scientific content of the system, can be built upon this infrastructure, such as decision making and/or policy support using various information brokering techniques.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rWNZx6sE1-5",
"year": null,
"venue": "IEEE PACT 2000",
"pdf_link": "https://ieeexplore.ieee.org/iel5/7126/19206/00888352.pdf",
"forum_link": "https://openreview.net/forum?id=rWNZx6sE1-5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Fast Algorithm for Scheduling Instructions with Deadline Constraints on RISC Processors",
"authors": [
"Hui Wu",
"Joxan Jaffar",
"Roland H. C. Yap"
],
"abstract": "We present a fast algorithm for scheduling UET (Unit Execution Time) instructions with deadline constraints in a basic block on RISC machines with multiple processors. Unlike Palem and Simon's algorithm, our algorithm allows latency of 1/sub ij/=-1 which denotes that instruction v/sub j/ cannot be started before v/sub i/. The time complexity of our algorithm is O(ne+nd), where n is the number of instructions, e is the number of edges in the precedence graph and d is the maximum latency. Our algorithm is guaranteed to compute a feasible schedule whenever one exists in the following special cases: 1) Arbitrary precedence constraints, latencies in {0, 1} and one processor. In this special case, our algorithm improves the existing fastest algorithm from O(ne+e'log n) to O(min{ne, n/sup 2.376}/), where e' is the number of edges in the transitively closed precedence graph. 2) Arbitrary precedence constraints, latencies in {-1, 0} and two processors. In the special case where all latencies are 0, our algorithm degenerates to Garey and Johnson's two processor algorithm. 3) Special precedence constraints in the form of monotone interval graph, arbitrary latencies in {-1, 0, 1, /spl middot//spl middot//spl middot/, d} and multiple processors. 4) Special precedence constraints in the form of in-forest, equal latencies and multiple processors. In the above special cases, if no feasible schedule exists, our algorithm will compute a schedule with minimum lateness. Moreover, by setting all deadlines to a sufficiently large integer, our algorithm will compute a schedule with minimum length in all the above special cases and the special case of out-forest, equal latencies and multiple processors.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "e3f3U8-h12e",
"year": null,
"venue": "VTC Spring 2022",
"pdf_link": "https://ieeexplore.ieee.org/iel7/9860273/9860350/09860871.pdf",
"forum_link": "https://openreview.net/forum?id=e3f3U8-h12e",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Parking Behaviour Analysis of Shared E-Bike Users Based on a Real-World Dataset - A Case Study in Dublin, Ireland",
"authors": [
"Sen Yan",
"Mingming Liu",
"Noel E. O'Connor"
],
"abstract": "In recent years, an increasing number of shared E-bikes have been rolling out rapidly in our cities. It, therefore, becomes important to understand new behaviour patterns of the cyclists in using these E-bikes as a foundation for the novel design of shared micromobility services as part of the realisation of next-generation intelligent transportation systems. In this paper, we deeply investigate the users’ behaviour of shared E-bikes in a case study by using the real-world dataset collected from the shared E-bike company, MOBY, which currently operates in Dublin, Ireland. More specifically, we look into the parking behaviours of users as we know that inappropriate parking of these bikes can not only increase the management costs of the company but also result in other users’ inconveniences, especially in situations of a battery shortage, which inevitably reduces the overall operational efficacy of these shared E-bikes. Our work has conducted analysis at both bike station and individual levels in a fully anonymous and GDPR-Compliant manner, and our results have shown that up to 12.9% of shared E-bike users did not park their bikes properly at the designated stands. Different visualisation tools have been applied to better illustrate our obtained results.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "lZqOGo4VWz",
"year": null,
"venue": "World Wide Web 2019",
"pdf_link": "https://link.springer.com/content/pdf/10.1007/s11280-018-0554-5.pdf",
"forum_link": "https://openreview.net/forum?id=lZqOGo4VWz",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Predicting e-book ranking based on the implicit user feedback",
"authors": [
"Bin Cao",
"Chenyu Hou",
"Hongjie Peng",
"Jing Fan",
"Jian Yang",
"Jianwei Yin",
"Shuiguang Deng"
],
"abstract": "In this paper, we plan to predict a ranking on e-books by analyzing the implicit user behavior, and the goal of our work is to optimize the ranking results to be close to that of the ground truth ranking where e-books are ordered by their corresponding reader number. As far as we know, there exist little work on predicting the future e-book ranking. To this end, through analyzing various user behavior from a popular e-book reading mobile APP, we construct three groups of features that are related to e-book ranking, where some features are created based on the popular metrics from the e-commerce, e.g., conversion rates. Then, we firstly propose a baseline method by using the idea of learning to rank (L2R), where we train the ranking model for each e-book by taking all its past user feedback within a time interval into consideration. Then we further propose TDLR: a Time Decay based Learning to Rank method, where we separately train the ranking model on each day and combine these models by gradually decaying the importance of them over time. Through extensive experimental studies on the real-world dataset, our approach TDLR is proved to significantly improve the e-book ranking quality more than 10% when compared with the L2R method where no time decay is considered.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
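Editor's note: the abstract above describes TDLR as training one ranking model per day and combining the models while gradually decaying the importance of older days. The sketch below illustrates one way such a combination could look; the exponential decay and every identifier are assumptions made here for illustration, not the paper's exact scheme.

```python
# Illustrative sketch of time-decayed combination of per-day ranking models,
# in the spirit of the TDLR idea in the abstract above. The exponential decay
# and all names are assumptions; the paper's exact weighting may differ.
from typing import Callable, Dict, List

def time_decayed_score(models_by_day: Dict[int, Callable[[dict], float]],
                       ebook_features: dict,
                       current_day: int,
                       decay: float = 0.9) -> float:
    """Combine per-day model scores, giving older days exponentially less weight."""
    weighted, total_weight = 0.0, 0.0
    for day, model in models_by_day.items():
        age = current_day - day
        w = decay ** age          # weight shrinks as the model gets older
        weighted += w * model(ebook_features)
        total_weight += w
    return weighted / total_weight if total_weight else 0.0

def rank_ebooks(models_by_day, ebooks: List[dict], current_day: int) -> List[dict]:
    """Rank e-books by the combined, time-decayed score (descending)."""
    return sorted(ebooks,
                  key=lambda e: time_decayed_score(models_by_day, e, current_day),
                  reverse=True)
```

The design intent carried over from the abstract is that recent user behavior dominates the ranking while older daily models still contribute a diminishing amount of signal.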
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "zkJ0RNvMqnr",
"year": null,
"venue": "DPM/QASA@ESORICS 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=zkJ0RNvMqnr",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Privacy Threats in E-Shopping (Position Paper)",
"authors": [
"Jesus Diaz",
"Seung Geol Choi",
"David Arroyo",
"Angelos D. Keromytis",
"Francisco de Borja Rodríguez",
"Moti Yung"
],
"abstract": "E-shopping has grown considerably in the last years, providing customers with convenience, merchants with increased sales, and financial entities with an additional source of income. However, it may also be the source of serious threats to privacy. In this paper, we review the e-shopping process, discussing attacks or threats that have been analyzed in the literature for each of its stages. By showing that there exist threats to privacy in each of them, we argue our following position: “It is not enough to protect a single independent stage, as is usually done in privacy respectful proposals in this context. Rather, a complete solution is necessary spanning the overall process, dealing also with the required interconnections between stages.” Our overview also reflects the diverse types of information that e-shopping manages, and the benefits (e.g., such as loyalty programs and fraud prevention) that system providers extract from them. This also endorses the need for solutions that, while privacy preserving, do not limit or remove these benefits, if we want prevent all the participating entities from rejecting it.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "5ZTa89XZk62",
"year": null,
"venue": "ICWE 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=5ZTa89XZk62",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Collaborative E-learning Component for the IDEFIX Project",
"authors": [
"T. Hernán Sagástegui Ch.",
"José Emilio Labra Gayo",
"Juan Manuel Cueva Lovelle",
"José M. Morales Gil",
"María E. Alva O.",
"Eduardo Valdés",
"Cecilia García"
],
"abstract": "This paper presents the design and architecture of a collaborative learning model based on the Web as an experience that it is being done at the University of Oviedo, giving support to the students of a logic course, in the collaborative achievement of exercises by means of e-meetings and in the training of the achievement of exams by means of a virtual reality game. This model is developed like a component of the project IDEFIX and it uses Web services.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "qBl9cbQ66-",
"year": null,
"venue": "DEXA (2) 2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=qBl9cbQ66-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Multi-dimensional Sentiment Analysis for Large-Scale E-commerce Reviews",
"authors": [
"Lizhou Zheng",
"Peiquan Jin",
"Jie Zhao",
"Lihua Yue"
],
"abstract": "E-commerce reviews reveal the customers’ attitudes on the products, which are very helpful for customers to know other people’s opinions on interested products. Meanwhile, producers are able to learn the public sentiment on their products being sold in E-commerce platforms. Generally, E-commerce reviews involve many aspects of products, e.g., appearance, quality, price, logistics, and so on. Therefore, sentiment analysis on E-commerce reviews has to cope with those different aspects. In this paper, we define each of those aspects as a dimension of product, and present a multi-dimensional sentiment analysis approach for E-commerce reviews. In particular, we employ a sentiment lexicon expanding mechanism to remove the word ambiguity among different dimensions, and propose an algorithm for sentiment analysis on E-commerce reviews based on rules and a dimensional sentiment lexicon. We conduct experiments on a large-scale dataset involving over 28 million reviews, and compare our approach with the traditional way that does not consider dimensions of reviews. The results show that the multi-dimensional approach reaches a precision of 95.5% on average, and outperforms the traditional way in terms of precision, recall, and F-measure.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
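A minimal sketch of the lexicon-and-rules idea described in the abstract above: each product dimension (e.g. price, logistics) has its own aspect keywords and its own sentiment lexicon, so that a sentence is scored only on the dimensions it mentions and a word can carry a dimension-specific polarity. The tiny lexicons, keyword sets and function names below are illustrative assumptions, not the paper's actual resources or rules.

```python
# Toy multi-dimensional, lexicon-based sentiment scoring (illustrative assumptions only).
ASPECT_KEYWORDS = {"price": {"price", "cost"}, "logistics": {"delivery", "shipping"}}
DIMENSION_LEXICON = {
    "price": {"low": +1, "high": -1, "fair": +1},
    "logistics": {"fast": +1, "slow": -1, "late": -1},
}

def score_review(review: str) -> dict:
    scores = {dim: 0 for dim in ASPECT_KEYWORDS}
    for sentence in review.lower().split("."):
        words = set(sentence.split())
        for dim, keywords in ASPECT_KEYWORDS.items():
            if words & keywords:  # the sentence talks about this dimension
                scores[dim] += sum(DIMENSION_LEXICON[dim].get(w, 0) for w in words)
    return scores

print(score_review("The price is low and fair. Shipping was slow and the delivery late."))
# {'price': 2, 'logistics': -2}
```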
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "vXe5jTPSERt",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=vXe5jTPSERt",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer e1cd (Part 3)",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "fBRbscfuor5",
"year": null,
"venue": "IEEE Trans. Control. Syst. Technol. 2019",
"pdf_link": "https://ieeexplore.ieee.org/iel7/87/8688595/08304800.pdf",
"forum_link": "https://openreview.net/forum?id=fBRbscfuor5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Toward Real-Time Autonomous Target Area Protection: Theory and Implementation",
"authors": [
"Jitesh Mohanan",
"Manikandasriram Srinivasan Ramanagopal",
"Raghav Harini Venkatesan",
"Bharath Bhikkaji"
],
"abstract": "This brief considers the target guarding problem (TGP) with a single pursuer F, a single evader E, and a stationary target T. The goal of F is to prevent E from capturing T, by intercepting E as far away from T as possible. An optimal solution to this problem, referred to as a command to optimal interception point (COIP), was proposed recently. This guidance law requires the positions of the agents involved. Typically, aerial sensors, such as GPS, used for obtaining these data may not always perform robustly on the field, thereby reducing the autonomy of the vehicles. The computational complexity of the expressions in the COIP law also makes it difficult for a real-time implementation. Here, the TGP is revisited and the optimal solution is reformulated to expressions that are suitable for autonomous systems with ranging sensors mounted on them. These expressions also allow for seamless real-time implementation in robotic hardware. The reformulation enables the optimal solution to be coded as a lookup table requiring minimal memory to further increase the speed of computations. An experimental setup with mobile robots is then used to validate the claims. The case of T lying in E's dominance region is considered a lost game for F. However, this is true only if E plays optimally. If E plays suboptimally F stands a chance to win the game. This case, which has not been analyzed earlier, is also discussed in this brief, and an optimal strategy for F is presented.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "pc1T3FKY1R",
"year": null,
"venue": "AICT/ICIW 2006",
"pdf_link": "https://ieeexplore.ieee.org/iel5/10670/33674/01602141.pdf",
"forum_link": "https://openreview.net/forum?id=pc1T3FKY1R",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Integrating Heterogeneous E-learning Systems",
"authors": [
"Junzhou Luo",
"Wei Li",
"Jiuxin Cao",
"Liang Ge"
],
"abstract": "E-learning is a novel education pattern that serves to solve the contradiction between the large amount of social demands and the lack of educational resources. With the popularization of e-learning, various teaching software, tools and e-learning platforms have unceasingly appeared. It is a burning question how to integrate various heterogeneous application systems for e-learning into an organic e-learning system. In this paper, an Integrated Framework for E-learning systems (IFE), which characterizes the reliability, the flexibility and the platform-independence, is proposed. It solves the problems faced while integrating the heterogeneous e-learning systems, such as the heterogeneity, the interoperation and the scalability. The practical application and the testing results on the performance indicate that the design of IFE is in reason and the e-learning system based on IFE can satisfy the concurrency requirements in practice.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "yfcp09bj2sA",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=yfcp09bj2sA",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "jdB2eKADk13",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=jdB2eKADk13",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "pp9oD1DETU5",
"year": null,
"venue": "Future Application and Middleware Technology on e-Science 2010",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=pp9oD1DETU5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Experiences and Requirements for Interoperability Between HTC and HPC-driven e-Science Infrastructure",
"authors": [
"Morris Riedel",
"Achim Streit",
"Daniel Mallmann",
"Felix Wolf",
"Thomas Lippert"
],
"abstract": "Recently, more and more e-science projects require resources in more than one production e-science infrastructure, especially when using HTC and HPC concepts together in one scientific workflow. But the interoperability of these infrastructures is still not seamlessly provided today and we argue that this is due to the absence of a realistically implementable reference model in Grids. Therefore, the fundamental goal of this paper is to identify requirements that allows for the definition of the core building blocks of an interoperability reference model that represents a trimmed down version of OGSA in terms of functionality, is less complex, more fine-granular and thus easier to implement. The identified requirements are underpinned with gained experiences from world-wide interoperability efforts.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "hSCtanfL2fP",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=hSCtanfL2fP",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Data Instance Prior for Transfer Learning in GANs",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Uwy7h4JBjmU",
"year": null,
"venue": "eScience 2007",
"pdf_link": "https://ieeexplore.ieee.org/iel5/4426856/4426857/04426922.pdf",
"forum_link": "https://openreview.net/forum?id=Uwy7h4JBjmU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Computational Steering and Online Visualization of Scientific Applications on Large-Scale HPC Systems within e-Science Infrastructures",
"authors": [
"Morris Riedel",
"Thomas Eickermann",
"Sonja Habbinga",
"Wolfgang Frings",
"Paul Gibbon",
"Daniel Mallmann",
"Felix Wolf",
"Achim Streit",
"Thomas Lippert",
"Wolfram Schiffmann",
"Andreas Ernst",
"Rainer Spurzem",
"Wolfgang E. Nagel"
],
"abstract": "In the past several years, many scientific applications from various domains have taken advantage of e-science infrastructures that share storage or computational resources such as supercomputers, clusters or PC server farms across multiple organizations. Especially within e-science infrastructures driven by high-performance computing (HPC) such as DEISA, online visualization and computational steering (COVS) has become an important technique to save compute time on shared resources by dynamically steering the parameters of a parallel simulation. This paper argues that future supercomputers in the Petaflop/s performance range with up to 1 million CPUs will create an even stronger demand for seamless computational steering technologies. We discuss upcoming challenges for the development of scalable HPC applications and limits of future storage/IO technologies in the context of next generation e- science infrastructures and outline potential solutions.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "mGQ-WHW1Am_",
"year": null,
"venue": "Robotics Auton. Syst. 2018",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=mGQ-WHW1Am_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Robot learning from demonstrations: Emulation learning in environments with moving obstacles",
"authors": [
"Amir M. Ghalamzan E.",
"Matteo Ragaglia"
],
"abstract": "Highlights • We present an approach to robot learning from demonstrations in dynamic environments. • The robot learns the task from demonstrations in a stationary environment. • The robot is then able to generalise and perform the task in dynamic environments. • We demonstrate the effectiveness of the approach with pick-and-place by YuMi robot. Abstract In this paper, we present an approach to the problem of Robot Learning from Demonstration (RLfD) in a dynamic environment, i.e. an environment whose state changes throughout the course of performing a task. RLfD mostly has been successfully exploited only in non-varying environments to reduce the programming time and cost, e.g. fixed manufacturing workspaces. Non-conventional production lines necessitate Human–Robot Collaboration (HRC) implying robots and humans must work in shared workspaces. In such conditions, the robot needs to avoid colliding with the objects that are moved by humans in the workspace. Therefore, not only is the robot: (i) required to learn a task model from demonstrations; but also, (ii) must learn a control policy to avoid a stationary obstacle. Furthermore, (iii) it needs to build a control policy from demonstration to avoid moving obstacles. Here, we present an incremental approach to RLfD addressing all these three problems. We demonstrate the effectiveness of the proposed RLfD approach, by a series of pick-and-place experiments by an ABB YuMi robot. The experimental results show that a person can work in a workspace shared with a robot where the robot successfully avoids colliding with him.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Sk7cHb-C-",
"year": null,
"venue": null,
"pdf_link": "/pdf/7a54a09d06726a6a8bc786b636d7280da6644d5f.pdf",
"forum_link": "https://openreview.net/forum?id=Sk7cHb-C-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Representing dynamically: An active process for describing sequential data",
"authors": [
"Juan Sebastian Olier",
"Emilia Barakova",
"Matthias Rauterberg",
"Carlo Regazzoni"
],
"abstract": "We propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions. The method simultaneously acquires representations of input data and its dynamics. It is based on a hierarchical generative model composed of two levels. In the first level, a model learns representations to generate observed data. In the second level, representational states encode the dynamics of the lower one. The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models. The method actively explores the latent space guided by its knowledge and the uncertainty about it. That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space. So, no encoder or inference models are used since the generators also serve as their inverse transformations.\nThe method is evaluated in two scenarios, with static images and with videos. The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders. With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed.",
"keywords": [
"Generative Models",
"Latent representations",
"Predictive coding",
"Recurrent networks",
"Sequential data"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "lzZstLVGVGW",
"year": null,
"venue": "NeurIPS 2022 Accept",
"pdf_link": "/pdf/b588a3d12a88fd3a848bbcc33b93e79f7b1e0927.pdf",
"forum_link": "https://openreview.net/forum?id=lzZstLVGVGW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Earthformer: Exploring Space-Time Transformers for Earth System Forecasting",
"authors": [
"Zhihan Gao",
"Xingjian Shi",
"Hao Wang",
"Yi Zhu",
"Bernie Wang",
"Mu Li",
"Dit-Yan Yeung"
],
"abstract": "Conventionally, Earth system (e.g., weather and climate) forecasting relies on numerical simulation with complex physical models and hence is both expensive in computation and demanding on domain expertise. With the explosive growth of spatiotemporal Earth observation data in the past decade, data-driven models that apply Deep Learning (DL) are demonstrating impressive potential for various Earth system forecasting tasks. The Transformer as an emerging DL architecture, despite its broad success in other domains, has limited adoption in this area. In this paper, we propose Earthformer, a space-time Transformer for Earth system forecasting. Earthformer is based on a generic, flexible and efficient space-time attention block, named Cuboid Attention. The idea is to decompose the data into cuboids and apply cuboid-level self-attention in parallel. These cuboids are further connected with a collection of global vectors. We conduct experiments on the MovingMNIST dataset and a newly proposed chaotic $N$-body MNIST dataset to verify the effectiveness of cuboid attention and figure out the best design of Earthformer. Experiments on two real-world benchmarks about precipitation nowcasting and El Niño/Southern Oscillation (ENSO) forecasting show that Earthformer achieves state-of-the-art performance.",
"keywords": [
"Machine Learning for Earth Science",
"Spatiotemporal Forecasting",
"Transformers"
],
"raw_extracted_content": null,
"main_paper_content": null
}
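A minimal sketch of the cuboid-decomposition idea described in the abstract above: split a spatiotemporal tensor into non-overlapping cuboids and run self-attention independently within each cuboid. This is not the Earthformer implementation (it omits, e.g., the global vectors and any learned projections beyond the attention module); the tensor layout, cuboid sizes and the use of torch.nn.MultiheadAttention are assumptions for illustration.

```python
# Cuboid-level self-attention sketch (assumption: simplified, not the actual Earthformer code).
import torch
import torch.nn as nn

def cuboid_self_attention(x, cuboid=(2, 4, 4), num_heads=4):
    """x: (B, T, H, W, C); the cuboid sizes must divide (T, H, W)."""
    B, T, H, W, C = x.shape
    t, h, w = cuboid
    attn = nn.MultiheadAttention(embed_dim=C, num_heads=num_heads, batch_first=True)
    # Reshape so that each cuboid becomes one attention "sequence" of length t*h*w.
    x = x.reshape(B, T // t, t, H // h, h, W // w, w, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7)            # (B, nT, nH, nW, t, h, w, C)
    seqs = x.reshape(-1, t * h * w, C)               # one row per cuboid
    out, _ = attn(seqs, seqs, seqs)                  # self-attention inside each cuboid, in parallel
    out = out.reshape(B, T // t, H // h, W // w, t, h, w, C)
    out = out.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(B, T, H, W, C)
    return out

x = torch.randn(2, 4, 8, 8, 16)                      # toy (batch, time, height, width, channels)
print(cuboid_self_attention(x).shape)                # torch.Size([2, 4, 8, 8, 16])
```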
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "EKjgEq1UQOc",
"year": null,
"venue": "EACL (2) 2017",
"pdf_link": "https://aclanthology.org/E17-2079.pdf",
"forum_link": "https://openreview.net/forum?id=EKjgEq1UQOc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Grounding Language by Continuous Observation of Instruction Following",
"authors": [
"Ting Han",
"David Schlangen"
],
"abstract": "Grounded semantics is typically learnt from utterance-level meaning representations (e.g., successful database retrievals, denoted objects in images, moves in a game). We explore learning word and utterance meanings by continuous observation of the actions of an instruction follower (IF). While an instruction giver (IG) provided a verbal description of a configuration of objects, IF recreated it using a GUI. Aligning these GUI actions to sub-utterance chunks allows a simple maximum entropy model to associate them as chunk meaning better than just providing it with the utterance-final configuration. This shows that semantics useful for incremental (word-by-word) application, as required in natural dialogue, might also be better acquired from incremental settings.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers , pages 491–496,\nValencia, Spain, April 3-7, 2017. c\r2017 Association for Computational Linguistics\nGrounding Language by Continuous\nObservation of Instruction Following\nTing Han andDavid Schlangen\nCITEC, Dialogue Systems Group, Bielefeld University\[email protected]\nAbstract\nGrounded semantics is typically learnt\nfrom utterance-level meaning representa-\ntions (e.g., successful database retrievals,\ndenoted objects in images, moves in a\ngame). We explore learning word and ut-\nterance meanings by continuous observa-\ntion of the actions of an instruction fol-\nlower ( IF). While an instruction giver ( IG)\nprovided a verbal description of a config-\nuration of objects, IFrecreated it using a\nGUI. Aligning these GUI actions to sub-\nutterance chunks allows a simple maxi-\nmum entropy model to associate them as\nchunk meaning better than just providing\nit with the utterance-final configuration.\nThis shows that semantics useful for in-\ncremental (word-by-word) application, as\nrequired in natural dialogue, might also be\nbetter acquired from incremental settings.\n1 Introduction\nSituated instruction giving and following is a good\nsetting for language learning, as it allows for the\nassociation of language with externalised mean-\ning. For example, the reaction of drawing a circle\non the top left of a canvas provides a visible signal\nof the comprehension of “ top left, a circle ”. That\nsuch signals are also useful for machine learn-\ning of meaning has been shown by some recent\nwork ( inter alia (Chen and Mooney, 2011; Wang\net al., 2016)). While in that work instructions\nwere presented as text and the comprehension\nsignals (goal configurations or successful naviga-\ntions) were aligned with full instructions, we ex-\nplore signals that are aligned more fine-grainedly,\npossibly to sub-utterance chunks of material. This,\nwe claim, is a setting that is more representative of\nsituated interaction, where typically no strict turntaking between instruction giving and execution is\nobserved.\nvierecksquareblauesblueeinakleinessmalllinksleftuntenbottom[row:102,column:280][small][blue][square]istisschrägdiagonallyrechtsrightdrüberaboveeinagrosserbiggergelberyellowkreiscircle[row:237,column:208][big][yellow][circle]time (ms)IG:IF:IG:IF:IG: Instruction giver IF: instruction follower\nFigure 1: Example of collaborative scene drawing\nwith IGwords (rounded rectangles) and IFreac-\ntions (blue diamonds) on a time line.\nFigure 1 shows two examples from our task.\nWhile the instruction giver ( IG) is producing their\nutterance (in the actual experiment, this is com-\ning from a recording), the instruction follower ( IF)\ntries to execute it as soon as possible through ac-\ntions in a GUI. The temporal placement of these\nactions relative to the words is indicated with\nblue diamonds in the figure. We use data of this\ntype to learn alignments between actions and the\nwords that trigger them, and show that the tem-\nporal alignment leads to a better model than just\nrecording the utterance-final action sequence.\n2 The learning task\nWe now describe the learning task formally. We\naim to enable a computer to learn word and utter-\nance meanings by observing human reactions in a\nscene drawing task. At the beginning, the com-\nputer knows nothing about the language. 
What it\nobserves are an unfolding utterance from an IGand\nactions from an IFwhich are performed while the\ninstruction is going on. Aligning each action a(or,\nmore precisely, action description , as will become\nclear soon) to the nearest word w, we can repre-\nsent an utterance / action sequence as follows:\nwt1,wt2,at3,···,wti,ati+1,···,wtn(1)491\n(Actions are aligned ‘to the left’, i.e. to the imme-\ndiately preceding or overlapping word.)\nAs the IFconcurrently follows the instruction\nand reacts, we make the simplifying assumption\nthat each action atiis a reaction to the words\nwhich came before it and disregard the possibility\nthat IFmight act on a predictions of subsequent in-\nstructions. For instance, in (1), we assume that the\nactionat3is the interpretation of the words wt1\nandwt2. When no action follows a given word\n(e.g.wtnin (1)), we take this word as not con-\ntributing to the task.\nWe directly take these action symbols aas the\nrepresentation of the utterance meaning so-far, or\nin other words, as its logical form; hence, the\nlearning task is to predict an action symbol as soon\nas it is appropriate to do so. The input is presented\nas a chunk of the ongoing utterance containing at\nleast the latest word. The utterance meaning Uof\na sequence{wt1,...,w tn}as a whole then is sim-\nply the concatenation of these actions:\nU={at1,....,a ti} (2)\n3 Modeling the learning task\n3.1 Maximum entropy model\nWe trained a maximum entropy model to compute\nthe probability distribution over actions from the\naction space A={ai: 1≤i≤N}, given the\ncurrent input chunk c:\np(ai|c) =1\nZ(c)exp/summationdisplay\njλjfj(ai,c) (3)\nλjis the parameter to be estimated. fj(ai,c)is\na simple feature function recording co-occurences\nof chunk and action:\nfj(ai,c) =/braceleftbigg1ifc=cj\n0otherwise(4)\nIn our experiments, we use a chunk size of 2,\ni.e., we use word bigrams. Z(c)is the normal-\nising constant. The logical form with the highest\nprobability is taken to represent the meaning of the\ncurrent chunk: a∗(c) = arg max ip(ai|c)\nIn the task that we test the approach on, the ac-\ntion space contains actions for locating an object\nin a scene; for sizing and colouring it; as well as\nfor determining its shape. (See below.)\nuntenlinksistrow4:0.8col0:0.6row4:0.8row4:0.8 col0:0.6col0:0.2row4:0.8 col0:0.6einbig:0.3row4:0.8 col0:0.6 big:0.3kleinessmall:0.7row4:0.8 col0:0.6 small:0.7…Hypothesis updatingp(a|w)smallbottomleftisaInstructionTranslationFigure 2: Example of hypothesis updating. New\nbest hypotheses per type are shown in blue; re-\ntained hypotheses in green; revised hypotheses in\nred.\n3.2 Composing utterance meaning\nSince in our task each utterance places one object,\nwe assume that each utterance hypothesis Ucon-\ntains a unique logical form for each of following\nconcepts (referred as type of logical forms later):\ncolour, shape, size, row and column position.\nWhile the instruction unfolds, we update the ut-\nterance meaning hypotheses by adding new logical\nforms or updating the probabilities of current hy-\npothesis. With each uttered word, we first check\nthe type of the predicted logical form. If no log-\nical form of the same type has already been hy-\npothesised, we incorporate the new logical form to\nthe current hypothesis. Otherwise, if the predicted\nlogical form has a higher probability than the one\nwith the same type in the current hypothesis, we\nupdate the hypothesis; if it has a lower probabil-\nity, the hypothesis remains unchanged. 
Figure 2\nshows an example of the hypothesis updating pro-\ncess.\n4 Data collection\n4.1 The experiment\nWhile the general setup described above is one\nwhere IGgives an instruction, which a co-present\nIFfollows concurrently, we separated these con-\ntributions for technical reasons: The instructions\nfrom IGwere recorded in one session, and the\nactions from IF(in response to being played the\nrecordings of IG) in another.\nTo elicit the instructions, we showed IGs a scene\n(as shown in the top of Figure 3) on a computer\nscreen. They were instructed to describe the size,\ncolour, shape and the spatial configuration of the\nobjects. They were told that another person will\nlisten to the descriptions and try to re-create the\ndescribed scenes.\n100 scenes were generated for the description\ntask. Each scene includes 2 circles and a square.\nThe position, size, colour and shape of each ob-\nject were randomly selected when the scenes were492\nFigure 3: The GUIand a sample scene.\ngenerated. The scenes were shown in the same\norder to all IGs. There was no time restriction\nfor each description. Each IGwas recorded for\n20 minutes, yielding on average around 60 scene\ndescriptions. Overall, 13 native German speakers\nparticipated in the experiment. Audio and video\nwas recorded with a camera.\nIn the scene drawing task, we replayed the\nrecordings to IFs who were not involved in the pre-\nceding experiment. To reduce the time pressure\nof concurrently following instructions and react-\ning with GUI operation, the recordings were cut\ninto 3 separate object descriptions and replayed\nwith a slower pace (at half the original speed). IFs\ndecided when to begin the next object description,\nbut were asked to act as fast as possible. This setup\nprovides an approximation (albeit a crude one) to\nrealistic interactive instruction giving, where feed-\nback actions control the pace (Clark, 1996).\nThe drawing task was performed with a GUI\n(Figure 3) with separate interface elements corre-\nsponding to the aspects of the task (placing, sizing,\ncolouring, determining shape). Before the exper-\niment, IFs were instructed in using the GUI and\ntried several drawing tasks. After getting familiar\nwith the GUI, the recordings were started. Over-\nall, 3 native German speakers took part in this ex-\nperiment. Each of them listened to the complete\nrecordings of between 4 and 5 IGs, that is, to be-\ntween 240 and 300 descriptions. The GUIactions\nwere logged and timestamped.\n4.2 Data preprocessing\nAligning words and actions First, the instruc-\ntion recordings were manually transcribed. A\nforced-alignment approach was applied to tempo-\n0.0 0.5 1.0 1.5 2.0End of utterance\n position\nsize\ncolour\nshapeFigure 4: Action type distributions over utterance\nduration (1 = 100% of utterance played).\nrally align the transcriptions with the recordings.\nThen, the IFactions were aligned with the record-\nings via logged timestamps.\nFigure 4 shows how action types distribute over\nutterances. As this shows, object positions tend\nto be decided on early during the utterance, with\nthe other types clustering at the end or even after\ncompletion of the utterance.\nActions and Action Descriptions We defined a\nset of action symbols to serve as logical forms rep-\nresenting utterance chunk meanings. As described\nabove, we categorised these action symbols into 5\ntypes (shown in Table 1). 
The symbols were used\nfor associating logical forms to words, while the\ntype of actions was used for updating the hypoth-\nesis of utterance meaning (as explained in Sec-\ntion 3.2).\nWe make the distinction between actions and\naction symbol (or action description), because we\nmake use of the fact that the same action may be\ndescribed in different ways. E.g., a placing action\ncan be described relative to the canvas as a whole\n(e.g. “bottom left”) or relative to other objects (e.g.\n“right of object 1”). We divided the canvas into a\ngrid with 6×6cells. We represent canvas posi-\ntions with the grid coordinate. For example, row1\nindicates an object is in the first row of the canvas\ngrid. We represent the relative positions with the\nsubtraction of their indexes to corresponding refer-\nential objects. For example, prev1 row1 indicates\nthat the object is 1 row above the first described\nobject. Describing the same action in these differ-\nent ways gives us the required targets for associ-\nating with the different possible types of locating\nexpressions.\nLabelling words with logical forms With the\nassumption that each action is a reaction to at most\nNwords that came before it ( N= 3in our setup),493\ntype logical form\nrow row1, row2 ...row6\nprev1 row1, prev1 row2 ...\nprev2 row1, prev2 row2 ...\ncolumn col1, col2 ...col6\nprev1 col1, prev1 col2 ...\nprev2 col1, prev2 col2 ...\nsize small, medium, big\ncolour red, green, blue, magenta\ncyan, yellow\nshape circle, square\nTable 1: Reaction types and logical forms.\nwe label these Nprevious words with the logical\nform of the action. E.g., for the first utterance from\nFigure 1 above:\n(1)unten links ist ein kleines blaues Viereck\nrow4 row4 small small small blue square\ncol0 col0 blue blue square\nsquare\nNotice that a word might be aligned with more\nthan one action, which means that the learning\nprocess has to deal with potentially noisy infor-\nmation. Alternatively, a word might not be aligned\nwith any action.\n5 Evaluation\nThe data was randomly divided into train ( 80%)\nand test sets ( 20%). For our multi-class classifica-\ntion task, we calculated the F1-score and precision\nfor each class and took the weighted sum as the fi-\nnal score.\nSetup F1-score Precision Recall\nProposed Exp1 0.75 0.65 0.89\nmodel Exp2 0.66 0.55 0.83\nBaseline model 0.60 0.52 0.71\nTable 2: Evaluation results.\nFigure 5 illustrates the evaluation process of\neach setup.\nProposed model The proposed model was eval-\nuated on the utterance and the incremental level.\nInExperiment 1 , the meaning representation is\nassembled incrementally as described above, but\nevaluated utterance-final. In Experiment 2 , the\nmodel is evaluated incrementally, after each word\nof the utterance. Hence, late predictions (where\na part of the utterance meaning is predicted later\nthan would have been possible) are penalised in\nExperiment 2, but not Experiment 1. The model\nperforms better on the utterance level, which sug-\ngests that the hypothesis updating process can suc-\nuntenlinksistrow4 col0row4row4 col0row4 col0ein\nrow4 col0 bigkleines\nrow4 col0 smallExperiment 2Gold standardsmallbottomleftisaInstructionTranslation\nExperiment 1Baseline modelblauesViereckrow4 col0 small blue------------row4, col0, small, blue, squarerow4 col0 small bluerow4, col0, small, blue, squarebluesquarerow4, col0, small, blue, squarerow4 col0row4 col0row4 col0 smallrow4, col0, small, blue, squareFigure 5: Evaluation Setups. Exp. 
1 only evaluates\nthe utterance-final representation, Exp. 2 evaluates\nincrementally. False interpretations are shown in\nred.\ncessfully revise false interpretations while the de-\nscriptions unfold.\nBaseline model For comparison, we also trained\na baseline model with temporally unaligned data\n(comparable to a situation where only at the end\nof an utterance a gold annotation is available). For\n(1), this would result in all words getting assigned\nthe labels row4,col0,small,blue,square . As\nTable 2 shows, this model achieves lower results.\nThis indicates that temporal alignment in the train-\ning data does indeed provide better information for\nlearning.\nError analysis While the model achieves good\nperformance in general, it performs less well on\nposition words. For example, given the chunk\n“schr ¨ag rechts” ( diagonally to the right ) which\ndescribes a landmark-relative position, our model\nlearned as best interpretation a canvas-relative po-\nsition. The hope was that offering the model\nthe two different action description types (canvas-\nrelative and object-relative) would allow it to make\nthis distinction, but it seems that here at least\nthe more frequent use of “rechts” suppresses that\nmeaning.\n6 Related work\nThere has been some recent work on grounded\nsemantics with ambiguous supervision. For ex-\nample, Kate and Mooney (2007) and Kim and\nMooney (2010) paired sentences with multiple\nrepresentations, among which only one is cor-\nrect. B ¨orschinger et al. (2011) introduced an ap-\nproach to ground language learning based on un-\nsupervised PCFG induction. Kim and Mooney\n(2012) presents an enhancement of the PCFG ap-\nproach that scales to such problems with highly-\nambiguous supervision. Berant et al. (2013)\nand Dong and Lapata (2016) map natural lan-\nguage to machine interpretable logical forms with494\nquestion-answer pairs. Tellex et al. (2012), Salvi\net al. (2012), Matuszek et al. (2013), and Andreas\nand Klein (2015) proposed approaches to learn\ngrounded semantics from natural language and ac-\ntion associations. These approaches paired am-\nbiguous robot actions with natural language de-\nscriptions from humans. While these approaches\nachieve good learning performance, the ambigu-\nous logical forms paired with the sentences were\nmanually annotated. We attempted to align ut-\nterances and potential logical forms by continu-\nously observing the instruction following actions.\nOur approach not only needs no human annota-\ntion or prior pairing of natural language and log-\nical forms for the learning task, but also acquires\nless ambiguous language and action pairs. The re-\nsults show that the temporal information helps to\nachieve competitive learning performance with a\nsimple maximum entropy model.\nLearning from observing successful interpreta-\ntion has been studied in much recent work. Be-\nsides the work discussed above, Frank and Good-\nman (2012), Golland et al. (2010), and Reckman\net al. (2010) focus on inferring word meanings\nthrough game playing. Branavan et al. (2009),\nArtzi and Zettlemoyer (2013), Kollar et al. (2014)\nand Monroe and Potts (2015) infer natural lan-\nguage meanings from successful instruction ex-\necution of humans/agents. 
While interpretations\nwere provided on utterance level in above works,\nwe attempt to learn word and utterance meanings\nby continuously observing interpretations of natu-\nral language in a situated setup which enables ex-\nploitation of temporally-aligned instruction giving\nand following.\n7 Conclusions\nWhere most related work starts from utterance-\nfinal representations, we investigated the use of\nmore temporally-aligned understanding data. We\nfound that in our setting and for our simple learn-\ning methods, this indeed provides a better signal.\nIt remains for future work to more clearly delin-\neate the types of settings where such close align-\nment on the sub-utterance level might be observed.\nAcknowledgments\nThis work was supported by the Cluster of Excel-\nlence Cognitive Interaction Technology ‘CITEC’\n(EXC 277) at Bielefeld University, which is\nfunded by the German Research Foundation(DFG). The first author would like to acknowledge\nthe support from the China Scholarship Council.\nReferences\nJacob Andreas and Dan Klein. 2015. Alignment-\nbased compositional semantics for instruction fol-\nlowing. In Proceedings of the 2015 Conference on\nEmpirical Methods in Natural Language Process-\ning, pages 1165–1174 , pages 1165–1174, Lisbon,\nPortugal. Association for Computational Linguistic.\nYoav Artzi and Luke Zettlemoyer. 2013. Weakly su-\npervised learning of semantic parsers for mapping\ninstructions to actions. Transactions of the Associa-\ntion for Computational Linguistics , 1:49–62.\nJonathan Berant, Andrew Chou, Roy Frostig, and\nPercy Liang. 2013. Semantic parsing on freebase\nfrom question-answer pairs. In EMNLP , volume 2,\npage 6.\nBenjamin B ¨orschinger, Bevan K. Jones, and Mark\nJohnson. 2011. Reducing grounded learning tasks\nto grammatical inference. In Proceedings of the\nConference on Empirical Methods in Natural Lan-\nguage Processing , pages 1416–1425. Association\nfor Computational Linguistics.\nSatchuthananthavale R.K. Branavan, Harr Chen,\nLuke S. Zettlemoyer, and Regina Barzilay. 2009.\nReinforcement learning for mapping instructions to\nactions. In Proceedings of the Joint Conference of\nthe 47th Annual Meeting of the ACL and the 4th In-\nternational Joint Conference on Natural Language\nProcessing of the AFNLP , volume 1, pages 82–90.\nAssociation for Computational Linguistics.\nDavid L. Chen and Raymond J. Mooney. 2011. Learn-\ning to interpret natural language navigation instruc-\ntions from observations. In Proceedings of the 25th\nAAAI Conference on Artificial Intelligence (AAAI-\n2011) , pages 859–865, San Francisco, California.\nAAAI Press.\nHerbert H. Clark. 1996. Using Language . Cambridge\nUniversity Press, Cambridge.\nLi Dong and Mirella Lapata. 2016. Language to log-\nical form with neural attention. In The 54th Annual\nMeeting of the Association for Computational Lin-\nguistics , pages 2368–2378, Berlin, Germany. Asso-\nciation for Computational Linguistics.\nMichael C. Frank and Noah D. Goodman. 2012. Pre-\ndicting Pragmatic Reasoning in Language Games.\nScience , 336(6084):998–998.\nDave Golland, Percy Liang, and Dan Klein. 2010.\nA game-theoretic approach to generating spatial de-\nscriptions. In Proceedings of the 2010 conference\non empirical methods in natural language process-\ning, pages 410–419. Association for Computational\nLinguistics.495\nRohit J. Kate and Raymond J. Mooney. 2007. Learn-\ning language semantics from ambiguous supervi-\nsion. In AAAI , volume 7, pages 895–900.\nJoohyun Kim and Raymond J. Mooney. 2010. 
Gen-\nerative alignment and semantic parsing for learn-\ning from ambiguous supervision. In Proceedings\nof the 23rd International Conference on Computa-\ntional Linguistics: Posters , pages 543–551. Associ-\nation for Computational Linguistics.\nJoohyun Kim and Raymond J. Mooney. 2012. Un-\nsupervised pcfg induction for grounded language\nlearning with highly ambiguous supervision. In Pro-\nceedings of the 2012 Joint Conference on Empirical\nMethods in Natural Language Processing and Com-\nputational Natural Language Learning , pages 433–\n444. Association for Computational Linguistics.\nThomas Kollar, Stefanie Tellex, Deb Roy, and Nicholas\nRoy. 2014. Grounding verbs of motion in natu-\nral language commands to robots. In Experimental\nrobotics , pages 31–47. Springer.\nCynthia Matuszek, Evan Herbst, Luke Zettlemoyer,\nand Dieter Fox. 2013. Learning to parse natural\nlanguage commands to a robot control system. In\nExperimental Robotics , pages 403–415. Springer.\nWill Monroe and Christopher Potts. 2015. Learning in\nthe rational speech acts model. In In Proceedings of\n20th Amsterdam Colloquium, Amsterdam, Decem-\nber. ILLC .\nHilke Reckman, Jeff Orkin, and Deb K. Roy. 2010.\nLearning meanings of words and constructions,\ngrounded in a virtual game. In Proceedings of the\nConference on Natural Language Processing 2010 ,\npages 67–75, Saarbr ¨ucken, Germany. Saarland Uni-\nversity Press.\nGiampiero Salvi, Luis Montesano, Alexandre\nBernardino, and Jose Santos-Victor. 2012.\nLanguage bootstrapping: Learning word meanings\nfrom perception–action association. IEEE Trans-\nactions on Systems, Man, and Cybernetics, Part B\n(Cybernetics) , 42(3):660–671.\nStefanie Tellex, Pratiksha Thaker, Josh Joseph,\nMatthew R. Walter, and Nicholas Roy. 2012. To-\nward learning perceptually grounded word mean-\nings from unaligned parallel data. In Proceedings\nof the Second Workshop on Semantic Interpretation\nin an Actionable Context , pages 7–14. Association\nfor Computational Linguistics.\nSida I. Wang, Percy Liang, and Christopher D. Man-\nning. 2016. Learning language games through in-\nteraction. In The 54th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 2368–\n2378, Berlin, Germany. Association for Computa-\ntional Linguistics.496",
"main_paper_content": null
}
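A minimal sketch of the pipeline described in Sections 3.1 and 3.2 of the extracted paper above: a maximum entropy (multinomial logistic regression) classifier maps word-bigram chunks to action symbols, and an incremental hypothesis is kept per action type, replaced only when a higher-probability prediction arrives. This is not the authors' code; the toy training pairs, the type inventory and function names are illustrative assumptions.

```python
# Chunk-to-action maxent classifier plus type-based hypothesis updating (illustrative sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# (word-bigram chunk, action symbol) pairs, as produced by the temporal alignment
train_pairs = [
    ("unten links", "row4"), ("unten links", "col0"),
    ("ein kleines", "small"), ("kleines blaues", "blue"),
    ("blaues viereck", "square"),
]
ACTION_TYPE = {"row4": "row", "col0": "column", "small": "size",
               "blue": "colour", "square": "shape"}

chunks, actions = zip(*train_pairs)
vec = CountVectorizer(analyzer=lambda chunk: [chunk])  # one indicator feature per chunk
X = vec.fit_transform(chunks)
clf = LogisticRegression(max_iter=1000)                # multinomial logistic regression = maxent
clf.fit(X, actions)

def interpret(utterance_words):
    """Incrementally build the utterance hypothesis: keep, per type, the action
    with the highest probability seen so far."""
    hypothesis = {}  # type -> (action, probability)
    for i in range(1, len(utterance_words)):
        chunk = " ".join(utterance_words[i - 1:i + 1])
        probs = clf.predict_proba(vec.transform([chunk]))[0]
        best = probs.argmax()
        action, p = clf.classes_[best], probs[best]
        t = ACTION_TYPE[action]
        if t not in hypothesis or p > hypothesis[t][1]:
            hypothesis[t] = (action, p)
    return {t: a for t, (a, _) in hypothesis.items()}

print(interpret("unten links ist ein kleines blaues viereck".split()))
```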
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Sk0yhBZNb",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Sk0yhBZNb",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Review of \"interpretable active learning\". Interesting idea, but execution needs improvement. ",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HaBr64hLiS",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=HaBr64hLiS",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Re: Reply to Reviewer E1tY",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kF9Z4WdSYDQ",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=kF9Z4WdSYDQ",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xkUFLGCWAE8",
"year": null,
"venue": "ECCE 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=xkUFLGCWAE8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using Wearable Inertial Sensors to Compare Different Versions of the Dual Task Paradigm during Walking",
"authors": [
"Harry J. Witchel",
"Robert Needham",
"Aoife Healy",
"Joseph H. Guppy",
"Jake Bush",
"Cäcilia Oberndorfer",
"Chantal Herberz",
"Carina E. I. Westling",
"Dawit Kim",
"Daniel Roggen",
"Jens Barth",
"Björn M. Eskofier",
"Waqar Rashid",
"Nachiappan Chockalingam",
"Jochen Klucken"
],
"abstract": "The dual task paradigm (DTP), where performance of a walking task co-occurs with a cognitive task to assess performance decrement, has been controversially mooted as a more suitable task to test safety from falls in outdoor and urban environments than simple walking in a hospital corridor. There are a variety of different cognitive tasks that have been used in the DTP, and we wanted to assess the use of a secondary task that requires mental tracking (the alternate letter alphabet task) against a more automatic working memory task (counting backward by ones). In this study we validated the x-io x-IMU wearable inertial sensors, used them to record healthy walking, and then used dynamic time warping to assess the elements of the gait cycle. In the timed 25 foot walk (T25FW) the alternate letter alphabet task lengthened the stride time significantly compared to ordinary walking, while counting backward did not. We conclude that adding a mental tracking task in a DTP will elicit performance decrement in healthy volunteers.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
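The abstract above mentions using dynamic time warping (DTW) to compare elements of the gait cycle from inertial sensor signals. Below is a minimal textbook DTW sketch, not the authors' pipeline; the sine-wave "stride signals" are assumptions for illustration only.

```python
# Classic dynamic time warping between two 1-D signals (illustrative sketch).
import numpy as np

def dtw_distance(a, b):
    """O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 100)
single_task_stride = np.sin(t)                 # toy "ordinary walking" signal
dual_task_stride = np.sin(0.9 * t + 0.3)       # toy "dual task" signal, slightly warped
print(dtw_distance(single_task_stride, dual_task_stride))
```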
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "lPSHpEKe9J7S",
"year": null,
"venue": "ECAI Workshop on Agent Theories, Architectures, and Languages 1994",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=lPSHpEKe9J7S",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Modelling Reactive Behaviour in Vertically Layered Agent Architectures",
"authors": [
"Jörg P. Müller",
"Markus Pischel",
"Michael Thiel"
],
"abstract": "The use of layered architectures for modeling autonomous agents has become popular over the past few years. In this paper, different approaches how these architectures can be build are discussed. A special case, namely vertically layered architectures is discussed by the example of the InteRRaP agent model. The paper focusses on the lower levels of the architecture which provide reactivity, incorporate procedural knowledge, and which connect the cooperation and planning layers with the outside world. We claim that the lower system layers are likely to become a control bottleneck in vertically layered architectures, and that very careful modeling is required to produce the desired agent behaviour.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "c2h8IE6OWf6",
"year": null,
"venue": "ECAI Workshop on Agent Theories, Architectures, and Languages 1994",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=c2h8IE6OWf6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Multi-Agent Reasoning with Belief Contexts: The Approach and a Case Study",
"authors": [
"Alessandro Cimatti",
"Luciano Serafini"
],
"abstract": "In this paper we discuss the use of belief contexts for the formalization of multi-agent reasoning. In addition to representational power, belief contexts provide implementational advantages. We substantiate this claim by discussing a paradigmatic case study, the Three Wise Men puzzle.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "YzhGP-2a6uX",
"year": null,
"venue": "it Inf. Technol. 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=YzhGP-2a6uX",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E-mail Header Injection Vulnerabilities",
"authors": [
"Sai Prashanth Chandramouli",
"Ziming Zhao",
"Adam Doupé",
"Gail-Joon Ahn"
],
"abstract": "E-mail Header Injection vulnerability is a class of vulnerability that can occur in web applications that use user input to construct e-mail messages. E-mail Header Injection is possible when the mailing script fails to check for the presence of e-mail headers in user input (either form fields or URL parameters). The vulnerability exists in the reference implementation of the built-in mail functionality in popular languages such as PHP, Java, Python, and Ruby. With the proper injection string, this vulnerability can be exploited to inject additional headers, modify existing headers, and alter the content of the e-mail.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
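A minimal sketch of the mitigation implied by the abstract above: reject CR/LF characters in user-supplied values before they are interpolated into e-mail headers, since injected line breaks are what allow extra headers to be added. This is illustrative only (not the paper's code), and real applications should prefer the header validation of a vetted mail library over an ad-hoc check like this.

```python
# Guarding against e-mail header injection by refusing CR/LF in header values (sketch).
def safe_header_value(value: str) -> str:
    if "\r" in value or "\n" in value:
        raise ValueError("possible e-mail header injection: CR/LF in header value")
    return value

def build_message(sender: str, subject: str, body: str) -> str:
    headers = [
        f"From: {safe_header_value(sender)}",
        f"Subject: {safe_header_value(subject)}",
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + body

print(build_message("alice@example.com", "Hello", "Hi there."))
# A payload such as "Hello\r\nBcc: everyone@example.com" would raise ValueError here.
```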
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "N5hQI_RowVA",
"year": null,
"venue": "NeurIPS 2021 Oral",
"pdf_link": "/pdf/387e4d748ad81d22db42fb4bdf048b8fef213e93.pdf",
"forum_link": "https://openreview.net/forum?id=N5hQI_RowVA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E(n) Equivariant Normalizing Flows",
"authors": [
"Victor Garcia Satorras",
"Emiel Hoogeboom",
"Fabian Bernd Fuchs",
"Ingmar Posner",
"Max Welling"
],
"abstract": "This paper introduces a generative model equivariant to Euclidean symmetries: E(n) Equivariant Normalizing Flows (E-NFs). To construct E-NFs, we take the discriminative E(n) graph neural networks and integrate them as a differential equation to obtain an invertible equivariant function: a continuous-time normalizing flow. We demonstrate that E-NFs considerably outperform baselines and existing methods from the literature on particle systems such as DW4 and LJ13, and on molecules from QM9 in terms of log-likelihood. To the best of our knowledge, this is the first flow that jointly generates molecule features and positions in 3D.",
"keywords": [
"equivariance",
"normalizing flows",
"molecule generation",
"generative models",
"graph neural networks"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "aivfPw1TYd3",
"year": null,
"venue": "EC 2020",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3391403.3399470",
"forum_link": "https://openreview.net/forum?id=aivfPw1TYd3",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Combinatorial Ski Rental and Online Bipartite Matching",
"authors": [
"Hanrui Zhang",
"Vincent Conitzer"
],
"abstract": "We consider a combinatorial variant of the classical ski rental problem --- which we call combinatorial ski rental --- where multiple resources are available to purchase and to rent, and are demanded online. Moreover, the costs of purchasing and renting are potentially combinatorial. The dual problem of combinatorial ski rental, which we call combinatorial online bipartite matching, generalizes the classical online bipartite matching problem into a form where constraints, induced by both offline and online vertices, can be combinatorial. We give a 2-competitive (resp. e / (e - 1)-competitive) deterministic (resp. randomized) algorithm for combinatorial ski rental, and an e / (e - 1)-competitive algorithm for combinatorial online bipartite matching. All these ratios are optimal given simple lower bounds inherited from the respective well-studied special cases. We also prove information-theoretic impossibility of constant-factor algorithms when any part of our assumptions is considerably relaxed.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
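As background for the combinatorial variant studied above, here is a sketch of the classical (single-resource) ski rental problem and its deterministic break-even strategy: rent until the accumulated rent would reach the purchase price, then buy. This strategy is 2-competitive, matching the deterministic ratio quoted in the abstract. The prices and horizons below are toy assumptions.

```python
# Classical ski rental: deterministic break-even strategy versus the offline optimum (sketch).
def break_even_cost(num_ski_days: int, buy_price: float, rent_price: float = 1.0) -> float:
    """Online cost of the break-even strategy when skiing ends after num_ski_days."""
    cost = 0.0
    for _day in range(1, num_ski_days + 1):
        if cost + rent_price >= buy_price:   # renting again would reach the purchase price
            return cost + buy_price          # buy now and ski for free afterwards
        cost += rent_price                   # otherwise keep renting
    return cost

def offline_optimum(num_ski_days: int, buy_price: float, rent_price: float = 1.0) -> float:
    return min(num_ski_days * rent_price, buy_price)

for days in (3, 10, 100):
    alg = break_even_cost(days, buy_price=10)
    opt = offline_optimum(days, buy_price=10)
    print(days, alg, opt, alg / opt)         # the ratio never exceeds 2
```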
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "BklMDCVtvr",
"year": null,
"venue": null,
"pdf_link": "/pdf/98602af255208c577d004f2e00828c4ceb7d5daf.pdf",
"forum_link": "https://openreview.net/forum?id=BklMDCVtvr",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Discovering the compositional structure of vector representations with Role Learning Networks",
"authors": [
"Paul Soulos",
"Tom McCoy",
"Tal Linzen",
"Paul Smolensky"
],
"abstract": "Neural networks (NNs) are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure. To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure. ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into E’s representational vector space. The constituents of the approximating symbol structure are defined by structural positions — roles — that can be filled by symbols. We show that when E is constructed to explicitly embed a particular type of structure (e.g., string or tree), ROLE successfully extracts the ground-truth roles defining that structure. We then analyze a seq2seq network trained to perform a more complex compositional task (SCAN), where there is no ground truth role scheme available. For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the link between the representations and the behavior of a notoriously hard-to-interpret type of model. We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model’s output is also changed in the way predicted by our analysis. Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases that will improve the compositional abilities of such models.",
"keywords": [
"compositionality",
"generalization",
"neurosymbolic",
"symbolic structures",
"interpretability",
"tensor product representations"
],
"raw_extracted_content": null,
"main_paper_content": null
}
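As background for the abstract above, a minimal numpy sketch of the tensor product representation (TPR) formalism that ROLE builds on: a symbol structure is embedded as the sum of outer products of filler and role embeddings, and fillers can be unbound again when the role vectors are orthonormal. This illustrates the representation, not the ROLE learning procedure; the symbols, dimensions and function names are assumptions.

```python
# Tensor product binding and unbinding of fillers and positional roles (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
d_fill, n_roles = 8, 4

fillers = {s: rng.normal(size=d_fill) for s in ["jump", "walk", "left", "twice"]}
roles = np.linalg.qr(rng.normal(size=(n_roles, n_roles)))[0]   # orthonormal role vectors (rows)

def encode(sequence):
    """Bind each symbol's filler to its positional role and sum the outer products."""
    return sum(np.outer(fillers[s], roles[i]) for i, s in enumerate(sequence))

def decode_position(tpr, i):
    """Unbind the filler at position i and return the closest known symbol."""
    unbound = tpr @ roles[i]
    return min(fillers, key=lambda s: np.linalg.norm(fillers[s] - unbound))

tpr = encode(["jump", "left", "twice"])
print([decode_position(tpr, i) for i in range(3)])   # ['jump', 'left', 'twice']
```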
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "r1vccClCb",
"year": null,
"venue": null,
"pdf_link": "/pdf/37dcfc2fa414e93481ebef2c24570a2c09b5945a.pdf",
"forum_link": "https://openreview.net/forum?id=r1vccClCb",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Neighbor-encoder",
"authors": [
"Chin-Chia Michael Yeh",
"Yan Zhu",
"Evangelos E. Papalexakis",
"Abdullah Mueen",
"Eamonn Keogh"
],
"abstract": "We propose a novel unsupervised representation learning framework called neighbor-encoder in which domain knowledge can be trivially incorporated into the learning process without modifying the general encoder-decoder architecture. In contrast to autoencoder, which reconstructs the input data, neighbor-encoder reconstructs the input data's neighbors. The proposed neighbor-encoder can be considered as a generalization of autoencoder as the input data can be treated as the nearest neighbor of itself with zero distance. By reformulating the representation learning problem as a neighbor reconstruction problem, domain knowledge can be easily incorporated with appropriate definition of similarity or distance between objects. As such, any existing similarity search algorithms can be easily integrated into our framework. Applications of other algorithms (e.g., association rule mining) in our framework is also possible since the concept of ``neighbor\" is an abstraction which can be appropriately defined differently in different contexts. We have demonstrated the effectiveness of our framework in various domains, including images, time series, music, etc., with various neighbor definitions. Experimental results show that neighbor-encoder outperforms autoencoder in most scenarios we considered.",
"keywords": [
"unsupervised learning",
"representation learning",
"autoencoder"
],
"raw_extracted_content": null,
"main_paper_content": null
}
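A minimal sketch of the training objective described in the abstract above: an encoder-decoder is trained to reconstruct each input's nearest neighbor rather than the input itself, so that using the input as its own zero-distance neighbor recovers a plain autoencoder. This is not the authors' implementation; the toy data, architecture sizes and choice of Euclidean nearest neighbors are assumptions.

```python
# Neighbor-encoder objective: reconstruct the nearest neighbor instead of the input (sketch).
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors

X = torch.randn(256, 32)                                   # toy dataset
nbrs = NearestNeighbors(n_neighbors=2).fit(X.numpy())
_, idx = nbrs.kneighbors(X.numpy())
targets = X[idx[:, 1]]                                     # column 0 is the point itself

encoder = nn.Sequential(nn.Linear(32, 8), nn.ReLU())
decoder = nn.Sequential(nn.Linear(8, 32))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), targets)  # reconstruct the neighbor
    loss.backward()
    opt.step()
print(float(loss))
```

Swapping `targets` for `X` itself turns this into an ordinary autoencoder, which is the zero-distance special case the abstract mentions.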
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "IgUezXdrZ2_",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=IgUezXdrZ2_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Re: Re: Authors response to the Reviewer e14U",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "umyThMd0-za",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=umyThMd0-za",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "O8UNhqNASfN",
"year": null,
"venue": "CoRR 2022",
"pdf_link": "http://arxiv.org/pdf/2210.13027v1",
"forum_link": "https://openreview.net/forum?id=O8UNhqNASfN",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E-Valuating Classifier Two-Sample Tests",
"authors": [
"Teodora Pandeva",
"Tim Bakker",
"Christian A. Naesseth",
"Patrick Forré"
],
"abstract": "We propose E-C2ST, a classifier two-sample test for high-dimensional data based on E-values. Compared to $p$-values-based tests, tests with E-values have finite sample guarantees for the type I error. E-C2ST combines ideas from existing work on split likelihood ratio tests and predictive independence testing. The resulting E-values incorporate information about the alternative hypothesis. We demonstrate the utility of E-C2ST on simulated and real-life data. In all experiments, we observe that when going from small to large sample sizes, as expected, E-C2ST starts with lower power compared to other methods but eventually converges towards one. Simultaneously, E-C2ST's type I error stays substantially below the chosen significance level, which is not always the case for the baseline methods. Finally, we use an MRI dataset to demonstrate that multiplying E-values from multiple independently conducted studies leads to a combined E-value that retains the finite sample type I error guarantees while increasing the power.",
"keywords": [],
"raw_extracted_content": "E-Valuating Classifier Two-Sample Tests\nTeodora Pandeva Tim Bakker Christian A. Naesseth Patrick Forr ´e\nAI4Science Lab, AMLab\nInformatics Institute\nUniversity of AmsterdamAMLab\nInformatics Institute\nUniversity of AmsterdamAMLab\nInformatics Institute\nUniversity of AmsterdamAI4Science Lab, AMLab\nInformatics Institute\nUniversity of Amsterdam\nAbstract\nWe propose E-C2ST, a classifier two-sample test\nfor high-dimensional data based on E-values.\nCompared to p-values-based tests, tests with E-\nvalues have finite sample guarantees for the type\nI error. E-C2ST combines ideas from existing\nwork on split likelihood ratio tests and predic-\ntive independence testing. The resulting E-values\nincorporate information about the alternative hy-\npothesis. We demonstrate the utility of E-C2ST\non simulated and real-life data. In all experi-\nments, we observe that when going from small\nto large sample sizes, as expected, E-C2ST starts\nwith lower power compared to other methods\nbut eventually converges towards one. Simul-\ntaneously, E-C2ST’s type I error stays substan-\ntially below the chosen significance level, which\nis not always the case for the baseline methods.\nFinally, we use an MRI dataset to demonstrate\nthat multiplying E-values from multiple indepen-\ndently conducted studies leads to a combined E-\nvalue that retains the finite sample type I error\nguarantees while increasing the power.\n1 Introduction\nStatistical hypothesis testing is an essential tool in\nevidence-based research. An example from medicine is a\nrandomized controlled trial (RCT) experiment where we\ntest whether a treatment has no effect on patients (null\nhypothesis) versus a positive one (alternative hypothesis).\nIdeally, we would like the statistical test to reject an inef-\nfective treatment with a high probability, i.e., to keep the\nprobability of falsely rejecting the null hypothesis, called\ntype I error , low. This is usually controlled by a deci-\nsion boundary, called significance level , denoted by \u000b. We\nreject the null hypothesis if the computed p-value is be-\nlow\u000b. Thus, the type I error becomes upper-bounded by\nthe significance level. The choice of \u000bis crucial not only\nfor the type I error, but also for the type II error , whichrefers to the probability of failing to reject the null when\nthe alternative hypothesis is true. A low \u000bleads to a low\ntype I error. Unfortunately, lower \u000balso leads to increasing\ntype II error, or, equivalently, decreasing power (defined as\n1\u0000type II error). We refer to this trade-off as detection\nerror trade-off. Therefore, the choice of \u000bis often a subject\nof debate in the scientific community [BBJ+18].\nIn this work, we consider classifier two-sample tests, which\ntry to answer the statistical question of whether two pop-\nulations obtained independently are statistically signifi-\ncantly different. Proposed solutions to the problem have\nbeen focusing on developing tests with high power to dis-\ntinguish real from generated data [LPO16], noise from\ndata [LPO16, HTF01, GH12, MSC+13, GPAM+14] and\nare widely used in simulation-based inference [LBG+21].\nHowever, due to the type I and type II error trade-off dis-\ncussed above, these tests might not be suitable in some\napplications, e.g., the aforementioned RCT example. 
To\naddress this issue, we will introduce classifier two-sample\ntests with guaranteed type I error control lower than the\nsignificance level.\nTwo-sample testing procedures include classical ap-\nproaches such as Student’s and Welch’s t-tests [Stu08,\nWel47] comparing the means of two normally distributed\nsamples; non-parametric tests, such as the Wilcoxon-\nMann-Whitney test [MW47] that compare the ranks of\nthe two populations in the combined datasets, or the\nKolmogorov-Smirnov tests [Kol33, Smi39] and the Kuiper\ntest [Kui60]. In the high-dimensional data regime, ker-\nnel methods [SS98] have been proposed, which com-\npare the kernel embeddings of both populations [GBR+12,\nCRSG15, JSCG16]. However, all these statistical two-\nsample tests become less powerful on more complex data\nsuch as images, text, etc. For that reason, deep learning ex-\ntensions of the two sample tests have been developed, such\nas [LPO16, CC19, KKKL20, LXL+20], where one trains a\nmodel to distinguish the two populations on train data and\nconduct the statistical test on a test set. All listed meth-\nods estimate a p-value either via asymptotically valid ap-\nproximations or via permutation methods. Based on such a\np-value, one decides whether or not to reject the null witharXiv:2210.13027v1 [stat.ME] 24 Oct 2022\nE-Valuating Classifier Two-Sample Tests\nsignificance \u000b, historically chosen to be \u000b= 0:05.\nCompared to p-values, E-variables have a finite sample type\nI error which is often lower than the chosen significance\nlevel. More formally, E-variables are simply non-negative\nvariablesEthat satisfy\nfor allP2H 0:EP[E]\u00141;\ni.e. the expectation of Ewith respect to any distribution\nfrom the null hypothesis distribution class H0is less than\none. An example of E-variables for singleton hypothesis\nclasses are Bayes factors, i.e. we test if the unknown prob-\nability density pequalsp0orpA:\nH0:p=p0 vs.HA:p=pA:\nHere, we assume that both hypotheses occur with\nequal probability. Then the Bayes factor given\nbyE(x) :=pA(x)\np0(x)is an E-variable w.r.t.H0since\nEp0[E] =RpA(x)\np0(x)p0(x)dx= 1\u00141:Note that observing a\nvery large value of E, which we call an E-value , provides\nevidence against the null hypothesis. The origin and inter-\npretation of E-variables can be traced back to the work of\n[Jr56] in the context of gambling. Recently E-values have\nbeen reintroduced in the work of [GdHK20, Sha19] inter-\npreted as bets against the null hypothesis.\nWe propose a classifier two-sample test based on E-values,\ncalled E-C2ST. First, we introduce the more general frame-\nwork of conditional E-variables with their corresponding\nproperties in Section 2.2. Then, in Section 3, we show how\nto construct E-variables by employing the split-likelihood\ntesting procedure by [WRB20] for any null hypothesis. Af-\nterward, we consider the special case of conditional in-\ndependence testing. By combining the ideas developed\nin Section 3 and the existing work on predictive condi-\ntional independence testing framework by [BK17], we de-\nrive the corresponding E-values (see Section 4). In Sec-\ntion 5, we introduce our proposed classifier two-sample\ntest based on E-variables, called E-C2ST, which formally\nderives from the proposed predictive conditional indepen-\ndence testing framework from Section 4. Finally, we com-\npare E-C2ST experimentally to existing commonly used\ntwo-sample tests on simulated and more complex image\ndata. 
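As a quick numerical illustration of the Bayes-factor example above, the following sketch (with arbitrarily chosen Gaussian densities, not code from the paper) checks by Monte Carlo that the likelihood ratio E(x) = pA(x)/p0(x) has expectation at most one under the simple null p0, and that its product over i.i.d. observations grows when the data are in fact drawn from pA:

import numpy as np

def gauss_pdf(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Simple null H0: N(0,1) versus simple alternative HA: N(1,1) (illustrative choice).
def e_value(x):
    return gauss_pdf(x, 1.0) / gauss_pdf(x, 0.0)   # Bayes factor E(x) = pA(x) / p0(x)

rng = np.random.default_rng(0)

x_null = rng.normal(loc=0.0, size=100_000)
print(e_value(x_null).mean())        # close to 1: E_{p0}[E] = 1 <= 1, so E is an E-variable

x_alt = rng.normal(loc=1.0, size=50)
E = np.prod(e_value(x_alt))          # product over i.i.d. observations is again an E-variable
print(E, E >= 1 / 0.05)              # a large E-value is evidence against H0 (level alpha = 0.05)

The threshold 1/alpha used in the last line anticipates the decision rule that is made precise in Section 2.3.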
We also simulate a meta-analysis study on MRI\ndata and illustrate how E-variables can be easily utilized\nfor combining independent studies. We explore the afore-\nmentioned type-I–type-II error trade-off by simultaneously\ntracking the change of type I and type II errors with in-\ncreasing sample size.\nOur empirical results show that even though E-C2ST has\nlower power than other methods, its power converges to 1\nwhen the sample size is large enough. Furthermore, it al-\nways keeps the type I error substantially below the thresh-\nold level, which is not always the case for the baseline\nmethods.Ourcontributions can be summarized as follows\n1. We formalize predictive conditional independence\ntesting by means of E-variables.\n2. Leveraging predictive independence testing allows us\nto develop an E-variable-based classifier two-sample\ntest that has finite sample type I error guarantee.\n3. By considering the type-I–type-II error trade-off of\nstatistical tests, we compare E-C2ST to existing base-\nline methods. We show experimentally that it has a\npower that converges to one with increasing sample\nsize while retaining type I error lower than the chosen\nsignificance level.\n2 Hypothesis Testing with E-Variables\n2.1 Hypothesis Testing\nConsider a sample of data points D=fx1;:::;xNg1,\nreflecting realizations of random variables X1;:::;XN\ndrawn from an unknown probability distribution P2P(\n)\ncoming from some unknown sample space \n, whereP(\n)\nis the set of all probability measures on \n. In hypothesis\ntesting, we usually consider two model classes:\nH0=fP\u00122P(\n)j\u00122\u00020g (null hypothesis) ;\nHA=fP\u00122P(\n)j\u00122\u0002Ag (alternative);\nand we want to decide if Pcomes fromH0or fromHA:\nH0:P2H 0 vs.HA:P2H A;\nbased on the value of some test statistic\nT(N)=T(X1;:::;XN). In most cases the data points\ncome from the same space Xand we would at most\nobserve countably many of such data points Xn. In this\nsetting we can w.l.o.g. assume that \n =XN. If we,\nfurthermore, assume that the Xn,n2N, are drawn i.i.d.\nfromPthenP((Xn)n2N) =NN\nn=1P(Xn)and we can\ndirectly incorporate the product structure into H0andHA\nand restrict ourselves to one of those factors to state Hi\nand (by slight abuse of notations) re-write for i=0;A:\nHi=fP\u00122P(X)j\u00122\u0002ig; (1)\nand implicitly assume that P\u0012((Xn)n2N) =NN\nn=1P\u0012(Xn):Moreover, assume that our probabil-\nity measures P\u00122 Hi, are given via a density w.r.t. a\nproduct reference measure \u0016. We will denote the density\nbyp\u0012(x)orp(xj\u0012), interchangeably.\n1In the following we will write small xif we either mean the\nrealization of an random variable Xor the argument of a function\nliving on the same space. We use capital Xfor a data point if we\nwant to stress its role as a random variable.\nTeodora Pandeva, Tim Bakker, Christian A. Naesseth, Patrick Forr ´e\n2.2 Conditional E-Variables\nNow consider the more general relative framework where\nwe allow hypothesis classes to come from a set of Markov\nkernels, which can be used to model conditional probabil-\nity distributions:\nHi=fP\u0012:Z!P (X)j\u00122\u0002ig\u0012P (X)Z;(2)\nwhere the latter denotes the space of all Markov kernels\nfromZtoX, i.e. 
for each P\u00122 Hifor fixedz2 Z\nP\u0012(\u0001jz)is a valid probability measure on X:An example of\nconditional hypothesis classes is given in Section 4, where\nthe null hypothesis class represent the set of distributions\nthat reflect the conditional independence of two variables\nafter observing a third one.\nWith respect toH0as defined in Equation 2 we can de-\nfine corresponding E-variables which we call conditional\nE-variables2:\nDefinition 2.1 (Conditional E-variable) .Aconditional E-\nvariable w.r.t.H0\u0012P(X)Zis a non-negative measurable\nmap:\nE:X\u0002Z! R\u00150; (x;z)7!E(xjz);\nsuch that for all P\u00122H 0andz2Z we have:\nE\u0012[Ejz] :=Z\nE(xjz)P\u0012(dxjz)\u00141:\nIn Section 3, we will show that the M-split likelihood\nratio test statistic [WRB20] can be written as a product\nof conditional E-variables, where in the above definition\nZ=Xn\u00001, the space from which previous data points\nwere drawn.\nOne of the notable features of E-variables is their preser-\nvation under multiplication. We can easily combine (con-\nditionally) independent E-variables by simply multiplying\nthem which results in a proper E-variable. This prop-\nerty makes E-variables appealing for meta-analysis studies\n[GdHK20, VW21] as we will demonstrate later experimen-\ntally. A more general result states that we can combine\nbackwards dependent conditional E-variables via multipli-\ncation which is formally given in the following Lemma:\nLemma 2.2 (Products of conditional E-variables) .IfE(1)\nis a a conditional E-variable w.r.t.H(1)\n0\u0012P(Y)ZandE(2)\na conditional E-variable w.r.t.H(2)\n0\u0012P(X)Y\u0002ZthenE(3)\ndefined via their product:\nE(3)(x;yjz) :=E(2)(xjy;z)\u0001E(1)(yjz);\n2A formal definition of the ”unconditional” E-variables intro-\nduced in Section 1 can be easily derived from Defition 2.1 by\ndropping Z:Moreover, if Eis an E-variable and x2 X a fixed\npoint then we call E(x)theE-value ofxw.r.t.E.is a conditional E-variable w.r.t.:\nH(3)\n0:=H(2)\n0\nH(1)\n0\u0012P(X\u0002Y )Z;\nwhere we define the product hypothesis as:\nH(2)\n0\nH(1)\n0:=n\nP\u0012\nP \f\f\fP\u00122H(2)\n0;P 2H(1)\n0o\n;\nwith the product Markov kernels given by:\n(P\u0012\nP ) (dx;dyjz) :=P\u0012(dxjy;z)P (dyjz):\n2.3 Hypothesis Testing with Conditional E-Variables\nIn the context of statistical testing, we can evaluate an E-\nvariableEon the given data points w.r.t. random variables\nX1;:::;XN. Then the decision rule for rejecting the null\nhypothesis at significance level \u000b2[0;1]becomes:\nRejectH0in favor ofHAifE(X1;:::;XN)\u0015\u000b\u00001.\nLemma 2.3 tells us that with this rule the type I error, the\nerror rate of falsely rejecting the H0, is bounded by \u000b.\nLemma 2.3 (Type I error control) .LetEbe a conditional\nE-variable w.r.t.H0\u0012P(X)Z. Then for every \u000b2[0;1],\nP\u00122H 0andz2Z we have:\nP\u0012(E\u0015\u000b\u00001jz)\u0014\u000b:\nProof. This follows from the Markov inequality:\nP\u0012(E\u0015\u000b\u00001jz)\u0014E\u0012[Ejz]\n\u000b\u00001\u00141\n\u000b\u00001=\u000b:\nThus, the E-values can be transformed into more conser-\nvativep-values via the relation p= minf1;1=Egsuch\nthat forP\u00122 H 0it holdsP\u0012(p\u0014\u000bjz)\u0014\u000b:Note\nthat a valid way of constructing an E-variable from the\nrandom variables X1;:::;XNw.r.t. 
the observed sam-\nple points according to Lemma 2.2 is E(X1;:::;XN) =Q\ni\u0014NE(Xi):\nRemark 2.4 (Type II error and power) .Justified by the\nlaw of large numbers, E-variables with EP[logE]>0for\nP2H Ahave asymptotic power, at least in the i.i.d. case,\nbecause then:\nlogE(X1;:::;XN) =NX\nn=1logE(Xn)\u0019N\u0001EP[logE]:\nHowever, providing a proper analysis in the more general,\nnon-i.i.d. setting is out of the scope of this paper. In the\nappendix we provide a finite sample bound for the type II\nerror, but (only) in the special case of (conditional) i.i.d.\nE-variables, which is based on Sanov’s theorem, see Thm.\nA.1, [Csi84, Bal20]. Nonetheless, to deal with the gen-\neral case, in the following section 3 we use the M-split\nlikelihood ratio construction of E-variables from [WRB20],\nwhere we can optimize the power of the test by training the\nE-variables on some form of training set.\nE-Valuating Classifier Two-Sample Tests\n3M-Split Likelihood Ratio Test\nIn general, constructing an E-variable with respect to any\nH0is not a straightforward task. There exist two main ap-\nproaches. The first approach, see [GdHK20], is based on\nthe reverse information projection of the hypothesis space\nHAontoH0. It is not data-dependent and can be shown to\nbe growth-optimal in the worst case. However, the reverse\ninformation projection is not very explicit in general set-\ntings, especially when working with non-convex hypothe-\nses,HAandH0. The second approach is based on the split\nlikelihood ratio test from [WRB20], which we will discuss\nin this section. Compared to the previous approach, this\nmethod yields a data-driven E-variable, which can in cer-\ntain cases even be stated in closed form.\nAssume that our data set D=fX1;:::;XNgis of sizeN.\nWe now split the index set [N] :=f1;:::;NgintoM\u00152\ndisjoint batches:\n[N] =I(1)_[\u0001\u0001\u0001 _[I(M):\nForm= 1;:::;M we also abbreviate:\nI(<m):=I(1)_[\u0001\u0001\u0001 _[I(m\u00001);\nx(m):= (xn)n2I(m)2Y\nn2I(m)X=:X(m):\nandx(<m),x(\u0014m),I(\u0014m), analogously.\nThen form= 1;:::;M we follow these steps:\n1. Train a model on \u0002Aon all previous points x(<m)in\nan arbitrary way (MLE, MAP, full Bayesian, etc.) and\ngetpA(xjx(<m)). To achieve a high power of the test,\nthe densitypA(xjx(<m))should reflect the true distri-\nbution in the best possible way to generalize well to\nunseen data.\n2. Train a model on \u00020on the data points of the current\nbatchx(m)(conditioned on the previous ones x(<m))\nvia maximum-likelihood fitting (MLE):\n^\u0012(\u0014m)\n0 :=^\u0012(m)\n0(x(\u0014m)) := argmax\n\u00122\u00020p\u0012(x(m)jx(<m));\nand get:p0(xjx(\u0014m)) :=p(xjx(<m);^\u0012(m)\n0(x(\u0014m))).\nNote that under i.i.d. assumptions there is no depen-\ndence onx(<m).\n3. Evaluate both models on the current points x(m)and\ndefineE(m)via their ratio:\nE(m)(x(m)jx(<m)) :=pA(x(m)jx(<m))\np0(x(m)jx(\u0014m)); (3)\n=pA(x(m)jx(<m))\nmax\u00122\u00020p\u0012(x(m)jx(<m)):\nThenE(m)constitutes a conditional E-variable, con-\nditioned on the space X(<m), w.r.t.H(m)j(<m)\n0:For a fixed m, them-th conditional E-variable is more\ndoubtful against the alternative hypothesis class since\nit compares theHA-model’s testperformance, which is\ntrained onx(<m), tested on x(m), with theH0-model’s\ntrain performance both trained and tested on the same x(m)\nin the i.i.d. case. This means that if the alternative is true,\nthen theHA-modelpAhas to perform better on x(m)than\ntheH0-modelp0, while the latter was allowed to be directly\n(over)fitted on x(m). 
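The construction above can be made concrete in a toy parametric setting. The sketch below (our own illustration, not the paper's code) uses M = 2 with a Gaussian null family of unknown mean and unit variance, a Gaussian alternative with free mean and variance, and takes the first factor E(1) to be the trivial constant 1 (which is itself an E-variable); all of these modelling choices are assumptions made only for the example:

import numpy as np

def gauss_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * ((x - mu) / sigma) ** 2

def split_lrt_e_value(x, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x_train, x_test = np.array_split(rng.permutation(x), 2)   # batches I(1) and I(2)

    # 1. Fit the alternative model on the previous batch (here: MLE of mean and std).
    mu_A, sigma_A = x_train.mean(), max(x_train.std(), 1e-3)
    # 2. Fit the null model by MLE on the current batch (mean free, variance fixed to 1).
    mu_0 = x_test.mean()
    # 3. The ratio of the two likelihoods evaluated on the current batch is the E-variable E(2).
    log_E = np.sum(gauss_logpdf(x_test, mu_A, sigma_A) - gauss_logpdf(x_test, mu_0, 1.0))
    return np.exp(log_E), log_E >= np.log(1 / alpha)          # reject H0 if E >= 1/alpha

rng = np.random.default_rng(1)
print(split_lrt_e_value(rng.normal(0.0, 1.0, size=400)))   # H0 true: the E-value is typically small
print(split_lrt_e_value(rng.normal(0.0, 1.5, size=400)))   # variance 2.25 violates H0: E tends to be large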
This heuristic intuition behind the m-\nth conditional E-variable explains why one would expect to\nneed more samples to gather enough evidence for rejecting\nthe null hypothesis H0in favor of the alternative HAwhen\ntesting with such E-variables in contrast to other tests.\nIt then follows from Lemma 2.2 that the product:\nE:=E(\u0014M):=MY\nm=1E(m);\ndefines an E-variable w.r.t.H0=H(\u0014M)\n0 .\nTheM-split likelihood ratio test , for significance level \u000b2\n[0;1], rejects the null hypothesis H0ifE(X1;:::;XN)\u0015\n\u000b\u00001. Lemma 2.3 ensures that the type I error is bounded\nby\u000b.\nRemark 3.1 (Initialization and validation set) .In point 1.\nof the above test procedure, we emphasized that the power\nof the proposed test depends on the test performance of the\npredictive distributions pA(xjx(<m)). We can apply any\nwell-known training technique to achieve high predictive\nperformance, without violating the any of the properties of\nE-variables. For example, after we split the data into M\npartitions, for each of the M\u00001training procedures, we\ncan further split the x(<m)dataset into train and validation\nset and use early stopping to prevent overfitting or even\nK-fold cross-validation for hyper-parameter tuning. Note\nthat the test set x(m)does not participate in the training\nprocedure.\n4 Predictive Conditional Independence\nTesting\nIn this section, we combine the ideas of predictive condi-\ntional independence testing from [BK17] with E-variables\nfrom theM-split likelihood ratio test from Section 3 based\non [WRB20] to derive a proper E-variable for conditional\nindependence testing. The desired two sample test will\nlater on be reformulated as an independence test by uti-\nlizing the theoretical results discussed in this section.\nAs a reminder, in conditional independence testing we want\nto test if a random variable Xis independent, or not, of Y\nconditioned on Z:\nH0:X? ?YjZ vs.HA:X\u001a\u001a? ?YjZ;\nTeodora Pandeva, Tim Bakker, Christian A. Naesseth, Patrick Forr ´e\nbased on dataD=f(X1;Y1;Z1);:::; (XN;YN;ZN)g.\nThe corresponding ( full) hypothesis spaces, in the i.i.d. set-\nting, are:\nH\r\n0=fP\u0012(XjZ)\nP\u0012(YjZ)\nP\u0012(Z)j\u00122\u00020g;\nH\r\nA=fP\u0012(X;Y;Z )j\u00122\u0002AgnH 0:\nIf we assume that P(X;Z)isfixed forH0andHAthen\nthis simplifies to the following product hypothesis classes,\ni=0;A:\nHfx\ni:=Hpd\ni\nfP(X;Z)g; (4)\nwhere the conditional hypothesis classes Hpd\ni\u0012\nP(Y)X\u0002Zofpredictive distributions are given by:\nHpd\n0=fP\u0012(YjZ)j\u00122\u00020g; (5)\nHpd\nA=fP\u0012(YjX;Z)j\u00122\u0002AgnH 0:\nEquation 3 applied to Hfx\niunder i.i.d. assumptions leads\nus to the following m-th conditional E-variable, using the\nabbreviation w= (x;y;z ):\nE(m)(w(m)jw(<m)) =pA(y(m)jx(m);z(m);w(<m))\np(y(m)jz(m);^\u00120(y(m);z(m))):\n(6)\nThe reason that we applied Equation 3 to Hfx\niinstead of\nH\r\niis thatE(m)will automatically be a valid conditional E-\nvariable forH\r\n0and evenHpd\n0, as well. This is guaranteed\nby the following:\nLemma 4.1. The mapE(m)defined in 6 constitutes a con-\nditional E-variable w.r.t. all three, Hpd\n0,Hfx\n0andH\r\n0.\n5 Classifier Two-Sample Tests with\nE-Variables\nHere we will formalize the classifier two-sample test with\nE-variables. 
Assume that we are given two independent\nsamples from two possibly different distributions\nX(0)\n1;:::;X(0)\nN0i.i.d.\u0018P0; X(1)\n1;:::;X(1)\nN1i.i.d.\u0018P1:\nBased on these samples we want to test if those distribu-\ntions are equal or not:\nH0:P0=P1 vs.HA:P06=P1:\nIf we introduce the binary variable Yand the abbreviation:\nP(XjY= 0) :=P0(X); P (XjY= 1) :=P1(X);\nand pool the data points X(y)\nnvia augmenting them with\naY-component: (X(y)\nn;Y(y)\nn)withY(y)\nn:=y, then the\npooled data set can be seen as one i.i.d. sample fromP(X;Y )of sizeN:=N0+N1for some unknown\nmarginalP(Y). We can then reformulate the two-sample\ntest as an independence test:\nH0:X? ?Y vs.HA:X\u001a\u001a? ?Y:\nThis allows us to use the E-variables from Section 4 (with-\nout any conditioning variable Z) for (conditional) indepen-\ndence testing. Furthermore, since Yis a binary variable\nwe can write any Markov kernel P(YjX)as a Bernoulli\ndistribution:\nP\u0012(YjX=x) = Ber(\u001b(g\u0012(x)));\nfor some parameterized measurable function g\u0012and where\n\u001b(t) :=1\n1+exp(\u0000t)is the logistic sigmoid-function. So our\nhypothesis spaces look like:\nH0=fBer(q\u0012)j\u00122\u00020g; q \u00122[0;1];\nHA=fBer(\u001b(g\u0012))j\u00122\u0002AgnH 0;\nand them-th conditional E-variable is given by:\nE(m)(y(m)jx(m);x(<m);y(<m)) (7)\n=pA(y(m)jx(m);x(<m);y(<m))\np(y(m)j^\u00120(y(m)))\n=Y\nn2I(m) \u001b(g^\u0012(<m )\nA(xn))\nN(m)\n1=N(m)!yn\n\u0001 1\u0000\u001b(g^\u0012(<m )\nA(xn))\nN(m)\n0=N(m)!1\u0000yn\n:\nNote that the maximum-likelihood estimator of q\u0012ony(m)\nis^q(m)=N(m)\n1=N(m), the frequency of points in the m-\nbatch that are assigned to class y= 1. Furthermore, g\u0012\nhere is trained on (x(<m);y(<m))via binary classification.\n6 Experiments\nIn this section, we propose an evaluation procedure for\ncomparing statistical tests in terms of combined type I and\ntype II analysis which we refer to as a detection error trade-\noff. Based on that, we compare E-C2ST with other existing\napproaches on synthetic, image, and real-life MRI datasets.\nIn all our experiments, E-C2ST shows a type I error lower\nthan the chosen significance level \u000b= 0:05which is not\nthe case for the other baselines.\n6.1 Implementation\nWe implement E-C2ST according to the testing procedure\nproposed in the previous section. We set M= 2 in all\nexperiments. We compare it to the following baselines\n•S-C2ST (standard C2ST), is the C2ST proposed by\n[LPO16]. We train a binary classifier on the aug-\nmented data. The null hypothesis is that accuracy is\nE-Valuating Classifier Two-Sample Tests\n0.5 and the alternative is that it larger 0.5. Follow-\ning [LPO16], we assume that under the null, the accu-\nracy is normally distributed with mean 0:5and vari-\nance1=(2Nte):\n•L-C2ST (logits C2ST) proposed by [CC19] is a ker-\nnel based test, which again trains a binary classifier\nto distinguish the two classes. The null hypothesis is\nrejected if the difference between the classes logits av-\nerage is not significant. The p-values is computed via\na permutation test. We used the implementation pro-\nvided by [LXL+20].\n•D-MMD is a deep kernel based two-sample test pro-\nposed by [LXL+20]. Here, we train a neural net-\nwork to maximize the statistical test power given in\n[LXL+20]. The reported p-values are computed by\nmeans of a permutation test. We used the implemen-\ntation provided by [LXL+20].\n•ME and ME-resnet .ME is the mean embed-\nding test described by [CC19, JSCG16]. 
ME-resnet\nis the mean embedding test with features extracted\nfrom ResNet-152 [HZRS16] trained on ILSVRC\n[RDS+15] which we apply in the image data exper-\niments (see Section 6.5). We optimize for the test lo-\ncations on the train data. The resulting p-values are\nreported on the test data. We used the implementation\nby [JSCG16].\n•SCF andSCF-resnet . Smoothed Characteristic Func-\ntions test (SCF) is also a mean embedding test by\n[CC19, JSCG16] with test locations optimized on the\ntrain data. SCF-resnet is the mean embedding test\nwith features extracted again from ResNet-152 trained\non ILSVRC which we apply in the image data experi-\nments (see Section 6.5). We used the implementation\nby [JSCG16].\n•MMD-F and DFDA in their unsupervised version\n[KKKL20]. We used these methods for the image\ndata. First, we extract features with ResNet-152. Then\nwe conduct the tests by means of the proposed test\nstatistics based on maximum mean discrepancy (the\nMMD-F test) and its normalized version (the DFDA\ntest).\nAll experiments are discussed in detail either in the main\npaper or in the appendix (see Section C.1 for additional\ninformation).\n6.2 Training\nWe split the data into train and test sets with equal sizes.\nWe fit a model on the train data, e.g. a classifier for E-\nC2ST, S-C2ST, and L-C2ST, and a deep kernel model for\nD-MMD. The chosen network architecture for each dataset\nFigure 1: Estimated Type I error per method. The error bar refers\nto a 95% confidence interval. E-C2ST consistently has type I er-\nror lower than the significance level (dotted line) compared to the\nother methods.\nis explained in the appendix (See Section C.1). We selected\nmodels that yield powerful two-sample tests, i.e. achieve\nmaximum power for large enough sample sizes. To ensure\na fair comparison, we fix the network architecture to be the\nsame across models for almost all experiments. We chose a\ndifferent architecture for the D-MMD method for the face\nexpression data to increase the method’s statistical power\non this task. For E-C2ST, L-C2ST and E-C2ST, we used\nthe same trained model to perform the corresponding tests\non the test data. The neural network models are trained\nwith early stopping on a validation (20% of the train set) set\nto prevent overfitting. Two methods, MMD-F and DFDA,\nare used without training and are directly applied to the test\nsplits for computing the corresponding test statistics.\n6.3 Type-I-Error vs. Type-II-Error Trade-off\nWe repeat each experiment 100 times, i.e., for fixed sample\nsize, we randomly sample a train and test set from a fixed\ndistribution or as a subsample from a given dataset. Then,\nwe train a model on the train set. Next, we decide whether\nto reject the null hypothesis with significance level \u000b=\n0:05on the test set. From the 100 experiments, we report\nthe rejection rates for all methods, which corresponds to the\ntype I error if the two classes are from the same distribution\nor the power ( 1\u0000type II error) if they are not.\nWe match each type II error (or equivalently, power) exper-\niment with two type I error experiments in the following\nway: we first run a type II error experiment on two datasets\nfrom different distributions, followed by a type I error ex-\nperiment on each of those two datasets individually. The\nresults are summarized in detection error trade-off (DET)\nplots with type II error on the x-axes and type I error on the\ny-axes. 
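Putting the pieces together, the following is a minimal sketch of how the implementation described in Section 6.1 (M = 2) can be realised: a classifier is trained on the first half of the pooled, label-augmented data and the E-value of Equation 7 is evaluated on the held-out half. The scikit-learn logistic-regression classifier and the Gaussian toy data are stand-ins for the networks and datasets used in the paper, and the first factor is again taken to be the trivial E-variable 1:

import numpy as np
from sklearn.linear_model import LogisticRegression

def e_c2st_m2(x0, x1, alpha=0.05, seed=0):
    # E-C2ST with M = 2: train on batch I(1), evaluate Equation (7) on batch I(2).
    rng = np.random.default_rng(seed)
    x = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])    # pooled data, augmented with labels
    idx = rng.permutation(len(y))
    split = len(y) // 2
    tr, te = idx[:split], idx[split:]

    clf = LogisticRegression(max_iter=1000).fit(x[tr], y[tr])    # stand-in for the networks in the paper
    p1 = clf.predict_proba(x[te])[:, 1]                          # sigma(g(x_n)) on the held-out batch
    y_te = y[te]

    q_hat = y_te.mean()                                          # null MLE: class-1 frequency in the batch
    log_E = np.sum(y_te * np.log(p1 / q_hat) + (1 - y_te) * np.log((1 - p1) / (1 - q_hat)))
    return np.exp(log_E), log_E >= np.log(1 / alpha)             # reject H0: P0 = P1 if E >= 1/alpha

rng = np.random.default_rng(2)
print(e_c2st_m2(rng.normal(0, 1, size=(500, 5)), rng.normal(0.5, 1, size=(500, 5))))

Repeating this procedure on freshly sampled data and recording the rejection rate is what produces the type I error and power estimates reported in the detection error trade-off plots.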
The marker size depends on each class’s sample\nsize, and the two type I error experiments are visualized\nwith different marker styles. A test is considered to per-\nform well if all points are below the significance level line\n(the dotted line) and the points approach the origin with an\nincreasing sample size. That reflects in a test with a proper\nTeodora Pandeva, Tim Bakker, Christian A. Naesseth, Patrick Forr ´e\ntype I error control, with converging type II error towards\nzero. In all experiments, only E-C2ST shows this desired\nproperty consistently. The other tests show very good type\nII error performance, but often with type I error above the\nsignificance level.\n6.4 Synthetic Data\nHigh-dimensional Gaussians. We start by illustrating\nall methods’ type I error control on a simple example. For\nthis experiment, the two datasets are sampled from the\nsame multivariate Gaussian distribution. We fix the di-\nmension to be 50. Figure 1 shows the estimated type I\nerror, i.e. the rejection rate. The error bars are the esti-\nmated 95% confidence intervals (Wald intervals) from the\n100 runs. Our method consistently shows the lowest type\nI error, while the baseline methods often show type I error\neven above the significance level (visualized by the dotted\nline). Another method with good type I error control is\nSCF.\nFigure 2: Detection error trade-off plot for the Blob dataset. Com-\npared to all baselines, E-C2ST shows lower type I error than \u000b\n(dotted line).\nBlob Dataset. This dataset is a two-dimensional Gaus-\nsian mixture model with nine modes arranged on a 3\u00023\ngrid used by [CRSG15, GBR+12] in their analysis. The\ntwo distributions differ in their variance as visualized in\nFigure 6 in Section C.1. We carry out type I and type II\nerror experiments as explained in Section 6.3 and display\nthe results in a DET plot (see Figure 2). D-MMD outper-\nforms the other methods in terms of power, at the cost of\na high type I error. ME, SCF, S-C2ST, and L-C2ST often\nhave type I error above the significance level. In contrast,\nE-C2ST (this paper) keeps type I error substantially below\nthe significance level, while, as expected, the type II error\ndecreases with increasing sample size.\n6.5 Image Data\nMNIST vs DCGAN-MNIST. The MNIST dataset\n[LBBH98] consists of 70 000 handwritten digits. As in\n[LXL+20], we compare MNIST with generated MNISTimages from a pretrained DCGAN model [RMC15]. Fig-\nure 3 makes a one-to-one comparison between our method\nand each baseline method. Almost all baseline methods\nachieve maximum power with very few samples on this\ntask. However, we see the same tendency as before for\nalmost all baseline methods: the estimated type I error is\noften higher than the significance level. An exception here\nis D-MMD which achieves good type I error control at the\ncost of power, on which it underperforms E-C2ST.\nFacial Expressions. The Karolinska Directed Emotional\nFaces (KDEF) data set [LF ¨O98] is used by [JSCG16,\nLPO16, KKKL20] to distinguish between positive (happy,\nneutral, surprised) and negative (afraid, angry, disgusted)\nemotions from faces. We compare all methods on the same\ntask. As before, we compare type I vs. type II error in Fig-\nure 4. Compared to other cases, S-C2ST, D-MMD, MMD-\nF and DFDA have a better type I error but still higher than\nthe on of E-C2ST. Here, ME-ResNet and SCF-ResNet do\nnot perform well. ME-ResNet was not included in Figure 4\ndue to the poor results. 
See Section C.1 for more details.\n6.6 Meta-Analysis on MRI Data\nIn scientific meta-analysis, outcomes from multiple studies\nare combined into a single conclusion. In this section, we\npresent an experiment showing that such meta-analysis on\nC2STs may easily be done using E-values. In particular,\nfollowing Lemma 2.2, we perform a combined test by tak-\ning the product of the three independent E-values. We com-\nbine E-values from three independent experiments of the\nsame sample size – making a single accept/reject decision\nas a result of the three experiments whose data sampled\nfrom the same distributions – and show that this increases\nthe power of E-C2ST. We compare to a meta-analysis per-\nformed with S-C2ST and L-C2ST, where we use Fisher’s\nmethod to compute combined p-values [Fis25].\nFor our analysis we leverage the NYU fastMRI open\ndatabase containing a large number of MRI knee volumes\n(made up of individual slices) [ZKS+18], augmented with\nper-slice clinical pathology annotations by [ZYZ+22]. We\nrestrict ourselves to the singlecoil (non-parallel imaging)\nsetting for simplicity. As in the previous section, our ex-\nperiments analyse both the type-I and type II errors. For\nthe latter, the goal will be for a classifier to distinguish knee\nslices containing one or more pathologies (i.e. unhealthy)\nfrom healthy slices. To ensure independence, we partition\nthe dataset into three equally sized parts. For each of these\nparts we individually train a binary classifier – consisting of\nthe standard 16-channel fastMRI U-Net encoder [ZKS+18]\nand a single linear layer – to classify individual MRI slices.\nWe compute E-values andp-values on the test data for each\nof these classifiers. We perform this procedure 100 times,\nand compute type I and type II errors from the resulting de-\ncisions. We refer to Appendix C.5.1 for additional details.\nE-Valuating Classifier Two-Sample Tests\nFigure 3: Detection error trade-off plots for MNIST. We compare E-C2ST to each baseline separately. E-C2ST has substantially lower\ntype I error compared to all other methods. D-MMD also shows very good type I error control but it underperforms our method in terms\nof type II error.\nFigure 4: Detection error trade-off plot for the KDEF data for\nsample size = 100;200;:::; 1000 . Here most of the methods\n(except L-C2ST and SCF) show type I error below the signifi-\ncance level. E-C2ST is the only test with zero type I error.\nResults are presented in Figure 5 for various sample sizes.\nWe compare the meta-analysis using combined E-values to\nan analysis using the individual E-values instead (i.e. per-\nforming the statistical test on the basis of values from the\nthree experiments individually, rather than on their combi-\nnation). We also compare to similar analyses performed\nwith S-C2ST and L-C2ST, using the same trained classi-\nfier in each case. As expected, E-C2ST dominates S-C2ST\nand L-C2ST on type I error for all sample sizes. Power is\nagain lower than for the p-value tests, but is improved both\nby increasing sample size (as we saw for various datasets\nin previous experiments, and show in Appendix C.5.2 for\nMRI data) and by using a combination of the three inde-\npendent E-values. Performing tests using the product of\nE-values increases power effectively, while still maintain-\ning low type I error. 
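The combination step used in this meta-analysis is straightforward to express in code. Below is a small sketch (with made-up study outcomes, for illustration only) of both aggregation rules discussed here: multiplying independent E-values, which again yields a valid E-value, and Fisher's method for combining independent p-values; scipy is used only for the chi-squared tail probability.

import numpy as np
from scipy.stats import chi2

def combine_e_values(e_values, alpha=0.05):
    # Independent E-values multiply to a valid E-value (cf. Lemma 2.2), so the
    # combined test keeps its finite-sample type I error guarantee.
    E = float(np.prod(e_values))
    return E, E >= 1 / alpha

def fisher_combined_p(p_values, alpha=0.05):
    # Fisher's method: under H0, -2 * sum(log p_i) follows a chi-squared law with 2k d.o.f.
    stat = -2.0 * np.sum(np.log(p_values))
    p = chi2.sf(stat, 2 * len(p_values))
    return p, p <= alpha

# Hypothetical outcomes of three independent studies (numbers invented for illustration).
print(combine_e_values([4.0, 7.0, 2.5]))      # product 70 >= 1/0.05 = 20 -> reject
print(fisher_combined_p([0.10, 0.04, 0.20]))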
Using the Fisher combination further\nincreases the power of the p-value tests, but often at the\ncost of type I error, especially for L-C2ST: in some cases,\n50 100 200 250 500\nT est data size0.0000.0250.0500.0750.100Error valueE-C2ST Type-I errors\nalpha\nproduct\nindividual\n50 100 200 250 500\nT est data sizeS-C2ST Type-I errors\nalpha\nfisher combination\nindividual\n50 100 200 250 500\nT est data sizeL-C2ST Type-I errors\nalpha\nfisher combination\nindividual\n50 100 200 250 500\nT est data size0.00.20.40.60.81.0Error valueE-C2ST Type-II errors\nproduct\nindividual\n50 100 200 250 500\nT est data sizeS-C2ST Type-II errors\nfisher combination\nindividual\n50 100 200 250 500\nT est data sizeL-C2ST Type-II errors\nfisher combination\nindividualFigure 5: Meta-analysis (3 experiments) for MRI data. Testing\nusing combined E-values increases power, while maintaining type\nI error control.\nusing the combined test statistic raises type I error above\nthe significance threshold.\n7 Conclusion\nWe propose E-CS2T, an E-value-based classifier two-\nsample test. E-CS2T combines existing frameworks on pre-\ndictive conditional independence tests and split-likelihood\nratio tests for constructing a valid conditional E-variable.\nWe prove that the resulting E-variable has a finite sample\ntype I error control. Moreover, we have empirically shown\nthat compared to the baseline methods, E-CS2T has better\ntype I control. However, a shortcoming of E-CS2T is that\nit needs to gather more evidence for correctly rejecting the\nnull. To explore this type-I–type-II error trade-off for each\nstatistical test, we visually combined the test’s type I and\ntype II error trajectories. Finally, we have shown on MRI\ndata that E-variables can be utilized for combining inde-\npendently conducted studies, resulting in increased power\nwhile maintaining finite sample type I error guarantees.\nTeodora Pandeva, Tim Bakker, Christian A. Naesseth, Patrick Forr ´e\nAcknowledgements\nWe would like to thank Peter Gr ¨unwald for his exciting talk\nat the AI4Science Colloquium, which introduced us to the\ntheory and background of E-variables and safe testing. He\ninspired us to learn more about these topics and to pursue\nthis research project.\nTim Bakker is partially supported by the Efficient Deep\nLearning research program, which is financed by the Dutch\nResearch Council (NWO) in the domain “Applied and En-\ngineering Sciences” (TTW).\nReferences\n[Bal20] Akshay Balsubramani, Sharp finite-sample\nconcentration of independent variables ,\narXiv preprint arXiv:2008.13293 (2020).\n[BBJ+18] Daniel J Benjamin, James O Berger, Magnus\nJohannesson, Brian A Nosek, E-J Wagen-\nmakers, Richard Berk, Kenneth A Bollen,\nBj¨orn Brembs, Lawrence Brown, Colin\nCamerer, et al., Redefine statistical signifi-\ncance , Nature human behaviour (2018).\n[BK17] Samuel Burkart and Franz J. Kir ´aly,Predic-\ntive Independence Testing, Predictive Condi-\ntional Independence Testing, and Predictive\nGraphical Modelling , arXiv preprint (2017).\n[BMRS+22] T. Bakker, M. Muckley, A. Romero-Soriano,\nM. Drozdzal, and L. 
Pineda, On learning\nadaptive acquisition policies for undersam-\npled multi-coil MRI reconstruction , Proceed-\nings of Machine Learning Research, 2022.\n[BvHW20] Tim Bakker, Herke van Hoof, and Max\nWelling, Experimental design for MRI by\ngreedy policy search , Advances in Neural In-\nformation Processing Systems, 2020.\n[CC19] Xiuyuan Cheng and Alexander Cloninger,\nClassification logit two-sample testing by\nneural networks , arXiv preprint (2019).\n[CRSG15] Kacper P Chwialkowski, Aaditya Ramdas,\nDino Sejdinovic, and Arthur Gretton, Fast\ntwo-sample testing with analytic represen-\ntations of probability measures , Advances\nin Neural Information Processing Systems\n(2015).\n[Csi84] Imre Csisz ´ar,Sanov Property, Generalized\nI-Projection and a Conditional Limit Theo-\nrem, The Annals of Probability (1984).\n[Fis25] R.A. Fisher, Statistical methods for research\nworkers , Edinburgh Oliver & Boyd, 1925.[GBR+12] Arthur Gretton, Karsten M Borgwardt,\nMalte J Rasch, Bernhard Sch ¨olkopf, and\nAlexander Smola, A kernel two-sample test ,\nThe Journal of Machine Learning Research\n(2012).\n[GdHK20] Peter Gr ¨unwald, Rianne de Heide, and\nWouter M. Koolen, Safe Testing , 2020 Infor-\nmation Theory and Applications Workshop\n(ITA), IEEE, 2020.\n[GH12] Michael U. Gutmann and Aapo Hyv ¨arinen,\nNoise-contrastive estimation of unnormal-\nized statistical models, with applications to\nnatural image statistics , Journal of Machine\nLearning Research (2012).\n[GPAM+14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi\nMirza, Bing Xu, David Warde-Farley, Sher-\njil Ozair, Aaron Courville, and Yoshua Ben-\ngio, Generative adversarial nets , Advances\nin Neural Information Processing Systems,\n2014.\n[HTF01] Trevor Hastie, Robert Tibshirani, and Jerome\nFriedman, The elements of statistical learn-\ning, Springer Series in Statistics, Springer\nNew York Inc., New York, NY , USA, 2001.\n[HZRS16] Kaiming He, Xiangyu Zhang, Shaoqing Ren,\nand Jian Sun, Deep residual learning for im-\nage recognition , Proceedings of the IEEE\nconference on computer vision and pattern\nrecognition, 2016.\n[Jr56] Kelly Jr, A new interpretation of information\nrate, World Scientific, 1956.\n[JSCG16] Wittawat Jitkrittum, Zolt ´an Szab ´o, Kacper P\nChwialkowski, and Arthur Gretton, Inter-\npretable distribution features with maximum\ntesting power , Advances in Neural Informa-\ntion Processing Systems (2016).\n[KB14] Diederik P Kingma and Jimmy Ba, Adam:\nA method for stochastic optimization , arXiv\npreprint arXiv:1412.6980 (2014).\n[KKKL20] Matthias Kirchler, Shahryar Khorasani, Mar-\nius Kloft, and Christoph Lippert, Two-\nsample testing using deep learning , Interna-\ntional Conference on Artificial Intelligence\nand Statistics, PMLR, 2020.\n[Kol33] Andrey Kolmogorov, Sulla determinazione\nempirica di una lgge di distribuzione , Inst.\nItal. Attuari, Giorn. (1933).\nE-Valuating Classifier Two-Sample Tests\n[Kui60] Nicolaas H Kuiper, Tests concerning random\npoints on a circle , Nederl. Akad. Wetensch.\nProc. Ser. 
A, 1960.\n[LBBH98] Yann LeCun, L ´eon Bottou, Yoshua Bengio,\nand Patrick Haffner, Gradient-based learn-\ning applied to document recognition , Pro-\nceedings of the IEEE (1998).\n[LBG+21] Jan-Matthis Lueckmann, Jan Boelts, David\nGreenberg, Pedro Goncalves, and Jakob\nMacke, Benchmarking simulation-based in-\nference , Proceedings of The 24th Interna-\ntional Conference on Artificial Intelligence\nand Statistics, 2021.\n[LF¨O98] Daniel Lundqvist, Anders Flykt, and Arne\n¨Ohman, Karolinska directed emotional\nfaces , Cognition and Emotion (1998).\n[LPO16] David Lopez-Paz and Maxime Oquab, Re-\nvisiting Classifier Two-Sample Tests , arXiv\npreprint (2016).\n[LXL+20] Feng Liu, Wenkai Xu, Jie Lu, Guangquan\nZhang, Arthur Gretton, and Danica J\nSutherland, Learning deep kernels for non-\nparametric two-sample tests , International\nconference on machine learning, PMLR,\n2020.\n[MSC+13] Tomas Mikolov, Ilya Sutskever, Kai Chen,\nGreg S Corrado, and Jeff Dean, Distributed\nrepresentations of words and phrases and\ntheir compositionality , Advances in Neural\nInformation Processing Systems, Curran As-\nsociates, Inc., 2013.\n[MW47] Henry B Mann and Donald R Whitney, On a\ntest of whether one of two random variables\nis stochastically larger than the other , The\nannals of mathematical statistics (1947).\n[PBR+20] Luis Pineda, Sumana Basu, Adriana\nRomero, Roberto Calandra, and Michal\nDrozdzal, Active MR k-space sampling\nwith reinforcement learning , International\nConference on Medical Image Computing\nand Computer-Assisted Intervention, 2020.\n[RDS+15] Olga Russakovsky, Jia Deng, Hao Su,\nJonathan Krause, Sanjeev Satheesh, Sean\nMa, Zhiheng Huang, Andrej Karpathy,\nAditya Khosla, Michael Bernstein, et al., Im-\nagenet large scale visual recognition chal-\nlenge , International journal of computer vi-\nsion (2015).[RMC15] Alec Radford, Luke Metz, and Soumith\nChintala, Unsupervised representation\nlearning with deep convolutional generative\nadversarial networks , arXiv preprint (2015).\n[Sha19] Glenn Shafer, The language of betting as a\nstrategy for statistical and scientific commu-\nnication , arXiv preprint (2019).\n[Smi39] NV Smirnov, Sur les hearts de la courbe de\ndistribution empirique , Bullentin mathdma-\ntioue de 1University de Moscou. Serie inter-\nnational (1939).\n[SS98] Alex J Smola and Bernhard Sch ¨olkopf,\nLearning with kernels , Citeseer, 1998.\n[Stu08] Student, The probable error of a mean ,\nBiometrika (1908).\n[VW21] Vladimir V ovk and Ruodu Wang, E-values:\nCalibration, combination and applications ,\nThe Annals of Statistics (2021).\n[Wel47] Bernard L Welch, The generalization of ‘stu-\ndent’s’problem when several different pop-\nulation varlances are involved , Biometrika\n(1947).\n[WRB20] Larry Wasserman, Aaditya Ramdas, and\nSivaraman Balakrishnan, Universal Infer-\nence, Proceedings of the National Academy\nof Sciences (2020).\n[ZKS+18] Jure Zbontar, Florian Knoll, Anuroop Sri-\nram, Matthew J. Muckley, Mary Bruno,\nAaron Defazio, Marc Parente, Krzysztof J.\nGeras, Joe Katsnelson, Hersh Chandarana,\nZizhao Zhang, Michal Drozdzal, Adri-\nana Romero, Michael Rabbat, Pascal Vin-\ncent, James Pinkerton, Duo Wang, Nafissa\nYakubova, Erich Owens, C. Lawrence Zit-\nnick, Michael P. Recht, Daniel K. Sod-\nickson, and Yvonne W. Lui, fastMRI: An\nopen dataset and benchmarks for acceler-\nated MRI , CoRR (2018).\n[ZYZ+22] Ruiyang Zhao, Burhaneddin Yaman, Yuxin\nZhang, Russell Stewart, Austin Dixon, Flo-\nrian Knoll, Zhengnan Huang, Yvonne W.\nLui, Michael S. 
Hansen, and Matthew P.\nLungren, fastMRI+, Clinical pathology an-\nnotations for knee and brain fully sampled\nmagnetic resonance imaging data , Scientific\nData (2022).\nTeodora Pandeva, Tim Bakker, Christian A. Naesseth, Patrick Forr ´e\nA Type II Error Control\nA general finite sample bound for the type II error of testing based on the product of (conditional) i.i.d. E-variables can be\nachieved by Sanov’s theorem, see [Csi84, Bal20].\nTheorem A.1 (Type II error control for conditional i.i.d. E-variables) .LetE:X\u0002Z! R\u00150be a conditional E-variable\nw.r.t.H0givenZ. LetX1;:::;XN: \n\u0002Z!X be conditional random variables that are i.i.d. conditioned on Z. Let\nE(N):=QN\nn=1E(XnjZ). Let\u000b2(0;1],\rN:=\u00001\nNlog\u000b\u00150and for\r2R\u00150put:\nAjz\n\r:=fQ2P(X)jEX\u0018Q[logE(Xjz)]\u0014\rg:\nThen for every P\u00122H Aandz2Z we have the following type II error bound:\nP\u0012\u0010\nE(N)\u0014\u000b\u00001\f\f\fZ=z\u0011\n\u0014exp\u0010\n\u0000N\u0001KL(Ajz\n\rNkPjz\n\u0012)\u0011\n; (8)\nwhich converges to 0ifKL(Ajz\n\rNkPjz\n\u0012)>0for some\r >0. Note that for a subset A\u0012P (X)we abbreviate:\nKL(AkP) := inf\nQ2AKL(QkP):\nProof. If^PN:=1\nNPN\nn=1\u000eXnjZis the empirical distribution then we get the following equivalence, when conditioned on\nZ=z:\nE(N)jz\u0014\u000b\u00001()Y\nn=1E(Xnjz)\u0014\u000b\u00001\n()1\nNNX\nn=1logE(Xnjz)\u0014\u00001\nNlog\u000b=:\rN\n()EX\u0018^Pjz\nN[logE(Xjz)]\u0014\rN\n() ^Pjz\nN2Ajz\n\rN:\nThe bound then follows by a simple application of Sanov’s theorem, see [Csi84, Bal20], for each z2Z individually:\nP\u0012\u0010\nE(N)\u0014\u000b\u00001\f\f\fZ=z\u0011\n=P\u0012\u0010\n^PN2Ajz\n\rN\f\f\fZ=z\u0011\n\u0014exp\u0010\n\u0000N\u0001KL(Ajz\n\rNkPjz\n\u0012)\u0011\n; (9)\nwhich requires the i.i.d. assumption (conditioned on Z) and thatAjz\n\rNis completely convex, which it is.\nThe unconditional version follows from the above by using the one-point space Z=f\u0003gand reads like:\nCorollary A.2 (Type II error control for i.i.d. E-variables) .LetX1;:::;XNbe an i.i.d. sample, E:X ! R\u00150be an\nE-variable w.r.t.H0andE(N):=QN\nn=1E(Xn). Let\u000b2(0;1],\rN:=\u00001\nNlog\u000b\u00150and for\r2R\u00150put:\nA\r:=fQ2P(X)jEQ[logE]\u0014\rg:\nThen for every P\u00122H Awe have the following type II error bound:\nP\u0012\u0010\nE(N)\u0014\u000b\u00001\u0011\n\u0014exp (\u0000N\u0001KL(A\rNkP\u0012)); (10)\nwhich converges to 0ifKL(A\rkP\u0012)>0for some\r >0.\nRelating to the simpler unconditional case of the Corollary we can make the following clarifying remarks.\nRemark A.3. 1. The condition: KL(A\rkP\u0012)>0for some\r >0, is slightly stronger than the condition: EP\u0012[logE]>\n0. Ifsupx2XjlogE(x)j<1then one can show that both those conditions are equivalent.\n2. If there exist \u000e;\r > 0such that for all P\u00122H Awe have KL(A\rkP\u0012)\u0015\u000ethen we easily deduce the uniform type II\nerror bound for N\u0015\u0000log\u000b\n\r:\nsup\nP\u00122H AP\u0012\u0010\nE(N)\u0014\u000b\u00001\u0011\n\u0014exp (\u0000N\u0001\u000e): (11)\nE-Valuating Classifier Two-Sample Tests\nB Proofs\nProof of Lemma 2.2\nProof. By Fubini’s theorem we get:\nE\u0012; h\nE(3)\f\f\fzi\n=Z\nE(3)(x;yjz) (P\u0012\nP ) (dx;dyjz)\n=Z Z\nE(2)(xjy;z)\u0001E(1)(yjz)\nP\u0012(dxjy;z)P (dyjz)\n=Z\u0012Z\nE(2)(xjy;z)P\u0012(dxjy;z)\u0013\n\u0001E(1)(yjz)P (dyjz)\n\u0014Z\n1\u0001E(1)(yjz)P (dyjz)\u00141:\nProof that the E-variable defined in Equation 3 is a conditional E-variable w.r.t.H(m)j(<m)\n0:\nProof. 
For\u00122\u00020we have:\nE\u0012h\nE(m)\f\f\fx(<m)i\n=Z\nE(m)(x(m)jx(<m))p\u0012(x(m)jx(<m))\u0016(dx(m))\n=ZpA(x(m)jx(<m))\nmax ~\u00122\u00020p~\u0012(x(m)jx(<m))p\u0012(x(m)jx(<m))\u0016(dx(m))\n\u0014ZpA(x(m)jx(<m))\np\u0012(x(m)jx(<m))p\u0012(x(m)jx(<m))\u0016(dx(m))\n=Z\npA(x(m)jx(<m))\u0016(dx(m)) = 1:\nProof of Lemma 4.1\nProof. For\u00122\u00020we get:\nE\u0012h\nE(m)\f\f\fx(m);z(m);w(<m)i\n=ZpA(y(m)jx(m);z(m);w(<m))\np(y(m)jz(m);^\u00120(y(m);z(m)))p\u0012(y(m)jz(m))\u0016(dy(m))\n=ZpA(y(m)jx(m);z(m);w(<m))\nmax ~\u00122\u00020p~\u0012(y(m)jz(m))p\u0012(y(m)jz(m))\u0016(dy(m))\n\u0014ZpA(y(m)jx(m);z(m);w(<m))\np\u0012(y(m)jz(m))p\u0012(y(m)jz(m))\u0016(dy(m))\n=Z\npA(y(m)jx(m);z(m);w(<m))\u0016(dy(m)) = 1:\nThis shows the claim for Hpd\n0. The claims forHfx\n0andH\r\n0immediately follow from Lemma 2.2 by multiplying E(m)with\nthe constant E-variable 1andHpd\n0with the hypotheses fP(X;Z)gorfP\u0012(X;Z)j\u00122\u00020g, resp.\nTeodora Pandeva, Tim Bakker, Christian A. Naesseth, Patrick Forr ´e\nLemma B.1. LetE1;:::;EDareDE-variables with respect to the same null hypothesis class H0:Then, the average over\nallDE-variables is an E-variable \u0016E:=1\nDPD\ni=1Eiis an E-variable.\nProof. Letp\u00122H 0:Then, for \u0016Ewe get\nE\u0012\u0002\u0016E\u0003\n=Z\u00101\nDDX\ni=1Ei(x)\u0011\np\u0012(x)\u0016(dx) =1\nDDX\ni=1Z\nEi(x)p\u0012(x)\u0016(dx)\u00141\nDDX\ni=11 = 1\nE-Valuating Classifier Two-Sample Tests\nLayer (type) Output Shape Param #\nLinear-1 [batch size;50] 150\nBatchNorm1d-2 [batch size;50] 100\nReLU-3 [batch size;50] 0\nLinear-4 [batch size;50] 2550\nBatchNorm1d-5 [batch size;50] 100\nReLU-6 [batch size;50] 0\nLinear-7 [batch size;2] 102\nTable 1: The network architecture employed in the synthetic experiments for all baselines.\nLayer (type) Output Shape Param #\nConv2d-1 [batch size;32;30;30] 320\nConv2d-2 [batch size;64;28;28] 18496\nDropout-3 [batch size;64;14;14] 0\nLinear-4 [batch size;128] ([ batch size;600]) 1605760 (7527000)\nDropout-5 [batch size;128] ([ batch size;300]) 0\nLinear-6 [batch size;2] ([batch size;300]) 258 (180300)\nTable 2: The network architecture employed in the MNIST experiment for E-C2ST, L-C2ST, S-C2ST. The D-MMD\narchitecture differences are specified in brackets.\nC Experiments\nIn this section, we explain the implementation and training of our models in detail. In Section C.1, we discuss the ar-\nchitecture choice and training of E-C2ST and the other baseline methods for the synthetic and image data experiments.\nAdditional experiments are shown in Sections C.2 and C.4. Details regarding the implementation and training of the MRI\nexperiment are given in Sections C.5.1 and supplementary results in Section C.5.2.\nC.1 Training\nFirst, we discuss the training of the deep two-sample tests (E-C2ST, S-C2ST, L-C2ST, D-MMD). We used Adam optimizer\n[KB14] with learning rates 5\u00011e\u00004for the synthetic data and 1e\u00004for the image data. The batch size for the image data\nis128. For the synthetic data we used the whole dataset. In all experiments we used 80% of the training data for training\nand20% for validation with early stopping.\n•High Dimensional Gaussians andBlob data. The used network architectures are described in Table 1. For the\ninitialization of the D-MMD parameters we used the ones provided by [LXL+20]. We trained the models with early\nstopping with patience 15 epochs (High Dimensional Gaussians) and 20 (50 for D-MMD) epochs (Blob data).\n•MNIST. The dataset is obtained from https://github.com/fengliu90/DK-for-TST . 
Table 2 outlines\nthe neural network architectures. The D-MMD architecture differs from the C2ST ones in the last two linear layer.\nAgain we used the parameter initialization provided by [LXL+20]. We trained the models with early stopping with\npatience of 20epochs.\n•Face Expression Data. For the C2ST methods we used the DCGAN Discriminator provided in Table 3. We set the\npatience parameter to 10 epochs. We couldn’t train D-MMD on this task by utilizing the Discriminator architecture\ndue to the method’s sensitivity to the parameter initialization. We leave this for future work. We proposed another\napproach inspired by [KKKL20]. We extract features by using ResNet and then we train a two-layer network with\nsize 300 with an ReLU activation layer in-between on the extracted features with patience of 100 epochs. It yielded\nvery good results as displayed in Figure 4.\nFor ME (ME-ResNet) and SCF (SCF-ResNet), we set J= 5in all experiments.\nC.2 High-Dimensional Gaussians.\nWe run E-C2ST and the other baseline methods on high-dimensional Gaussian data. For the type II error experiment, we\nsample the two datasets from N(0;I50)andN(0;diag(2;1;:::; 1)). We run two type I error experiments on datasets either\nTeodora Pandeva, Tim Bakker, Christian A. Naesseth, Patrick Forr ´e\nLayer (type) Output Shape Param #\nConv2d-1 [batch size;16;16;16] 448\nLeakyReLU-2 [batch size;16;16;16] 0\nDropout2d-3 [batch size;16;16;16] 0\nConv2d-4 [batch size;32;8;8] 4640\nLeakyReLU-5 [batch size;32;8;8] 0\nDropout2d-6 [batch size;32;8;8] 0\nBatchNorm2d-7 [batch size;32;8;8] 64\nConv2d-8 [batch size;64;4;4] 18496\nLeakyReLU-9 [batch size;64;4;4] 0\nDropout2d-10 [batch size;64;4;4] 0\nBatchNorm2d-11 [batch size;64;4;4] 128\nConv2d-12 [batch size;128;2;2] 73856\nLeakyReLU-13 [batch size;128;2;2] 0\nDropout2d-14 [batch size;128;2;2] 0\nBatchNorm2d-15 [batch size;128;2;2] 256\nFlatten-16 [batch size;512] 0\nLinear-17 [batch size;100] 51300\nReLU-18 [batch size;100] 0\nLinear-19 [batch size;2] 202\nTable 3: The network architecture employed in the MNIST experiment for E-C2ST, L-C2ST, S-C2ST.\nFigure 6: The two classes of the blob dataset.\nsampled fromN(0;I50)orN(0;diag(2;1;:::; 1)), respectively. The results are displayed in Figure 9b. All methods show\nvery good type II error control. E-C2ST has lower power in the low sample size case (sample size=100). The type I error\ncontrol is above the significance level for all baseline methods but SCF. However, this method underperforms compared to\nthe other baselines, which complies with the discussed type I and II error trade-off.\nThe type I error for N(0;I50)is further investigated in Section 1. We extend this experiment by increasing the sample size.\nFigure 7 shows the estimated type I error and the corresponding 95% confidence interval per method for sample size =\n1000;1500;2000;2500 . Only S-C2ST shows an improvement in the type I error control compared to the results discussed\nin Section 1. However, both experiments do not show evidence that the methods improve their type I error control with\nincreasing sample size.\nC.3 Blob Dataset.\nThe two Blob distributions used in the corresponding type 2 error experiment are visualized in Figure 6. The means are\nthe same for both classes and are arranged in a 3\u00023grid. The two populations differ in their variance. Figure 9a displays\nthe same experiment as the one in Figure 2. 
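For reference, the Blob data described above can be generated along the following lines (a sketch only: the grid spacing and the per-class noise scales are assumptions for illustration, not the exact values used in the experiments):

import numpy as np

def sample_blob(n, noise_scale, seed=0):
    # Nine Gaussian modes whose means lie on a 3x3 grid; the two classes share
    # these means and differ only in the spread around them.
    rng = np.random.default_rng(seed)
    grid = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float) * 5.0
    centres = grid[rng.integers(0, len(grid), size=n)]
    return centres + rng.normal(scale=noise_scale, size=(n, 2))

x_p = sample_blob(500, noise_scale=0.5, seed=1)   # first population
x_q = sample_blob(500, noise_scale=1.0, seed=2)   # second population: same means, larger variance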
Here, we compare E-C2ST to one of the baseline methods in each sub-figure.\nC.4 Face Expression Data\nFigure 9c shows a DET plot between E-C2ST and each baseline method. This figure is another representation of the\nexperiment from Figure 4. As we mentioned earlier, D-MMD performs as well as E-C2ST in terms of type I and type\nII error control. However, we used different model architectures for the training as described in Section C.1. Figure\nE-Valuating Classifier Two-Sample Tests\nFigure 7: Type I error control. The error bars refer to 95% confidence interval. E-C2ST has the lowest type I error across all methods.\nFigure 8: E-C2ST vs D-MMD. We trained D-MMD with network architecture close to E-C2ST one. The resulted D-MMD is under-\npowered compared to E-C2ST and with similar type I error control.\n8 compares E-C2ST and D-MMD on the same task, when both models have approximately the same architecture. They\ndiffer in the last layer, where the D-MMD model has an output size of 300as given by [LXL+20]. Here, D-MMD has lower\npower than in Figure 9c. It turns our that D-MMD performance is very sensitive to the kernel parameters initialization and\nwe had difficulties training that model. A hyper-parameter tuning procedure is needed which we leave for future work.\nC.5 MRI details\nIn this section we provide additional implementation details and results on the MRI data.\nC.5.1 Data and training details\nWe leverage the NYU fastMRI open database containing a large number of MRI knee volumes (made up of individual\nslices) [ZKS+18], augmented with per-slice clinical pathology annotations by [ZYZ+22]. We restrict ourselves to the sin-\nglecoil (non-parallel imaging) setting for simplicity. We sample data from the combined fastMRI knee train and validation\nsets, omitting the potentially slightly differently distributed test set [PBR+20, BvHW20, BMRS+22].\nWe perform our experiments on class-stratified data; since there are many more healthy than unhealthy slices in the data,\nwe must discard some of the former. Some slices are labeled as containing no pathologies, but exist in the same volume\nas unhealthy slices. Since such slices are especially likely to have been mislabeled, or be correlated with nearby unhealthy\nslices, we discard them before stratifying the data. We call the resulting filtered dataset Df.\nWe then choose a dataset size Nand sampleN=2slices of each class (healthy and unhealthy) from Df. This allows us to\ndo two type I error tests: one for each class. For both classes, we randomly label half the data as positive and the other\nhalf as negative. We then split half the data off as test data, and split a further 20% of the remaining training data off\nas validation data. For the type II error tests, we instead independently sample N=2slices of each class from Df, and\nrandomly discard half, such that the total number of datapoints corresponds to the type I error experiments. Here we label\nthe unhealthy slices as positive and the healthy ones as negative. We then perform the same train-val-test split. In each\ncase, this results in N=4test points,N=5train points, and N=20validation points.\nThe binary classifiers – consisting of a standard 16-channel fastMRI U-Net encoder [ZKS+18] and a single linear layer –\nare trained on size 64 batches of full-resolution slices (320\u0002320) by the Adam optimiser [KB14] with learning rate 10\u00005\nTeodora Pandeva, Tim Bakker, Christian A. 
Naesseth, Patrick Forr ´e\n(a) Blob Data\n(b) High Dimensional Gaussians Data\n(c) Face Expression Data\nFigure 9: Detection error trade-off plots comparing E-C2ST and each baseline method. E-C2ST consistently shows the lowest type I\nerror among all methods.\nE-Valuating Classifier Two-Sample Tests\nfor 30 epochs. We employ early stopping on the validation loss with patience 3. We compute the E-value andp-value for\neach classifier on the test data.\nC.5.2 Additional results\nIn this section we present two additional results: a detection error trade-off plot for MRI data similar to those shown for\nthe other datasets in the main text, and a meta-analysis experiment using the average of E-values, following Lemma B.1.\nFor the detection error trade-off plot, we perform the experimental procedure outlined in Sections 6.2 and 6.3, comparing\nE-C2ST to S-C2ST and L-C2ST. As in Section C.4, we run into similar issues when training D-MMD. An extensive\nhyperparameter search is needed to find a powerful D-MMD model, which we leave for future work.\nWe use dataset sizes N2f200;400;800;1000;2000;3000;4000;5000g, corresponding to (test) sample sizes of 50, 100,\n200, 250, 500, 750, 1000, and 1250. We repeat all experiments 100 times to compute type I and type II errors.\n0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7\nType II error0.000.020.040.060.080.10Type I errorComparison = E-C2ST, S-C2ST\n0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7\nType II errorComparison = E-C2ST, L-C2ST\nLabel\nS-C2ST\nL-C2ST\nE-C2ST\nSize\n50\n250\n500\n750\n1000\n1250\nDataset\nDataset 1\nDataset 2\nFigure 10: Detection error trade-off plot for MRI data. E-C2ST dominates S-C2ST and L-C2ST on type I error, while power converges\nto 1 with increased sample size.\nResults are shown in Figure 10. E-C2ST shows proper type I error control across sample sizes, while S-C2ST and L-C2ST\noften do not achieve type I error below the significance level. Power of E-C2ST is somewhat lower than that of S-C2ST\nand L-C2ST for small sample sizes, but converges to 1 with more data.\nIn the meta-analysis experiments of the main text, we combined E-values by taking their product (Lemma 2.2) and demon-\nstrated that this procedure improved E-C2ST’s power on MRI data while maintaining low type I error. Here we show\nadditional results using the average of E-values (following Lemma B.1) in Figure 11. Performing tests using either the\naverage or the product of E-values increases power effectively, while still maintaining low type I error.\nTeodora Pandeva, Tim Bakker, Christian A. Naesseth, Patrick Forr ´e\n50 100 200 250 500\nT est data size0.0000.0250.0500.0750.100Error valueE-C2ST Type-I errors\nalpha\naverage\nproduct\nindividual\n50 100 200 250 500\nT est data sizeS-C2ST Type-I errors\nalpha\nfisher combination\nindividual\n50 100 200 250 500\nT est data sizeL-C2ST Type-I errors\nalpha\nfisher combination\nindividual\n50 100 200 250 500\nT est data size0.00.20.40.60.81.0Error valueE-C2ST Type-II errors\naverage\nproduct\nindividual\n50 100 200 250 500\nT est data sizeS-C2ST Type-II errors\nfisher combination\nindividual\n50 100 200 250 500\nT est data sizeL-C2ST Type-II errors\nfisher combination\nindividual\nFigure 11: Meta-analysis (3 experiments) for MRI data. Testing using combined E-values increases power, while maintaining type I\nerror control.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
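The E-C2ST record above reports meta-analysis results in which per-experiment E-values are combined either by their product (its Lemma 2.2) or by their average (its Lemma B.1), and the test rejects when the combined E-value is large. The snippet below is a minimal illustrative sketch of that combination step, not the authors' code; the function names, the example E-values, and the significance level alpha = 0.05 are assumptions.

```python
# Hedged sketch of E-value combination for meta-analysis; not the authors' implementation.
import numpy as np

def combine_e_values(e_values, how="product"):
    """Combine E-values from repeated experiments into a single E-value."""
    e = np.asarray(e_values, dtype=float)
    if how == "product":    # valid when the experiments are independent
        return float(np.prod(e))
    if how == "average":    # an arithmetic mean of E-values is again an E-value
        return float(np.mean(e))
    raise ValueError("how must be 'product' or 'average'")

def reject_null(e_value, alpha=0.05):
    """By Markov's inequality, rejecting when E >= 1/alpha keeps the type I error at most alpha."""
    return e_value >= 1.0 / alpha

# Example with three hypothetical experiments, mirroring the 3-experiment meta-analysis above.
es = [4.0, 7.5, 1.2]
for how in ("product", "average"):
    combined = combine_e_values(es, how)
    print(how, round(combined, 2), "reject" if reject_null(combined) else "fail to reject")
```

The product rule rewards consistently large E-values across experiments, while the average rule is more forgiving of a single weak experiment; both keep the type I error below alpha, which is consistent with the behaviour reported for Figure 11 in the record.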
{
"id": "1XbDq1D--nPe",
"year": null,
"venue": "WMT@EACL2009",
"pdf_link": "https://aclanthology.org/W09-0434.pdf",
"forum_link": "https://openreview.net/forum?id=1XbDq1D--nPe",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Quantitative Analysis of Reordering Phenomena.",
"authors": [
"Alexandra Birch",
"Phil Blunsom",
"Miles Osborne"
],
"abstract": "Alexandra Birch, Phil Blunsom, Miles Osborne. Proceedings of the Fourth Workshop on Statistical Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Proceedings of the Fourth Workshop on Statistical Machine Translation , pages 197–205,\nAthens, Greece, 30 March – 31 March 2009. c/circlecopyrt2009 Association for Computational Linguistics\nA Quantitative Analysis of Reordering Phenomena\nAlexandra Birch Phil Blunsom Miles Osborne\[email protected] [email protected] [email protected]\nUniversity of Edinburgh\n10 Crichton Street\nEdinburgh, EH8 9AB, UK\nAbstract\nReordering is a serious challenge in sta-\ntistical machine translation. We propose\na method for analysing syntactic reorder-\ning in parallel corpora and apply it to un-\nderstanding the differences in the perfor-\nmance of SMT systems. Results at recent\nlarge-scale evaluation campaigns show\nthat synchronous grammar-based statisti-\ncal machine translation models produce\nsuperior results for language pairs such as\nChinese to English. However, for language\npairs such as Arabic to English, phrase-\nbased approaches continue to be competi-\ntive. Until now, our understanding of these\nresults has been limited to differences in\nBLEUscores. Our analysis shows that cur-\nrent state-of-the-art systems fail to capture\nthe majority of reorderings found in real\ndata.\n1 Introduction\nReordering is a major challenge in statistical ma-\nchine translation. Reordering involves permuting\nthe relative word order from source sentence to\ntranslation in order to account for systematic dif-\nferences between languages. Correct word order is\nimportant not only for the fluency of output, it also\naffects word choice and the overall quality of the\ntranslations.\nIn this paper we present an automatic method\nfor characterising syntactic reordering found in a\nparallel corpus. This approach allows us to analyse\nreorderings quantitatively, based on their number\nand span, and qualitatively, based on their relation-\nship to the parse tree of one sentence. The methods\nwe introduce are generally applicable, only requir-\ning an aligned parallel corpus with a parse over the\nsource or the target side, and can be extended to\nallow for more than one reference sentence and\nderivations on both source and target sentences.Using this method, we are able to compare the re-\nordering capabilities of two important translation\nsystems: a phrase-based model and a hierarchical\nmodel.\nPhrase-based models (Och and Ney, 2004;\nKoehn et al., 2003) have been a major paradigm\nin statistical machine translation in the last few\nyears, showing state-of-the-art performance for\nmany language pairs. They search all possible re-\norderings within a restricted window, and their\noutput is guided by the language model and a\nlexicalised reordering model (Och et al., 2004),\nboth of which are local in scope. However, the\nlack of structure in phrase-based models makes it\nvery difficult to model long distance movement of\nwords between languages.\nSynchronous grammar models can encode\nstructural mappings between languages which al-\nlow complex, long distance reordering. Some\ngrammar-based models such as the hierarchical\nmodel (Chiang, 2005) and the syntactified target\nlanguage phrases model (Marcu et al., 2006) have\nshown better performance than phrase-based mod-\nels on certain language pairs.\nTo date our understanding of the variation in re-\nordering performance between phrase-based and\nsynchronous grammar models has been limited to\nrelative B LEUscores. However, Callison-Burch et\nal. 
(2006) showed that B LEUscore alone is insuffi-\ncient for comparing reordering as it only measures\na partial ordering on n-grams. There has been little\ndirect research on empirically evaluating reorder-\ning.\nWe evaluate the reordering characteristics of\nthese two paradigms on Chinese-English and\nArabic-English translation. Our main findings are\nas follows: (1) Chinese-English parallel sentences\nexhibit many medium and long-range reorderings,\nbut less short range ones than Arabic-English, (2)\nphrase-based models account for short-range re-\norderings better than hierarchical models do, (3)\n197\nby contrast, hierarchical models clearly outper-\nform phrase-based models when there is signif-\nicant medium-range reordering, and (4) none of\nthese systems adequately deal with longer range\nreordering.\nOur analysis provides a deeper understand-\ning of why hierarchical models demonstrate bet-\nter performance for Chinese-English translation,\nand also why phrase-based approaches do well at\nArabic-English.\nWe begin by reviewing related work in Sec-\ntion 2. Section 3 describes our method for ex-\ntracting and measuring reorderings in aligned and\nparsed parallel corpora. We apply our techniques\nto human aligned parallel treebank sentences in\nSection 4, and to machine translation outputs in\nSection 5. We summarise our findings in Section 6.\n2 Related Work\nThere are few empirical studies of reordering be-\nhaviour in the statistical machine translation lit-\nerature. Fox (2002) showed that many common\nreorderings fall outside the scope of synchronous\ngrammars that only allow the reordering of child\nnodes. This study was performed manually and\ndid not compare different language pairs or trans-\nlation paradigms. There are some comparative\nstudies of the reordering restrictions that can be\nimposed on the phrase-based or grammar-based\nmodels (Zens and Ney, 2003; Wellington et al.,\n2006), however these do not look at the reordering\nperformance of the systems. Chiang et al. (2005)\nproposed a more fine-grained method of compar-\ning the output of two translation systems by us-\ning the frequency of POS sequences in the output.\nThis method is a first step towards a better under-\nstanding of comparative reordering performance,\nbut neglects the question of what kind of reorder-\ning is occurring in corpora and in translation out-\nput.\nZollmann et al. (2008) performed an empiri-\ncal comparison of the B LEU score performance\nof hierarchical models with phrase-based models.\nThey tried to ascertain which is the stronger model\nunder different reordering scenarios by varying\ndistortion limits the strength of language models.\nThey show that the hierarchical models do slightly\nbetter for Chinese-English systems, but worse for\nArabic-English. However, there was no analysis of\nthe reorderings existing in their parallel corpora,\nor on what kinds of reorderings were produced in\ntheir output. We perform a focused evaluation of\nthese issues.Birch et al. (2008) proposed a method for ex-\ntracting reorderings from aligned parallel sen-\ntences. We extend this method in order to constrain\nthe reorderings to a derivation over the source sen-\ntence where possible.\n3 Measuring Reordering\nReordering is largely driven by syntactic differ-\nences between languages and can involve complex\nrearrangements between nodes in synchronous\ntrees. 
Modeling reordering exactly would be\nsparse and heterogeneous and thus we make an\nimportant simplifying assumption in order for the\ndetection and extraction of reordering data to be\ntractable and useful. We assume that reordering\nis a binary process occurring between two blocks\nthat are adjacent in the source. We extend the\nmethods proposed by Birch et al. (2008) to iden-\ntify and measure reordering. Modeling reordering\nas the inversion in order of two adjacent blocks is\nsimilar to the approach taken by the Inverse Trans-\nduction Model (ITG) (Wu, 1997), except that here\nwe are not limited to a binary tree. We also detect\nand include non-syntactic reorderings as they con-\nstitute a significant proportion of the reorderings.\nBirch et al. (2008) defined the extraction pro-\ncess for a sentence pair that has been word aligned.\nThis method is simple, efficient and applicable to\nall aligned sentence pairs. However, if we have ac-\ncess to the syntax tree, we can more accurately\ndetermine the groupings of embedded reorder-\nings, and we can also access interesting informa-\ntion about the reordering such as the type of con-\nstituents that get reordered. Figure 1 shows the\nadvantage of using syntax to guide the extraction\nprocess. Embedded reorderings that are extracted\nwithout syntax assume a right branching structure.\nReorderings that are extracted using the syntac-\ntic extraction algorithm reflect the correct sentence\nstructure. We thus extend the algorithm to extract-\ning syntactic reorderings. We require that syntac-\ntic reorderings consist of blocks of whole sibling\nnodes in a syntactic tree over the source sentence.\nIn Figure 2 we can see a sentence pair with an\nalignment and a parse tree over the source. We per-\nform a depth first recursion through the tree, ex-\ntracting the reorderings that occur between whole\nsibling nodes. Initially a reordering is detected be-\ntween the leaf nodes P and NN. The block growing\nalgorithm described in Birch et al. (2008) is then\nused to grow block A to include NT and NN, and\nblock B to include P and NR. The source and tar-\nget spans of these nodes do not overlap the spans198\nFigure 1. An aligned sentence pair which shows two\ndifferent sets of reorderings for the case without and\nwith a syntax tree.\nof any other nodes, and so the reordering is ac-\ncepted. The same happens for the higher level re-\nordering where block A covers NP-TMP and PP-\nDIR, and block B covers the VP. In cases where\nthe spans do overlap spans of nodes that are not\nsiblings, these reorderings are then extracted us-\ning the algorithm described in Birch et al. (2008)\nwithout constraining them to the parse tree. These\nnon-syntactic reorderings constitute about 10% of\nthe total reorderings and they are a particular chal-\nlenge to models which can only handle isomorphic\nstructures.\nRQuantity\nThe reordering extraction technique allows us to\nanalyse reorderings in corpora according to the\ndistribution of reordering widths and syntactic\ntypes. In order to facilitate the comparison of dif-\nferent corpora, we combine statistics about in-\ndividual reorderings into a sentence level metric\nwhich is then averaged over a corpus. This met-\nric is defined using reordering widths over the tar-\nget side to allow experiments with multiple lan-\nguage pairs to be comparable when the common\nlanguage is the target.\nWe use the average RQuantity (Birch et al.,\n2008) as our measure of the amount of reordering\nin a parallel corpus. 
It is defined as follows:\nRQuantity = Σ_{r ∈ R} (|r_At| + |r_Bt|) / I\nwhere R is the set of reorderings for a sentence, I is the target sentence length, A and B are the two blocks involved in the reordering, and |r_At| is the size or span of block A on the target side. RQuantity is thus the sum of the spans of all the reordering blocks on the target side, normalised by the length of the target sentence.\nFigure 2. A sentence pair from the test corpus, with its alignment and parse tree. Two reorderings are shown with two different dash styles.\nThe minimum RQuantity for a sentence would be 0. The maximum RQuantity occurs where the order of the sentence is completely inverted and the RQuantity is Σ_{i=2}^{I} i. See, for example, Figure 1 where the RQuantity is 9/4.\n4 Analysis of Reordering in Parallel Corpora\nCharacterising the reordering present in different human generated parallel corpora is crucial to understanding the kinds of reordering we must model in our translations. We first need to extract reorderings for which we need alignments and derivations. We could use automatically generated annotations, however these contain errors and could be biased towards the models which created them. The GALE project has provided gold standard word alignments for Arabic-English (AR-EN) and Chinese-English (CH-EN) sentences.1 A subset of these sentences come from the Arabic and Chinese treebanks, which provide gold standard parse trees. The subsets of parallel data for which we have both alignments and parse trees consist of 3,380 CH-EN sentences and 4,337 AR-EN sentences.\n1 see LDC corpus LDC2006E93 version GALE-Y1Q4\nFigure 3. Sentence level measures of RQuantity for the CH-EN and AR-EN corpora for different English sentence lengths.\nFigure 4. Comparison of reorderings of different widths for the CH-EN and AR-EN corpora.\nFigure 3 shows that the different corpora have very different reordering characteristics. The CH-EN corpus displays about three times the amount of reordering (RQuantity) than the AR-EN corpus. For CH-EN, the RQuantity increases with sentence length and for AR-EN, it remains constant. This seems to indicate that for longer CH-EN sentences there are larger reorderings, but this is not the case for AR-EN. RQuantity is low for very short sentences, which indicates that these sentences are not representative of the reordering characteristics of a corpus. The measures seem to stabilise for sentences with lengths of over 20 words.\nThe average amount of reordering is interesting, but it is also important to look at the distribution of reorderings involved. Figure 4 shows the reorderings in the CH-EN and AR-EN corpora broken down by the total width of the source span of the reorderings. The figure clearly shows how different the two language pairs are in terms of reordering widths.\nFigure 5. The four most common syntactic types being reordered forward in target plotted as % of total syntactic reorderings against reordering width (CH-EN).
Compared to the CH-EN language pair, the distribution of reorderings in AR-EN has many more reorderings over short distances, but many fewer medium or long distance reorderings. We define short, medium or long distance reorderings to mean that they have a reordering of width of between 2 to 4 words, 5 to 8 and more than 8 words respectively.\nSyntactic reorderings can reveal very rich language-specific reordering behaviour. Figure 5 is an example of the kinds of data that can be used to improve reordering models. In this graph we selected the four syntactic types that were involved in the largest number of reorderings. They covered the block that was moved forward in the target (block A). We can see that different syntactic types display quite different behaviour at different reordering widths and this could be important to model.\nHaving now characterised the space of reordering actually found in parallel data, we now turn to the question of how well our translation models account for them. As both the translation models investigated in this work do not use syntax, in the following sections we focus on non-syntactic analysis.\n5 Evaluating Reordering in Translation\nWe are interested in knowing how current translation models perform specifically with regard to reordering. To evaluate this, we compare the reorderings in the parallel corpora with the reorderings that exist in the translated sentences. We compare two state-of-the-art models: the phrase-based system Moses (Koehn et al., 2007) (with lexicalised reordering), and the hierarchical model Hiero (Chiang, 2007). We use default settings for both models: a distortion limit of seven for Moses, and a maximum source span limit of 10 words for Hiero. We trained both models on subsets of the NIST 2008 data sets, consisting mainly of news data, totalling 547,420 CH-EN and 1,069,658 AR-EN sentence pairs. We used a trigram language model on the entire English side (211M words) of the NIST 2008 Chinese-English training corpus. Minimum error rate training was performed on the 2002 NIST test for CH-EN, and the 2004 NIST test set for AR-EN.\nTable 1. The RQuantity and the number of sentences for each reordering test set.\n                      None   Low    Medium  High\nAverage RQuantity\n  CH-EN               0      0.39   0.82    1.51\n  AR-EN               0      0.10   0.25    0.57\nNumber of Sentences\n  CH-EN               105    367    367     367\n  AR-EN               293    379    379     379\n5.1 Reordering Test Corpus\nIn order to determine what effect reordering has on translation, we extract a test corpus with specific reordering characteristics from the manually aligned and parsed sentences described in Section 4. To minimise the impact of sentence length, we select sentences with target lengths from 20 to 39 words inclusive. In this range RQuantity is stable. From these sentences we first remove those with no detected reorderings, and we then divide up the remaining sentences into three sets of equal sizes based on the RQuantity of each sentence. We label these test sets: “none”, “low”, “medium” and “high”.\nAll test sentences have only one reference English sentence. MT evaluations using one reference cannot make strong claims about any particular test sentence, but are still valid when used to compare large numbers of hypotheses.\nTable 1 and Figure 6 show the reordering characteristics of the test sets.
As expected, we see more reordering for Chinese-English than for Arabic to English.\nIt is important to note that although we might name a set “low” or “high”, this is only relative to the other groups for the same language pair. The “high” AR-EN set has a lower RQuantity than the “medium” CH-EN set. Figure 6 shows that the CH-EN reorderings in the higher RQuantity groups have more and longer reorderings. The AR-EN sets show similar differences in reordering behaviour.\nFigure 6. Number of reorderings in the CH-EN test set plotted against the total width of the reorderings.\nFigure 7. BLEU scores for the different CH-EN reordering test sets and the combination of all the groups for the two translation models. The 95% confidence levels as measured by bootstrap resampling are shown for each bar.\n5.2 Performance on Test Sets\nIn this section we compare the translation output for the phrase-based and the hierarchical system for different reordering scenarios. We use the test sets created in Section 5.1 to explicitly isolate the effect reordering has on the performance of two translation systems.\nFigure 7 and Figure 8 show the BLEU score results of the phrase-based model and the hierarchical model on the different reordering test sets. The 95% confidence intervals as calculated by bootstrap resampling (Koehn, 2004) are shown for each of the results. We can see that the models show quite different behaviour for the different test sets and for the different language pairs. This demonstrates that reordering greatly influences the BLEU score performance of the systems.\nFigure 8. BLEU scores for the different AR-EN reordering test sets and the combination of all the groups for the two translation models. The 95% confidence levels as measured by bootstrap resampling are shown for each bar.\nIn Figure 7 we see that the hierarchical model performs considerably better than Moses on the “medium” CH-EN set, although the confidence intervals for these results overlap somewhat. This supports the claim that Hiero is better able to capture longer distance reorderings than Moses.\nHiero performs significantly worse than Moses on the “none” and “low” sets for CH-EN, and for all the AR-EN sets, other than “none”. All these sets have a relatively low amount of reordering, and in particular a low number of medium and long distance reorderings. The phrase-based model could be performing better because it searches all possible permutations within a certain window whereas the hierarchical model will only permit reorderings for which there is lexical evidence in the training corpus. Within a small window, this exhaustive search could discover the best reorderings, but within a bigger window, the more constrained search of the hierarchical model produces better results. It is interesting that Hiero is not always the best choice for translation performance, and depending on the amount of reordering and the distribution of reorderings, the simpler phrase-based approach is better.\nThe fact that both models show equally poor performance on the “high” RQuantity test set suggests that the hierarchical model has no advantage over the phrase-based model when the reorderings are long enough and frequent enough.
Neither Moses nor Hiero can perform long distance reorderings, due to the local constraints placed on their search which allows performance to be linear with respect to sentence length. Increasing the window in which these models are able to perform reorderings does not necessarily improve performance, due to the number of hypotheses the models must discriminate amongst.\nFigure 9. Reorderings in the CH-EN MOSES translation of the reordering test set, plotted against the total width of the reorderings.\nThe performance of both systems on the “high” test set could be much worse than the BLEU score would suggest. A long distance reordering that has been missed would only be penalised by BLEU once at the join of the two blocks, even though it might have a serious impact on the comprehension of the translation. This flaw seriously limits the conclusions that we can draw from BLEU score, and motivates analysing translations specifically for reordering as we do in this paper.\nReorderings in Translation\nAt best, BLEU can only partially reflect the reordering performance of the systems. We therefore perform an analysis of the distribution of reorderings that are present in the systems’ outputs, in order to compare them with each other and with the source-reference distribution.\nFor each hypothesis translation, we record which source words and phrase pairs or rules were used to produce which target words. From this we create an alignment matrix from which reorderings are extracted in the same manner as previously done for the manually aligned corpora.\nFigure 9 shows the distribution of reorderings that occur between the source sentence and the translations from the phrase-based model. This graph is interesting when compared with Figure 6, which shows the reorderings that exist in the original reference sentence pair. The two distributions are quite different. Firstly, as the models use phrases which are treated as blocks, reorderings which occur within a phrase are not recorded. This reduces the number of shorter distance reorderings in the distribution in Figure 6, as mainly short phrase pairs are used in the hypothesis. However, even taking reorderings within phrase pairs into account, there are many fewer reorderings in the translations than in the references, and there are no long distance reorderings.\nFigure 10. Reorderings in the CH-EN Hiero translation of the reordering test set, plotted against the total width of the reorderings.\nIt is interesting that the phrase-based model is able to capture the fact that reordering increases with the RQuantity of the test set. Looking at the equivalent data for the AR-EN language pair, a similar pattern emerges: there are many fewer reorderings in the translations than in the references.\nFigure 10 shows the reorderings from the output of the hierarchical model. The results are very different to both the phrase-based model output (Figure 9) and to the original reference reordering distribution (Figure 6). There are fewer reorderings here than even in the phrase-based output. However, the Hiero output has a slightly higher BLEU score than the Moses output. The number of reorderings is clearly not the whole story.
Part of the reason why the output seems to have few reorderings and yet scores well is that the output of hierarchical models does not lend itself to the analysis that we have performed successfully on the reference or phrase-based translation sentence pairs. This is because the output has a large number of non-contiguous phrases which prevent the extraction of reorderings from within their span. Only 4.6% of phrase-based words were blocked off due to non-contiguous phrases but 47.5% of the hierarchical words were. This problem can be ameliorated with the detection and unaligning of words which are obviously dependent on other words in the non-contiguous phrase.\nFigure 11. Number of reorderings in the original CH-EN test set, compared to the reorderings retained by the phrase-based and hierarchical models. The data is shown relative to the length of the total source width of the reordering.\nEven taking blocked off phrases into account, however, the number of reorderings in the hierarchical output is still low, especially for the medium and long distance reorderings, as compared to the reference sentences. The hierarchical model’s reordering behaviour is very different to human reordering. Even if human translations are freer and contain more reordering than is strictly necessary, many important reorderings are surely being lost.\nTargeted Automatic Evaluation\nComparing distributions of reorderings is interesting, but it cannot approach the question of how many reorderings the system performed correctly. In this section we identify individual reorderings in the source and reference sentences and detect whether or not they have been reproduced in the translation.\nEach reordering in the original test set is extracted. Then the source-translation alignment is inspected to determine whether the blocks involved in the original reorderings are in the reverse order in the translation. If so, we say that these reorderings have been retained from the reference to the translation.\nIf a reordering has been translated by one phrase pair, we assume that the reordering has been retained, because the reordering could exist inside the phrase. If the segmentation is slightly different, but a reordering of the correct size occurred at the right place, it is also considered to be retained.\nFigure 11 shows that the hierarchical model retains more reorderings of all widths than the phrase-based system. Both systems retain few reorderings, with the phrase-based model missing almost all the medium distance reorderings, and both models failing on all the long distance reorderings. This is possibly the most direct evidence of reordering performance so far, and again shows how Hiero has a slight advantage over the phrase-based system with regard to reordering performance.\nTable 2. Correlation between retaining a reordering and it being correct, for humans and for the system.\n               Correct  Incorrect  NA\nRetained          61        4      10\nNot Retained      32       31      12\nTargeted Manual Analysis\nThe relationship between targeted evaluation and the correct reordering of the translation still needs to be established. The translation system can compensate for not retaining a reordering by using different lexical items. To judge the relevance of the targeted evaluation we need to perform a manual evaluation.
We present evaluators with the\nreference and the translation sentences. We mark\nthe target ranges of the blocks that are involved\nin the particular reordering we are analysing, and\nask the evaluator if the reordering in the translation\nis correct, incorrect or not applicable. The not ap-\nplicable case is chosen when the translated words\nare so different from the reference that their order-\ning is irrelevant. There were three evaluators who\neach judged 25 CH-EN reorderings which were re-\ntained and 25 CH-EN reorderings which were not\nretained by the Moses translation model.\nThe results in Table 2 show that the retained\nreorderings are generally judged to be correct. If\nthe reordering is not retained, then the evaluators\ndivided their judgements evenly between the re-\nordering being correct or incorrect. It seems that\nthe fact that a reordering is not retained does in-\ndicate that its ordering is more likely to be incor-\nrect. We used Fleiss’ Kappa to measure the cor-\nrelation between annotators. It expresses the ex-\ntent to which the amount of agreement between\nraters is greater than what would be expected if\nall raters made their judgements randomly. In this\ncase Fleiss’ kappa is 0.357 which is considered to\nbe a fair correlation.\n6 Conclusion\nIn this paper we have introduced a general and\nextensible automatic method for the quantitative\nanalyse of syntactic reordering phenomena in par-\nallel corpora.\nWe have applied our method to a systematic\nanalysis of reordering both in the training corpus,\nand in the output, of two state-of-the-art transla-\ntion models. We show that the hierarchical modelperforms better than the phrase-based model in sit-\nuations where there are many medium distance re-\norderings. In addition, we find that the choice of\ntranslation model must be guided by the type of re-\norderings in the language pair, as the phrase-based\nmodel outperforms the hierarchical model when\nthere is a predominance of short distance reorder-\nings. However, neither model is able to capture the\nreordering behaviour of the reference corpora ad-\nequately. These result indicate that there is still\nmuch research to be done if statistical machine\ntranslation systems are to capture the full range of\nreordering phenomena present in translation.\nReferences\nAlexandra Birch, Miles Osborne, and Philipp Koehn. 2008.\nPredicting success in machine translation. In Proceedings\nof the Empirical Methods in Natural Language Process-\ning.\nChris Callison-Burch, Miles Osborne, and Philipp Koehn.\n2006. Re-evaluating the role of Bleu in machine trans-\nlation research. In Proceedings of the European Chapter\nof the Association for Computational Linguistics , Trento,\nItaly.\nDavid Chiang, Adam Lopez, Nitin Madnani, Christof Monz,\nPhilip Resnik, and Michael Subotin. 2005. The Hiero\nmachine translation system: Extensions, evaluation, and\nanalysis. In Proceedings of the Human Language Tech-\nnology Conference and Conference on Empirical Methods\nin Natural Language Processing , pages 779–786, Vancou-\nver, Canada.\nDavid Chiang. 2005. A hierarchical phrase-based model for\nstatistical machine translation. In Proceedings of the As-\nsociation for Computational Linguistics , pages 263–270,\nAnn Arbor, Michigan.\nDavid Chiang. 2007. Hierarchical phrase-based translation.\nComputational Linguistics (to appear) , 33(2).\nHeidi J. Fox. 2002. Phrasal cohesion and statistical machine\ntranslation. 
In Proceedings of the Conference on Empiri-\ncal Methods in Natural Language Processing , pages 304–\n311, Philadelphia, USA.\nPhilipp Koehn, Franz Och, and Daniel Marcu. 2003. Sta-\ntistical phrase-based translation. In Proceedings of the\nHuman Language Technology and North American Asso-\nciation for Computational Linguistics Conference , pages\n127–133, Edmonton, Canada. Association for Computa-\ntional Linguistics.\nPhilipp Koehn, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran, Richard\nZens, Chris Dyer, Ondrej Bojar, Alexandra Constantin,\nand Evan Herbst. 2007. Moses: Open source toolkit\nfor statistical machine translation. In Proceedings of\nthe Association for Computational Linguistics Companion\nDemo and Poster Sessions , pages 177–180, Prague, Czech\nRepublic. Association for Computational Linguistics.204\nPhilipp Koehn. 2004. Statistical significance tests for ma-\nchine translation evaluation. In Dekang Lin and Dekai\nWu, editors, Proceedings of EMNLP 2004 , pages 388–\n395, Barcelona, Spain, July. Association for Computa-\ntional Linguistics.\nDaniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin\nKnight. 2006. SPMT: Statistical machine translation with\nsyntactified target language phrases. In Proceedings of the\nConference on Empirical Methods in Natural Language\nProcessing , pages 44–52, Sydney, Australia.\nFranz Josef Och and Hermann Ney. 2004. The alignment\ntemplate approach to statistical machine translation. Com-\nputational Linguistics , 30(4):417–450.\nFranz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop\nSarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Li-\nbin Shen, David Smith, Katherine Eng, Viren Jain, Zhen\nJin, and Dragomir Radev. 2004. A smorgasbord of fea-\ntures for statistical machine translation. In Proceedings of\nHuman Language Technology Conference and Conference\non Empirical Methods in Natural Language Processing ,\npages 161–168, Boston, USA. Association for Computa-\ntional Linguistics.\nBenjamin Wellington, Sonjia Waxmonsky, and I. Dan\nMelamed. 2006. Empirical lower bounds on the complex-\nity of translational equivalence. In Proceedings of the In-\nternational Conference on Computational Linguistics and\nof the Association for Computational Linguistics , pages\n977–984, Sydney, Australia.\nDekai Wu. 1997. Stochastic inversion transduction gram-\nmars and bilingual parsing of parallel corpora. Computa-\ntional Linguistics , 23(3):377–403.\nRichard Zens and Hermann Ney. 2003. A comparative study\non reordering constraints in statistical machine translation.\nInProceedings of the Association for Computational Lin-\nguistics , pages 144–151, Sapporo, Japan.\nAndreas Zollmann, Ashish Venugopal, Franz Och, and Jay\nPonte. 2008. A systematic comparison of phrase-based,\nhierarchical and syntax-augmented statistical mt. In Pro-\nceedings of International Conference On Computational\nLinguistics .205",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
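The reordering record above defines RQuantity as the sum of the target-side spans of the two blocks involved in each reordering, normalised by the target sentence length I. The sketch below is a hypothetical illustration of that formula only, not the authors' extraction code; representing each block by a (start, end) target-side span is an assumption.

```python
# Hedged sketch of the RQuantity measure; block spans are assumed to be (start, end), end-exclusive.
from typing import List, Tuple

# One reordering = the target-side spans of the two adjacent source blocks A and B that swap order.
Reordering = Tuple[Tuple[int, int], Tuple[int, int]]

def r_quantity(reorderings: List[Reordering], target_length: int) -> float:
    """RQuantity = sum over reorderings of (|A_t| + |B_t|), divided by the target length I."""
    total = 0
    for (a_start, a_end), (b_start, b_end) in reorderings:
        total += (a_end - a_start) + (b_end - b_start)
    return total / target_length

# Illustrative example: in a 6-word target sentence, a single reordering that swaps a
# 2-word block with a 3-word block gives RQuantity = (2 + 3) / 6.
print(r_quantity([((0, 2), (2, 5))], target_length=6))  # 0.8333...
```

A sentence with no reorderings has RQuantity 0, and larger values mean that more of the target sentence is covered by swapped blocks, which is how the record can report that the CH-EN corpus shows roughly three times the RQuantity of the AR-EN corpus.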
{
"id": "UEdbFwbaWS",
"year": null,
"venue": "I3E 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=UEdbFwbaWS",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An Empirical Study on the Adoption of Online Household e-waste Collection Services in China",
"authors": [
"Shang Gao",
"Jinjing Shi",
"Hong Guo",
"Jiawei Kuang",
"Yibing Xu"
],
"abstract": "Online household e-waste collection services are emerging as new solutions to disposing household e-waste in China. This study aims to investigate the adoption of online household e-waste collection services in China. Based on the previous technology diffusion theories (e.g., TAM, UTAUT), a research model with six research hypotheses was proposed in this research. The research model was empirically tested with a sample of 203 users of online household e-waste collection services in China. The results indicated that five of the six research hypotheses were significantly supported. And the most significant determinant for the behavioral intention to use online household e-waste service was effort expectancy. However, facilitating condition did not have significant impact on users’ behavior of using online household e-waste collection services.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "eeSTW0ZF4T",
"year": null,
"venue": "AIAI 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=eeSTW0ZF4T",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Banner Personalization for e-Commerce",
"authors": [
"Ioannis Maniadis",
"Konstantinos N. Vavliakis",
"Andreas L. Symeonidis"
],
"abstract": "Real-time website personalization is a concept that is being discussed for more than a decade, but has only recently been applied in practice, according to new marketing trends. These trends emphasize on delivering user-specific content based on behavior and preferences. In this context, banner recommendation in the form of personalized ads is an approach that has attracted a lot of attention. Nevertheless, banner recommendation in terms of e-commerce main page sliders and static banners is even today an underestimated problem, as traditionally only large e-commerce stores deal with it. In this paper we propose an integrated framework for banner personalization in e-commerce that can be applied in small-medium e-retailers. Our approach combines topic-models and a neural network, in order to recommend and optimally rank available banners of an e-commerce store to each user separately. We evaluated our framework against a dataset from an active e-commerce store and show that it outperforms other popular approaches.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "hB8qR2pmwH",
"year": null,
"venue": "ECIR 2004",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=hB8qR2pmwH",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Broadcast News Gisting Using Lexical Cohesion Analysis",
"authors": [
"Nicola Stokes",
"Eamonn Newman",
"Joe Carthy",
"Alan F. Smeaton"
],
"abstract": "In this paper we describe an extractive method of creating very short summaries or gists that capture the essence of a news story using a linguistic technique called lexical chaining. The recent interest in robust gisting and title generation techniques originates from a need to improve the indexing and browsing capabilities of interactive digital multimedia systems. More specifically these systems deal with streams of continuous data, like a news programme, that require further annotation before they can be presented to the user in a meaningful way. We automatically evaluate the performance of our lexical chaining-based gister with respect to four baseline extractive gisting methods on a collection of closed caption material taken from a series of news broadcasts. We also report results of a human-based evaluation of summary quality. Our results show that our novel lexical chaining approach to this problem outperforms standard extractive gisting methods.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SJ45c4-_-r",
"year": null,
"venue": "KDD 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=SJ45c4-_-r",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Mining e-commerce data: the good, the bad, and the ugly",
"authors": [
"Ron Kohavi"
],
"abstract": "Organizations conducting Electronic Commerce (e-commerce) can greatly benefit from the insight that data mining of transactional and clickstream data provides. Such insight helps not only to improve the electronic channel (e.g., a web site), but it is also a learning vehicle for the bigger organization conducting business at brick-and-mortar stores. The e-commerce site serves as an early alert system for emerging patterns and a laboratory for experimentation. For successful data mining, several ingredients are needed and e-commerce provides all the right ones (the Good). Web server logs, which are commonly used as the source of data for mining e-commerce data, were designed to debug web servers, and the data they provide is insufficient, requiring the use of heuristics to reconstruct events. Moreover, many events are never logged in web server logs, limiting the source of data for mining (the Bad). Many of the problems of dealing with web server log data can be resolved by properly architecting the e-commerce sites to generate data needed for mining. Even with a good architecture, however, there are challenging problems that remain hard to solve (the Ugly). Lessons and metrics based on mining real e-commerce data are presented.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "y9P43_4Sa6K",
"year": null,
"venue": "IEEE Pacific Rim Conference on Multimedia 2002",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=y9P43_4Sa6K",
"arxiv_id": null,
"doi": null
}
|
{
"title": "MORF: A Distributed Multimodal Information Filtering System",
"authors": [
"Yi-Leh Wu",
"Edward Y. Chang",
"Kwang-Ting Cheng",
"Chengwei Chang",
"Chen-Cha Hsu",
"Wei-Cheng Lai",
"Ching-Tung Wu"
],
"abstract": "The proliferation of objectionable information on the Internet has reached a level of serious concern. To empower end-users with the choice of blocking undesirable and offensive Web-sites, we propose a multimodal personalized information filter, named MORF. The design of MORF aims to meet three major performance goals: efficiency, accuracy, and personalization. To achieve these design goals, we have devised a multimodality classification algorithm and a personalization algorithm. Empirical study and initial statistics collected from the MORF filters deployed at sites in the U.S. and Asia show that MORF is both efficient and effective, compared to the traditional URL- and text-based filtering approaches.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "gngmf5pjb8",
"year": null,
"venue": "SC 2021",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=gngmf5pjb8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E.T.: re-thinking self-attention for transformer models on GPUs",
"authors": [
"Shiyang Chen",
"Shaoyi Huang",
"Santosh Pandey",
"Bingbing Li",
"Guang R. Gao",
"Long Zheng",
"Caiwen Ding",
"Hang Liu"
],
"abstract": "Transformer-based deep learning models have become a ubiquitous vehicle to drive a variety of Natural Language Processing (NLP) related tasks beyond their accuracy ceiling. However, these models also suffer from two pronounced challenges, that is, gigantic model size and prolonged turnaround time. To this end, we introduce E.T. that rE-thinks self-attention computation for Transformer models on GPUs with the following contributions: First, we introduce a novel self-attention architecture, which encompasses two tailored self-attention operators with corresponding sequence length-aware optimizations, and operation reordering optimizations. Second, we present an attention-aware pruning design which judiciously uses various pruning algorithms to reduce more computations hence achieves significantly shorter turnaround time. For the pruning algorithms, we not only revamp the existing pruning algorithms, but also tailor new ones for transformer models. Taken together, we evaluate E.T. across a variety of benchmarks for Transformer, BERTBASE and DistilBERT, where E.T. presents superior performance over the mainstream projects, including the popular Nvidia Enterprise solutions, i.e., TensorRT and FasterTransformer.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "L_EwuNivkSi",
"year": null,
"venue": "Internet Res. 2010",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=L_EwuNivkSi",
"arxiv_id": null,
"doi": null
}
|
{
"title": "BizSeeker: A hybrid semantic recommendation system for personalized government-to-business e-services",
"authors": [
"Jie Lu",
"Qusai Shambour",
"Yisi Xu",
"Qing Lin",
"Guangquan Zhang"
],
"abstract": "The purpose of this paper is to develop a hybrid semantic recommendation system to provide personalized government to business (G2B) e‐services, in particular, business partner recommendation e‐services for Australian small to medium enterprises (SMEs). The study first proposes a product semantic relevance model. It then develops a hybrid semantic recommendation approach which combines item‐based collaborative filtering (CF) similarity and item‐based semantic similarity techniques. This hybrid approach is implemented into an intelligent business‐partner‐locator recommendation‐system prototype called BizSeeker. The hybrid semantic recommendation approach can help overcome the limitations of existing recommendation techniques. The recommendation system prototype, BizSeeker, can recommend relevant business partners to individual business users (e.g. exporters), which therefore will reduce the time, cost and risk of businesses involved in entering local and international markets. The study would be of great value in e‐government personalization research. It would facilitate the transformation of the current G2B e‐services into a new stage wherein the e‐government agencies offer personalized e‐services to business users. The study would help government policy decision‐makers to increase the adoption of e‐government services. Providing personalized e‐services by e‐government can be seen as an evolution of the intentions‐based approach and will be one of the next directions of government e‐services. This paper develops a new recommender approach and systems to improve personalization of government e‐services.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "JAwIpZKHAA1",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=JAwIpZKHAA1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer E1on (Part 1/2)",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "EozjWt-Rbgb",
"year": null,
"venue": "Distributed Parallel Databases 2002",
"pdf_link": "https://link.springer.com/content/pdf/10.1023/A:1016503218569.pdf",
"forum_link": "https://openreview.net/forum?id=EozjWt-Rbgb",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Workflow View Based E-Contracts in a Cross-Organizational E-Services Environment",
"authors": [
"Dickson K. W. Chiu",
"Kamalakar Karlapalem",
"Qing Li",
"Eleanna Kafeza"
],
"abstract": "In an e-service environment, workflow involves not only a single organization but also a number of business partners. Therefore, workflow inter-operability in such an environment is an important issue for enacting workflows. In this article, we introduce our approach of using workflow views as a fundamental support for E-service workflow inter-operability and for controlled visibility of (sub-)workflows by external parties. We discuss various aspects of a workflow view, and their semantics with example usage. Furthermore, we develop a contract model based on workflow views and demonstrate how management of e-contracts can be facilitated, with an Internet start-up E-service inter-organization workflow example.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "NzCCSfaO4Xc",
"year": null,
"venue": "Inf. Syst. E Bus. Manag. 2008",
"pdf_link": "https://link.springer.com/content/pdf/10.1007/s10257-007-0056-y.pdf",
"forum_link": "https://openreview.net/forum?id=NzCCSfaO4Xc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evaluating transaction trust and risk levels in peer-to-peer e-commerce environments",
"authors": [
"Yan Wang",
"Duncan S. Wong",
"Kwei-Jay Lin",
"Vijay Varadharajan"
],
"abstract": "Trust evaluation is critical to peer-to-peer (P2P) e-commerce environments. Traditionally the evaluation process is based on other peers' recommendations neglecting transaction amounts. This may lead to the bias in transaction trust evaluation and risk the new transaction. The weakness may be exploited by dishonest sellers to obtain good transaction reputation by selling cheap goods and then cheat buyers by selling expensive goods. In this paper we present a novel model for transaction trust evaluation, which differentiates transaction amounts when computing trust values. The trust evaluation is dependent on transaction history, the amounts of old transactions, and the amount of the new transaction. Therefore, the trust value can be taken as the risk indication of the forthcoming transaction and is valuable for the decision-making of buyers.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "VjdwM8MzvU",
"year": null,
"venue": "MGC@Middleware 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=VjdwM8MzvU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Semantic middleware for e-science knowledge spaces",
"authors": [
"Joe Futrelle",
"Jeff Gaynor",
"Joel Plutchak",
"James D. Myers",
"Robert E. McGrath",
"Peter Bajcsy",
"Jason Kastner",
"Kailash Kotwani",
"Jong Sung Lee",
"Luigi Marini",
"Rob Kooper",
"Terry McLaren",
"Yong Liu"
],
"abstract": "The Tupelo semantic content management middleware implements Knowledge Spaces that enable scientists to locate, use, link, annotate, and discuss data and metadata as they work with existing applications in distributed environments. Tupelo is built using a combination of commonly-used Semantic Web technologies for metadata management, content management technologies for data management, and workflow technologies for management of computation, and can interoperate with other tools using a variety of standard interfaces and a client and desktop API. Tupelo's primary function is to facilitate interoperability, providing a Knowledge Space \"view\" of distributed, heterogeneous resources such as institutional repositories, relational databases, and semantic web stores. Knowledge Spaces have driven recent work creating e-Science cyberenvironments to serve distributed, active scientific communities. Tupelo-based components deployed in desktop applications, on portals, and in AJAX applications interoperate to allow researchers to develop, coordinate and share datasets, documents, and computational models, while preserving process documentation and other contextual information needed to produce a complete and coherent research record suitable for distribution and archiving.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ARQ3_Y9Mis",
"year": null,
"venue": null,
"pdf_link": "https://arxiv.org/abs/2106.02318",
"forum_link": "https://openreview.net/forum?id=ARQ3_Y9Mis",
"arxiv_id": null,
"doi": null
}
|
{
"title": "AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding",
"authors": [
"Jun Yan",
"Nasser Zalmout",
"Yan Liang",
"Christan Earl Grant",
"Xiang Ren",
"Xin Luna Dong"
],
"abstract": "Automatic extraction of product attribute values is an important enabling technology in e-Commerce platforms. This task is usually modeled using sequence labeling architectures, with several extensions to handle multi-attribute extraction. One line of previous work constructs attribute-specific models, through separate decoders or entirely separate models. However, this approach constrains knowledge sharing across different attributes. Other contributions use a single multi-attribute model, with different techniques to embed attribute information. But sharing the entire network parameters across all attributes can limit the model's capacity to capture attribute-specific characteristics. In this paper we present AdaTag, which uses adaptive decoding to handle extraction. We parameterize the decoder with pretrained attribute embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module. This allows for separate, but semantically correlated, decoders to be generated on the fly for different attributes. This approach facilitates knowledge sharing, while maintaining the specificity of each attribute. Our experiments on a real-world e-Commerce dataset show marked improvements over previous methods.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "eM7dqOCjazN",
"year": null,
"venue": "Linguamática 2010",
"pdf_link": "https://www.linguamatica.com/index.php/linguamatica/article/download/56/87",
"forum_link": "https://openreview.net/forum?id=eM7dqOCjazN",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Análise Morfossintáctica para Português Europeu e Galego: Problemas, Soluções e Avaliação",
"authors": [
"Marcos García",
"Pablo Gamallo"
],
"abstract": "The different tasks of morphosyntactic analysis are very important for later levels of natural language processing. These processes must therefore be carried out with tools that guarantee good performance in terms of coverage, precision and robustness of the analysis. FreeLing is a GPL-licensed suite developed by the TALP Group at the Universitat Politècnica de Catalunya. This software contains, among others, modules for tokenization, sentence segmentation, named entity recognition and morphosyntactic (PoS) tagging. In order to obtain tools that serve as a basis for syntactic analysis, as well as to make free software available for the shallow processing of European Portuguese and Galician, we adapted FreeLing to these varieties. The former was developed with the help of linguistic resources available online, while the Galician files were based on the previous FreeLing version (created by the Seminario de Lingüística Informática of the Universidade de Vigo), which already handled the analysis of this language. This paper describes the main aspects of the development of the tools, with emphasis on the problems encountered and the solutions adopted in each case. In addition, evaluation results for the PoS-tagger module are presented.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rQrykxKCgtI",
"year": null,
"venue": "AVI 2004",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=rQrykxKCgtI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Usability of E-learning tools",
"authors": [
"Carmelo Ardito",
"Maria De Marsico",
"Rosa Lanzilotti",
"Stefano Levialdi",
"Teresa Roselli",
"Veronica Rossano",
"Manuela Tersigni"
],
"abstract": "The new challenge for designers and HCI researchers is to develop software tools for effective e-learning. Learner-Centered Design (LCD) provides guidelines to make new learning domains accessible in an educationally productive manner. A number of new issues have been raised because of the new \"vehicle\" for education. Effective e-learning systems should include sophisticated and advanced functions, yet their interface should hide their complexity, providing an easy and flexible interaction suited to catch students' interest. In particular, personalization and integration of learning paths and communication media should be provided.It is first necessary to dwell upon the difference between attributes for platforms (containers) and for educational modules provided by a platform (contents). In both cases, it is hard to go deeply into pedagogical issues of the provided knowledge content. This work is a first step towards identifying specific usability attributes for e-learning systems, capturing the peculiar features of this kind of applications. We report about a preliminary users study involving a group of e-students, observed during their interaction with an e-learning system in a real situation. We then propose to adapt to the e-learning domain the so called SUE (Systematic Usability Evaluation) inspection, providing evaluation patterns able to drive inspectors' activities in the evaluation of an e-learning tool.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2AUeZ3UuXfj",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=2AUeZ3UuXfj",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "UvWA1_eKGYD4",
"year": null,
"venue": "WWW 2020",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=UvWA1_eKGYD4",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Keywords Generation Improves E-Commerce Session-based Recommendation",
"authors": [
"Yuanxing Liu",
"Zhaochun Ren",
"Wei-Nan Zhang",
"Wanxiang Che",
"Ting Liu",
"Dawei Yin"
],
"abstract": "By exploring fine-grained user behaviors, session-based recommendation predicts a user’s next action from short-term behavior sessions. Most of previous work learns about a user’s implicit behavior by merely taking the last click action as the supervision signal. However, in e-commerce scenarios, large-scale products with elusive click behaviors make such task challenging because of the low inclusiveness problem, i.e., many relevant products that satisfy the user’s shopping intention are neglected by recommenders. Since similar products with different IDs may share the same intention, we argue that the textual information (e.g., keywords of product titles) from sessions can be used as additional supervision signals to tackle above problem through learning more shared intention within similar products. Therefore, to improve the performance of e-commerce session-based recommendation, we explicitly infer the user’s intention by generating keywords entirely from the click sequence in the current session. In this paper, we propose the e-commerce session-based recommendation model with keywords generation (abbreviated as ESRM-KG) to integrate keywords generation into e-commerce session-based recommendation. Specifically, the ESRM-KG model firstly encodes an input action sequence into a high dimensional representation; then it presents a bi-linear decoding scheme to predict the next action in the current session; synchronously, the ESRM-KG model addresses incepts the high dimensional representation of its encoder to generate explainable keywords for the whole session. We carried out extensive experiments in the context of click prediction on a large-scale real-world e-commerce dataset. Our experimental results show that the ESRM-KG model outperforms state-of-the-art baselines with the help of keywords generation.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "R9bXI3RrPx",
"year": null,
"venue": "WWW 2020",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=R9bXI3RrPx",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Keywords Generation Improves E-Commerce Session-based Recommendation",
"authors": [
"Yuanxing Liu",
"Zhaochun Ren",
"Wei-Nan Zhang",
"Wanxiang Che",
"Ting Liu",
"Dawei Yin"
],
"abstract": "By exploring fine-grained user behaviors, session-based recommendation predicts a user’s next action from short-term behavior sessions. Most of previous work learns about a user’s implicit behavior by merely taking the last click action as the supervision signal. However, in e-commerce scenarios, large-scale products with elusive click behaviors make such task challenging because of the low inclusiveness problem, i.e., many relevant products that satisfy the user’s shopping intention are neglected by recommenders. Since similar products with different IDs may share the same intention, we argue that the textual information (e.g., keywords of product titles) from sessions can be used as additional supervision signals to tackle above problem through learning more shared intention within similar products. Therefore, to improve the performance of e-commerce session-based recommendation, we explicitly infer the user’s intention by generating keywords entirely from the click sequence in the current session. In this paper, we propose the e-commerce session-based recommendation model with keywords generation (abbreviated as ESRM-KG) to integrate keywords generation into e-commerce session-based recommendation. Specifically, the ESRM-KG model firstly encodes an input action sequence into a high dimensional representation; then it presents a bi-linear decoding scheme to predict the next action in the current session; synchronously, the ESRM-KG model addresses incepts the high dimensional representation of its encoder to generate explainable keywords for the whole session. We carried out extensive experiments in the context of click prediction on a large-scale real-world e-commerce dataset. Our experimental results show that the ESRM-KG model outperforms state-of-the-art baselines with the help of keywords generation.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "RsYG2DAsc1",
"year": null,
"venue": "UAI 2023",
"pdf_link": "/pdf/394e230ed5f6a12fa961a3064915ffda23b6e9ef.pdf",
"forum_link": "https://openreview.net/forum?id=RsYG2DAsc1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "\"Private Prediction Strikes Back!\" Private Kernelized Nearest Neighbors with Individual R\\'{e}nyi Filter",
"authors": [
"Yuqing Zhu",
"Xuandong Zhao",
"Chuan Guo",
"Yu-Xiang Wang"
],
"abstract": "Most existing approaches of differentially private (DP) machine learning focus on private training. Despite its many advantages, private training lacks the flexibility in adapting to incremental changes to the training dataset such as deletion requests from exercising GDPR’s right to be forgotten. We revisit a long-forgotten alternative, known as private prediction, and propose a new algorithm named Individual Kernelized Nearest Neighbor (Ind-KNN). Ind-KNN is easily updatable over dataset changes and it allows precise control of the R\\'{e}nyi DP at an individual user level --- a user's privacy loss is measured by the exact amount of her contribution to predictions; and a user is removed if her prescribed privacy budget runs out. Our results show that Ind-KNN consistently improves the accuracy over existing private prediction methods for a wide range of $\\epsilon$ on four vision and language tasks. We also illustrate several cases under which Ind-KNN is preferable over private training with NoisySGD. ",
"keywords": [
"private prediction",
"differential privacy",
"kNN"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "iebkTH1sBYb",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=iebkTH1sBYb",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "gFbuN_vZ1RB",
"year": null,
"venue": "ICETC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3369255.3369287",
"forum_link": "https://openreview.net/forum?id=gFbuN_vZ1RB",
"arxiv_id": null,
"doi": null
}
|
{
"title": "ALDO: An Innovative Digital Framework for Active E-Learning",
"authors": [
"Ilaria Bartolini",
"Andrea Di Luzio"
],
"abstract": "In this paper, we propose ALDO (Active e-Learning by DOing), a novel, advanced digital framework supporting integrated facilities for effective, active e-Learning. The ALDO framework includes an active repository for collecting/sharing relevant materials, collaborative editing services for enriching so collected \"raw\" materials, and advanced data visualization tools (e.g., interactive maps, graphs, and timelines) to explore the spatial and temporal dimension of specific data contexts. Although the present research was carried out within the European Horizon 2020 Project DETECt (Detecting Transcultural Identity in European Popular Crime Narratives), focusing on the specific data context of European crime narrative, the generality of ALDO technological framework makes it suitable for any type of study/teaching activity. More in details, ALDO consists of a multi-functional digital infrastructure (back-end) for the integration of collaborative editing and e-Learning activities in formal and informal educational contexts. The platform supports effective services for collecting, sharing, retrieving, and analyzing data, together with advanced online collaboration tools, an e-Learning platform and advanced data visualization tools, all made available to teachers/students through a dedicated Web portal (front-end). The design and creation of above tools and services for teaching, together with their uses, are presented and discussed through a series of real examples taken from DETECt.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "7_t4Gvubkeo",
"year": null,
"venue": "NeurIPS 2021 Poster",
"pdf_link": "/pdf/a364531d0f7337bd583f842b635a9ac3d328924a.pdf",
"forum_link": "https://openreview.net/forum?id=7_t4Gvubkeo",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Cardinality constrained submodular maximization for random streams",
"authors": [
"Paul Liu",
"Aviad Rubinstein",
"Jan Vondrak",
"Junyao Zhao"
],
"abstract": "We consider the problem of maximizing submodular functions in single-pass streaming and secretaries-with-shortlists models, both with random arrival order.\nFor cardinality constrained monotone functions, Agrawal, Shadravan, and Stein~\\cite{SMC19} gave a single-pass $(1-1/e-\\varepsilon)$-approximation algorithm using only linear memory, but their exponential dependence on $\\varepsilon$ makes it impractical even for $\\varepsilon=0.1$.\nWe simplify both the algorithm and the analysis, obtaining an exponential improvement in the $\\varepsilon$-dependence (in particular, $O(k/\\varepsilon)$ memory).\nExtending these techniques, we also give a simple $(1/e-\\varepsilon)$-approximation for non-monotone functions in $O(k/\\varepsilon)$ memory. For the monotone case, we also give a corresponding unconditional hardness barrier of $1-1/e+\\varepsilon$ for single-pass algorithms in randomly ordered streams, even assuming unlimited computation. \n\nFinally, we show that the algorithms are simple to implement and work well on real world datasets.",
"keywords": [
"submodular",
"streaming model",
"optimization",
"cardinality constraint",
"approximation algorithms"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2hO9MyHLzbv",
"year": null,
"venue": "World Wide Web 2019",
"pdf_link": "https://link.springer.com/content/pdf/10.1007/s11280-019-00702-z.pdf",
"forum_link": "https://openreview.net/forum?id=2hO9MyHLzbv",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Mining latent patterns in geoMobile data via EPIC",
"authors": [
"Arvind Narayanan",
"Saurabh Verma",
"Zhi-Li Zhang"
],
"abstract": "We coin the term geoMobile data to emphasize datasets that exhibit geo-spatial features reflective of human behaviors. We propose and develop an EPIC framework to mine latent patterns from geoMobile data and provide meaningful interpretations: we first ‘E’xtract latent features from high dimensional geoMobile datasets via Laplacian Eigenmaps and perform clustering in this latent feature space; we then use a state-of-the-art visualization technique to ‘P’roject these latent features into 2D space; and finally we obtain meaningful ‘I’nterpretations by ‘C’ulling cluster-specific significant feature-set. We illustrate that the local space contraction property of our approach is most superior than other major dimension reduction techniques. Using diverse real-world geoMobile datasets, we show the efficacy of our framework via three case studies.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |